One of my students is writing a paper on the rise of baby monitors and how technology has changed what and how we monitor. In the 80s, the baby monitor was essentially a walkie-talkie stuck in the “on” position; about all you could do was listen to breathing in another room. Today’s monitors link to your smartphone via Bluetooth and offer features such as night vision, motion detection, cloud storage, and pulse oximetry.
I started thinking about the point in a child’s life at which a parent might stop monitoring. Most day care centers now allow remote login so parents can watch what their toddlers are up to, and once kids are older and have a smartphone (for emergencies, of course), parents also have the ability to track their location. According to a 2016 study by the Pew Research Center, “[…] parents today report taking a number of steps to influence their child’s digital behavior, from checking up on what their teen is posting on social media to limiting the amount of time their child spends in front of various screens.”
Having raised kids in the digital age, I find that this makes perfect sense. There are lots of dark alleys in the digital realm that can be detrimental to young eyes. Of course, when our kids went off to college, it made some sense to believe that becoming an adult meant being responsible for your own behavior. Needless to say, there are a lot of painful lessons on that journey.
Some say that the world is becoming an increasingly dangerous place. Should monitoring be something we become accustomed to, even for ourselves, all the time? What types of technologies might we accept to enable this? When should it stop? When do we need to know, and about whom?
There is no such thing as a future-proof anything, of course, so I use the term to refer to evidence that a current idea is becoming more and more probable as something we will see in the future. The evidence I am talking about surfaced in a FastCo article this week about biohacking and the new frontier of digital implants. Biohacking has a loose definition and can refer to anything from the use of genetic material without regard to ethical procedures, to DIY biology, to pseudo-bioluminescent tattoos, to body modification for functional enhancement (see transhumanism). Last year, my students investigated this area and determined that a society willing to accept internal implants was not a near-future scenario. Nevertheless, FastCo author Steven Melendez cites
“a survey released by Visa last year that found that 25% of Australians are ‘at least slightly interested’ in paying for purchases through a chip implanted in their bodies.”
Melendez goes on to describe a wide variety of implants already in use for medical, artistic, and personal-efficiency purposes, and he interviews Tim Shank, president of a futurist group called TwinCities+. Shank says,
“[For] people with Android phones, I can just tap their phone with my hand, right over the chip, and it will send that information to their phone.”
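For the technically curious: chips like Shank’s are, at bottom, ordinary NFC tags, the same short-range radio standard used in tap-to-pay cards, which is why an unmodified Android phone can read them. The article includes no code, so what follows is purely my own illustrative sketch of the receiving side, assuming the implant presents itself as a standard NDEF tag; the activity name is hypothetical.

```kotlin
// Minimal sketch of reading an implanted NFC chip on Android, assuming the
// chip is a standard NDEF tag (as most hobbyist implants are). Requires
// <uses-permission android:name="android.permission.NFC" /> in the manifest.
import android.app.Activity
import android.nfc.NfcAdapter
import android.nfc.Tag
import android.nfc.tech.Ndef
import android.os.Bundle
import android.util.Log

class ChipReaderActivity : Activity(), NfcAdapter.ReaderCallback {

    private var nfcAdapter: NfcAdapter? = null

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        nfcAdapter = NfcAdapter.getDefaultAdapter(this) // null if the phone lacks NFC
    }

    override fun onResume() {
        super.onResume()
        // Listen for NFC-A tags, the family most implantable chips use.
        nfcAdapter?.enableReaderMode(
            this, this,
            NfcAdapter.FLAG_READER_NFC_A or NfcAdapter.FLAG_READER_SKIP_NDEF_CHECK,
            null
        )
    }

    override fun onPause() {
        super.onPause()
        nfcAdapter?.disableReaderMode(this)
    }

    // Called when a tag, or an implanted chip, touches the back of the phone.
    override fun onTagDiscovered(tag: Tag) {
        val ndef = Ndef.get(tag) ?: return
        try {
            ndef.connect()
            val message = ndef.ndefMessage ?: return
            for (record in message.records) {
                // Each record might carry a URL, a contact card, etc.
                Log.d("ChipReader", String(record.payload))
            }
        } finally {
            ndef.close()
        }
    }
}
```

The point of the sketch is that nothing on the phone side distinguishes an implanted chip from a key fob or a sticker; the radio protocol is identical, which is part of why the technology can spread so quietly.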
The popularity of body piercings and tattoos, also once considered invasive procedures, has skyrocketed. Implantable technology, especially as it becomes more functionally relevant, could follow a similar curve.
I saw this coming some years ago when writing The Lightstream Chronicles. The story, as many of you know, takes place in the far future, where implantable technology is mundane and part of everyday life. People regulate their body chemistry, access the Lightstream (the evolved Internet), and make “calls” using fingertips embedded with Luminous Implants. These future implants talk directly to implants in the brain and other systemic body centers to make adjustments or provide information.