What does it mean to be human?

Earlier this week, just a couple of days after last week's blog on robophobia, the MIT Technology Review (online) published an interview with AI futurist Martine Rothblatt. In a nutshell, Ms. Rothblatt believes that conscious machines are inevitable, that evolution is no longer a theory but reality, that treating virtual beings differently from humans is tantamount to the Black slavery of the 19th century, and that the FDA should monitor and approve whatever hardware or software “effectively creates human consciousness.” Her core premise is something that I have covered in this blog before, and while I could spend the next few paragraphs debating some of these questionable assertions, it seems more interesting to ponder the fact that this discussion is going on at all.

I can find one point on which I agree with Rothblatt: that artificial consciousness is more or less inevitable. What the article underscores is the inevitability that “technology moves faster than politics, moves faster than policy, and often faster than ethics.”1 Scarier yet is the idea that the FDA (the people who approved bovine growth hormone) would be in charge of determining the effective states of consciousness.

All of this points to the fact that technology and science are on the cusp of a few hundred potentially life-changing breakthroughs, and there are days when, aside from Martine Rothblatt, no one seems to be paying attention. We need more minds and more disciplines in the discussion now so that, as Rothblatt says, we don't “…spend hundreds of years trying to dig ourselves out.” It's that, or this will be just another example of the folly of our shortsightedness.

1. Wood, David. “The Naked Future — A World That Anticipates Your Every Move.” YouTube. YouTube, 15 Dec. 2013. Web. 13 Mar. 2014.


Social discrimination—against robots. Is it possible?

As readers of the blog know, The Lightstream Chronicles is set in the year 2159. Watching the current state of technology, I find that date increasingly uncomfortable. As I have blogged previously, it is a date I chose primarily to justify the creation of a completely synthetic human brain capable of critical thinking, learning, logic, self-awareness, and the full range of emotions. The only missing link would be a soul. Yet the more I see the exponential rate of technological advancement, the more I think we will arrive at this point 50 to 60 years sooner than that. Well, at least I won't have to endure the critiques of how wrong I was.

As the story has shown, the level of artificial intelligence is, quite literally, with the exception of a soul, Almost Human (a term I coined at least two years before the television series of the same name). The social dilemma is whether we should treat them as human: given their human emotions and intelligence, are they entitled to the same rights as their human counterparts (who are themselves nearly synthetic)? Do we have the right to make them do what we would not ask a human to do? Do we have the right to turn them off when we are finished with them? I wrote more about this in a blog some 50 pages ago, regarding page 53 of Season 2.

Societally, though most have embraced the technology, convenience, and companionship that synthetic humans provide, there is a segment that is not as impressed. They cite the extensive use of synths for crime and perversion, and what many consider the disappearance of human-to-human contact. The pro-synthetic majority have branded them robophobes.

As the next series of episodes evolves, we will see a pithy discussion between the human Kristin Broulliard and the synthetic Keiji-T. In many respects, Keiji is the superior intellect, with capabilities and protocols that far exceed even the most enhanced humans. Indeed, there is an air of tension. Is she jealous? Does she feel threatened? Will she hold her own?


Will computers be able to read your mind? Uh, yes.

As we see on page 92 of The Lightstream Chronicles, synthetic human Keiji-T casts a sidelong glance at Detective Guren with a sort of “What's his problem?” look. But, in fact, there is really little question. This far into the future, what we know as the computer will have given way to ubiquitous computing: something embedded in the walls, the door handles, your coffee cup, and your bodysuit. In other words, everything will have some level of monitoring, transmission, or computing power built into the makeup of the device.

For example, the walls of your apartment are active surfaces; they can become visual representations of whatever you are thinking, of any transmissions you are receiving, or of constructs that you wish to create. Hence, if you want your office environment to be a courtyard in a small Tuscan village, the walls will comply, and fixtures, tables, or any other device can sustain the illusion. The data being transmitted to your mind will trigger sensations of air temperature and wind, olfactory cues (like olive trees), and sounds like children playing in the distance or music from an upstairs room across the street. When you pick up a stylus or touch an interface, you also become part of the network. Literally everything is part of the mesh.

Rewind to the present day. How could this happen, you may ask? Think again. In your pocket or on your desk is probably a smartphone. On that phone is stored metadata about everywhere you have been since you have owned it, courtesy of something called location services, which is probably switched ON for numerous apps. This data, when matched with the day and time, projects a pattern of activity: where you are on Tuesdays at 8:00 AM, whom you call on your way home from work, when you text, from where, and to whom.
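To make that concrete, here is a minimal sketch of how such an inference might work. The ping data and field names are hypothetical, and real analytics pipelines are far more sophisticated; the point is how little code it takes to turn timestamped location records into a routine.

from collections import Counter
from datetime import datetime

# Hypothetical location pings: (ISO timestamp, place label).
# A real phone logs thousands of these via location services.
pings = [
    ("2014-03-04T08:02:00", "coffee shop"),
    ("2014-03-11T08:05:00", "coffee shop"),
    ("2014-03-18T07:58:00", "office"),
    ("2014-03-25T08:01:00", "coffee shop"),
]

def likely_place(pings, weekday, hour):
    """Return the place most often seen at a given weekday and hour."""
    seen = Counter(
        place
        for stamp, place in pings
        if datetime.fromisoformat(stamp).weekday() == weekday
        and datetime.fromisoformat(stamp).hour == hour
    )
    return seen.most_common(1)[0][0] if seen else None

# Where is this person on Tuesdays (weekday 1) at 8:00 AM?
print(likely_place(pings, weekday=1, hour=8))  # -> coffee shop

Four data points and a frequency count are already a prediction. Multiply that by years of pings and every app on the phone, and the pattern gets very sharp indeed.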

When it comes to your preferences, your smartphone can tell what sites you visit (your interests), when you visit them (behavioral timing), and the intensity of your interest (time allotted). If you are interacting with others, their data overlaps with yours. If you are not actually interacting, your contact list is a perfect tool for cross-referencing. Now the data has tangents. Already we have enough information to predict where you are on Tuesdays and who you are likely to be with. If you have recently used your smartphone to debit a venti red-eye, we can infer that you are caffeinated. If you have purchased two, then your friend is likely caffeinated as well. And that just scratches the surface.
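The cross-referencing step is just as easy to sketch. Again, the data is hypothetical; this only illustrates how two people's location histories can be intersected to infer who is together, and when.

# Hypothetical: infer co-presence by intersecting two ping histories,
# bucketing timestamps to the hour via the "2014-03-04T08" prefix.
# Reuses the pings list from the sketch above.
def together(pings_a, pings_b):
    hours_a = {(stamp[:13], place) for stamp, place in pings_a}
    hours_b = {(stamp[:13], place) for stamp, place in pings_b}
    return hours_a & hours_b

friend_pings = [("2014-03-04T08:10:00", "coffee shop")]
print(together(pings, friend_pings))  # -> {("2014-03-04T08", "coffee shop")}

From there, a debit-card record at the same hour is all it takes to guess who bought the second red-eye.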

Fast-forward a hundred years or so, and this sort of technology will seem primitive. In an instant, a tiny chip embedded in our brains could analyze all the publicly available data on anyone we meet and make an assessment of their intentions.

So as Keiji-T gives Detective Guren the look, it's safe to say that Keiji knows exactly what Guren is thinking.


Synthetic emotions? Sounds like science fiction, but it’s not.

If you think the idea of feeling, emotive synthetic humans is pure science fiction fantasy, well, you’re wrong.

As we see on page 91 of The Lightstream Chronicles, Toei-N is in quite a lather about having met Chancellor Zhang in person. Not surprising; Zhang is probably the most famous, if not the most important, person in the world in 2159. As figurehead of the largest nation on the planet, she oversees the governance of billions of people. An emotional response is consistent with the moment, so I can see why someone might be just a bit nervous about meeting her, especially unexpectedly. But let's not forget that Toei-N is an N-Class synthetic, not human. Typical science fiction, you might think, but you might want to think again.

If it were purely the stuff of sci-fi, you might not see quite so many scholars with it on their Google Alerts. For example, there is the International Journal of Synthetic Emotions. Published semi-annually, the IJSE describes itself thus:

The International Journal of Synthetic Emotions (IJSE) covers the main issues relevant to the generation, expression, and use of synthetic emotions in agents, robots, systems, and devices. Providing unique, interdisciplinary research from across the globe, this journal covers a wide range of topics such as emotion recognition, sociable robotics, and emotion-based control systems useful to field practitioners, researchers, and academicians.

Tooling around Amazon, you could stumble upon the Handbook of Research on Synthetic Emotions and Sociable Robotics: New Applications in Affective Computing and Artificial Intelligence, by Jordi Vallverdu.

The technology that we often dismiss as science fiction is progressively becoming less so, and though it may not yet be developed to the extent that we see in The Lightstream Chronicles, it's fair to say that it is just a matter of time.

When futurist, inventor, and singularity forecaster Ray Kurzweil reviewed the Spike Jonze film Her, he placed the reasonable plausibility of the Samantha character at 2029, “when the leap to human level AI would be reasonably believable.” Of course, in the movie Samantha does not have a body the way Toei does, but Kurzweil says this is a minor detail: “The idea that AIs will not have bodies is a misconception. If she can have a voice, she can have a body.” Kurzweil is also a proponent of the idea that technology develops exponentially, not in any kind of linear fashion: “If human-level AI is feasible around 2029, it will, according to my law of accelerating returns, be roughly doubling in capability each year.”1

His theory is hard to argue with, and the smartphone is my perennial example. The Motorola Razr was introduced in 2003. Just eleven years later, the iPhone 6 is a thousand times more powerful, and if we buy the exponential theory, that should double again in just a couple of years. Have you seen the Apple Watch?
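A quick back-of-the-envelope check shows the two claims line up. The doubling model is Kurzweil's; the Razr-to-iPhone figures are mine, so treat the numbers as illustrative rather than precise.

# Law of accelerating returns: capability roughly doubles each year.
years = 11            # Razr (2003) to iPhone 6 (2014)
growth = 2 ** years   # 2048
print(growth)         # about 2,000x, the same order as "a thousand times"

Eleven annual doublings give a factor of about two thousand, which is the same order of magnitude as the thousandfold comparison above.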

 

The Motorola Razr: 700 bucks in 2003.
1. Kurzweil, Ray. Review of Her. http://www.kurzweilai.net/a-review-of-her-by-ray-kurzweil