Tag Archives: robo-ethics

Autonomous Assumptions

I’m writing about a recent post from futurist Amy Webb. Amy is getting very political lately, which is a real turn-off for me, but she still has her ear to the rail of the future, so I will try to be more tolerant. Amy carried a paragraph from an article entitled “If you want to trust a robot, look at how it makes decisions” from The Conversation, an eclectic “academic rigor, journalistic flair” blog site. The author, Michael Fisher, a Professor of Computer Science at the University of Liverpool, says,

“When we deal with another human, we can’t be sure what they will decide but we make assumptions based on what we think of them. We consider whether that person has lied to us in the past or has a record for making mistakes. But we can’t really be certain about any of our assumptions as the other person could still deceive us.

Our autonomous systems, on the other hand, are essentially controlled by software so if we can isolate the software that makes all the high-level decisions – those decisions that a human would have made – then we can analyse the detailed working of these programs. That’s not something you can or possibly ever could easily do with a human brain.”

Fisher thinks that might make autonomous systems more trustworthy than humans. He says that by analysing this software we can be almost certain it will never make a bad decision.
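To make that idea concrete, here is a minimal sketch of what “analysing the detailed working” of decision software can look like. This is my own toy illustration, not anything from Fisher’s article: the controller, its inputs, and the safety property are all invented for the example. Because the decision logic is a small, isolated program with a finite input space, we can check every possible input against the property, which is exactly the kind of analysis you cannot perform on a human brain.

```python
# Hypothetical example: exhaustively verify a toy collision-avoidance
# controller against a safety property. Nothing here comes from the
# article; it only illustrates the idea of analysing decision software.

from itertools import product

def decide(distance_m: int, closing_speed_mps: int) -> str:
    """Toy high-level decision logic for an autonomous vehicle."""
    if distance_m < 10:
        return "brake"                      # collision imminent
    if closing_speed_mps > 0 and distance_m / closing_speed_mps < 2.0:
        return "brake"                      # under 2 seconds to impact
    if closing_speed_mps > 0:
        return "slow"
    return "cruise"

def satisfies_safety_property(distance_m: int, action: str) -> bool:
    """Property: whenever a collision is imminent, the action is 'brake'."""
    imminent = distance_m < 10
    return (not imminent) or action == "brake"

# Check the entire (deliberately small) discrete input space.
for d, v in product(range(0, 101), range(0, 31)):
    assert satisfies_safety_property(d, decide(d, v)), f"violated at d={d}, v={v}"

print("Safety property holds for all 101 x 31 input states.")
```

Real systems have vastly larger, often continuous state spaces, so formal verification leans on model checkers and theorem provers rather than brute force. Which brings us to the caveat that follows.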

There is a caveat.

“The environments in which such systems work are typically both complex and uncertain. So while accidents can still occur, we can at least be sure that the system always tries to avoid them… [and] we might well be able to prove that the robot never intentionally means to cause harm.”

That’s comforting. But OK, computers fly and land airplanes, they make big decisions about air traffic, they drive cars with people in them, and they control much of our power grid and our missile defense, too. So why should we worry? It is a matter of definitions. When we describe new technologies, we use terms that are clearly open to different interpretations. How do you define bad decisions? Fisher says,

“We are clearly moving on from technical questions towards philosophical and ethical questions about what behaviour we find acceptable and what ethical behaviour our robots should exhibit.”

If you have programmed an autonomous soldier to kill the enemy, is that ethical? Assuming that the Robocop can differentiate between good guys and bad guys, you have nevertheless opened the door to autonomous destruction. In the case of an autonomous soldier in the hands of a bad actor, you may be the enemy.

My point is this. It’s not necessarily a question of whether we understand how the software works and whether it’s reliable; it may be more about who programmed the bot in the first place. In my graphic novel, The Lightstream Chronicles, there are no bad robots (I call them synths), but occasionally bad people get hold of the good synths and make them do bad things. They call that twisting. It’s illegal, but of course, that doesn’t stop it. Criminals do it all the time.

You see, even in the future some things never change. In the words of Aldous Huxley,

“Technological progress has merely provided us with more efficient means for going backwards.”

 


What does it mean to be human?

Earlier this week, just a couple of days after last week’s blog on robophobia, the MIT Technology Review (online) published an interview with AI futurist Martine Rothblatt. In a nutshell, Ms. Rothblatt believes that conscious machines are inevitable, that evolution is no longer a theory but a reality, that treating virtual beings differently from humans is tantamount to black slavery in the 19th century, and that the FDA should monitor and approve whatever hardware or software “effectively creates human consciousness.” Her core premise is something I have covered in this blog before, and while I could spend the next few paragraphs debating some of these questionable assertions, it seems more interesting to ponder the fact that this discussion is going on at all.

I can find one point on which I agree with Rothblatt: that artificial consciousness is more or less inevitable. What the article underscores is the inevitability that “technology moves faster than politics, moves faster than policy, and often faster than ethics.”1 Scarier yet is the idea that the FDA (the people who approved bovine growth hormone) would be in charge of determining the effective states of consciousness.

All of this points to the fact that technology and science are on the cusp of a few hundred potentially life-changing breakthroughs, and there are days when, aside from Martine Rothblatt, no one seems to be paying attention. We need more minds and more disciplines in the discussion now so that, as Rothblatt says, we don’t “…spend hundreds of years trying to dig ourselves out.” It’s that, or this will be just another example of the folly of our shortsightedness.

1. Wood, David. “The Naked Future — A World That Anticipates Your Every Move.” YouTube. YouTube, 15 Dec. 2013. Web. 13 Mar. 2014.


Social discrimination—against robots. Is it possible?

As you know if you follow the blog, The Lightstream Chronicles is set in the year 2159. Watching the current state of technology, I find that date increasingly uncomfortable. As I have blogged previously, it is a date I chose primarily to justify the creation of a completely synthetic human brain capable of critical thinking, learning, logic, self-awareness, and the full range of emotions. The only missing link would be a soul. Yet the more I see the exponential rate of technological advancement, the more I think we will probably arrive at this point 50 to 60 years sooner than that. Well, at least I won’t have to endure the critiques of how wrong I was.

As the story has shown, the level of artificial intelligence is quite literally, with the exception of a soul, Almost Human (a term I coined at least two years before the television series of the same name). The social dilemma is whether we should treat synths as human: given their human emotions and intelligence, are they entitled to the same rights as their human counterparts (who are themselves nearly synthetic)? Do we have the right to make them do what we would not ask a human to do? Do we have the right to turn them off when we are finished with them? I wrote more about this in a blog some 50 pages ago, regarding page 53 of Season 2.

Societally, though most have embraced the technology, convenience, and companionship that synthetic humans provide, there is a segment that is not as impressed. They cite the extensive use of synths for crime and perversion, and what many consider the disappearance of human-to-human contact. The pro-synthetic majority have branded them robophobes.

As the next series of episodes evolves, we will see a pithy discussion between the human Kristin Broulliard and the synthetic Keiji-T. In many respects, Keiji is the superior intellect, with capabilities and protocols that far exceed those of even the most enhanced humans. Indeed, there is an air of tension. Is she jealous? Does she feel threatened? Will she hold her own?
