
Autonomous Assumptions

I’m writing about a recent post from futurist Amy Webb. Amy has been getting very political lately, which is a real turn-off for me, but she still has her ear to the rail of the future, so I will try to be more tolerant. Amy carried a paragraph from an article titled “If you want to trust a robot, look at how it makes decisions” from The Conversation, an eclectic “academic rigor, journalistic flair” blog site. The author, Michael Fisher, a Professor of Computer Science at the University of Liverpool, says,

“When we deal with another human, we can’t be sure what they will decide but we make assumptions based on what we think of them. We consider whether that person has lied to us in the past or has a record for making mistakes. But we can’t really be certain about any of our assumptions as the other person could still deceive us.

Our autonomous systems, on the other hand, are essentially controlled by software so if we can isolate the software that makes all the high-level decisions – those decisions that a human would have made – then we can analyse the detailed working of these programs. That’s not something you can or possibly ever could easily do with a human brain.”

Fisher thinks that might make autonomous systems more trustworthy than humans. He says that, through software analysis, we can be almost certain that the software controlling our systems will never make bad decisions.
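To make that claim concrete, here is a minimal sketch, in Python, of the kind of analysis Fisher describes: isolate the high-level decision logic, then exhaustively check every state in the model against a safety property. Everything in it is hypothetical and invented for illustration (the brake_decision function, the discretised state space, the crude stopping-distance rule); real verification relies on formal tools such as model checkers working over far richer models.

    from itertools import product

    SPEEDS = range(0, 31, 5)         # speed in metres/second, discretised
    DISTANCES = range(0, 101, 10)    # metres to the obstacle, discretised

    def brake_decision(speed, distance):
        """The isolated high-level decision logic under analysis."""
        # Brake whenever a crude stopping distance, speed^2 / 10,
        # meets or exceeds the remaining gap to the obstacle.
        return "brake" if speed * speed / 10 >= distance else "cruise"

    def is_bad_decision(speed, distance, action):
        """Safety property: never cruise when a collision is imminent."""
        return action == "cruise" and speed * speed / 10 >= distance

    # Enumerate every state in the model (feasible only because the model
    # is tiny) and confirm the decision logic never violates the property.
    violations = [(s, d) for s, d in product(SPEEDS, DISTANCES)
                  if is_bad_decision(s, d, brake_decision(s, d))]
    print("bad decisions found:", violations or "none")

If the list comes back empty, we have shown, for this toy model at least, that the controller never chooses an unsafe action. Of course, the guarantee extends only as far as the model does.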

There is a caveat.

“The environments in which such systems work are typically both complex and uncertain. So while accidents can still occur, we can at least be sure that the system always tries to avoid them… [and] we might well be able to prove that the robot never intentionally means to cause harm.”

That’s comforting. But OK, computers fly and land airplanes, they make big decisions about air traffic, they drive cars with people in them, and they control much of our power grid and our missile defense, too. So why should we worry? It is a matter of definitions. When we describe new technologies, we use terms that are clearly open to different interpretations. How do you define bad decisions? Fisher says,

“We are clearly moving on from technical questions towards philosophical and ethical questions about what behaviour we find acceptable and what ethical behaviour our robots should exhibit.”

If you have programmed an autonomous soldier to kill the enemy, is that ethical? Assuming that the Robocop can differentiate between good guys and bad guys, you have nevertheless opened the door to autonomous destruction. In the case of an autonomous soldier in the hands of a bad actor, you may be the enemy.

My point is this: it’s not just a question of whether we understand how the software works and whether it’s reliable; it may be more about who programmed the bot in the first place. In my graphic novel, The Lightstream Chronicles, there are no bad robots (I call them synths), but occasionally bad people get hold of the good synths and make them do bad things. They call that twisting. It’s illegal, but of course, that doesn’t stop it. Criminals do it all the time.

You see, even in the future some things never change. In the words of Aldous Huxley,

“Technological progress has merely provided us with more efficient means for going backwards.”

 


What does it mean to be human?

Earlier this week, just a couple of days after last week’s blog on robophobia, the MIT Technology Review (online) published an interview with AI futurist Martine Rothblatt. In a nutshell, Ms. Rothblatt believes that conscious machines are inevitable, that evolution is no longer a theory but reality, that treating virtual beings differently than humans is tantamount to black slavery in the 19th century, and that the FDA should monitor and approve whatever hardware or software “effectively creates human consciousness.” Her core premise is something I have covered in this blog before, and while I could spend the next few paragraphs debating some of these questionable assertions, it seems more interesting to ponder the fact that this discussion is going on at all.

I can find one point on which I agree with Rothblatt: that artificial consciousness is more or less inevitable. What the article underscores is the inevitability that “technology moves faster than politics, moves faster than policy, and often faster than ethics.”1 Scarier yet is the idea that the FDA (the people who approved bovine growth hormone) would be in charge of determining the effective states of consciousness.

All of this points to the fact that technology and science are on the cusp of a few hundred potentially life-changing breakthroughs, and there are days when, aside from Martine Rothblatt, no one seems to be paying attention. We need more minds and more disciplines in the discussion now so that, as Rothblatt says, we don’t “…spend hundreds of years trying to dig ourselves out.” It’s that, or this will be just another example of the folly of our shortsightedness.

1. Wood, David. “The Naked Future — A World That Anticipates Your Every Move.” YouTube. YouTube, 15 Dec. 2013. Web. 13 Mar. 2014.


Design fiction and roboethics: Are we ready to be god? The Lightstream Chronicles online graphic novel continues.

p53: Assaulted by a non-human?

There was a time when crimes were simpler. Humans committed crimes against other humans. Not so simple anymore: in 2159 you have the old-fashioned mano a mano, but you also have human against synthetic, and synthetic against human. There are creative variations as well.

No sooner had the first lifelike robots become commercially available in the late 2020s than there were issues of ethics and misuse. The problems escalated faster than the robotics industry had conceived possible: “problems inherent in the possible emergence of human function in the robot: like consciousness, free will, self consciousness, sense of dignity, emotions, and so on. Consequently, this is why we have not examined problems — debated in literature — like the need not to consider robot as our slaves, or the need to guarantee them the same respect, rights and dignity we owe to human workers.”1

In the 21st century, many of the concerns within the scientific community centered on what we as humans might do to infringe upon the “rights” of the robot. And though the earliest treatises in roboethics included more fundamental questions regarding the ethics of the robots’ designers, manufacturers, and users (now cast in the role of creator-god), they did not foresee how “unprepared” we were for that responsibility, or how quickly humans would pervert the robot for numerous “unethical” uses, including but not limited to modification for crime and perversion.

Nevertheless, more than 100 years later, when synthetic human production is at the highest levels in history, the questions of ethics in both humans and their creations remain a significant point of controversy. As the 2007 Roboethics Roadmap concluded, “It is absolutely clear that without a deep rooting of Roboethics in society, the premises for the implementation of an artificial ethics in the robots’ control systems will be missing.”

After these initial introductions of humanoid robots, now seen as almost comically primitive, the technology (and in turn the reasoning, emotions, personality, and realism) became progressively more sophisticated. Likewise, their implementations became more and more like the society that manufactured them. They became images of their creators, both benevolent and malevolent.

[Image: Humans building themselves. Better?]

A series of laws was enacted to prevent humanoid robots from being used with criminal intent, yet at the same time military interests were fully pursuing dispassionate, automated humanoid robots with the express intent of extermination. It was truly a time of paradoxical technologies. Further complicating the issue were ongoing debates on the nature of what was considered “criminal.” Could a robot become a criminal without human intervention? Is something criminal if it is consensual?

These issues ultimately evolved into complex social, economic, political, and legal entanglements that included heavy government regulation and oversight where such was achievable. As this complexity and infrastructure grew to accommodate the constantly expanding technology, the greatest promise and challenges came almost 100 years after those first humanoid robots, when virtual human brains were being grown in the lab. The heretofore readily identifiable differences between synthetic humans and real humans gradually began to disappear. The similarities were so shocking, and so undetectable, that new legislation was enacted to restrict the use of virtual humans. A classification system was instituted to ensure visible distinctions for the vast variety of social synthetics.

Still, the concerns of the very first Roboethics Roadmap were confirmed even 150 years into the future. Synthetics were still abused and used to perpetrate crimes. Their virtual humanness only added an element of complexity, reality, and, in some cases, horror to the creativity with which they could be used.

1. EURON Roboethics Roadmap.

 
