
The robo-apocalypse. Part 1.

Talk of robot takeovers is all the rage right now.

I’m good with this, because the evidence is out there that robots will continue to get smarter and smarter, while the human condition, being what it is, guarantees we will continue to do stupid s**t. Here are some examples from the news this week.

1. The BBC reported this week that South Korea has deployed something called the Super aEgis II, a .50-caliber robotic machine gun that knows who is an enemy and who isn’t. At least that’s the plan. The company that built and sells the Super aEgis is DoDAAM. Maybe that is short for do damage. The BBC astutely notes,

“Science fiction writer Isaac Asimov’s First Law of Robotics, that ‘a robot may not injure a human being or, through inaction, allow a human being to come to harm’, looks like it will soon be broken.”

Asimov was more than a great science-fiction writer, he was a Class A futurist. He clearly saw the potential for us to create robots that were smarter and more powerful than we are. He figured there should be some rules. Asimov used the kind of foresight that responsible scientists, technologists and designers should be using for everything we create. As the article continues, Simon Parkin of the BBC quotes Yangchan Song, DoDAAM’s managing director of strategy planning.

“Automated weapons will be the future. We were right. The evolution has been quick. We’ve already moved from remote control combat devices, to what we are approaching now: smart devices that are able to make their own decisions.”

Or in the words of songwriter Donald Fagen,

“A just machine to make big decisions
Programmed by fellows with compassion and vision…”1

Relax. The world is full of these fellows. Right now the weapon/robot is linked to a human who gives the OK to fire, and all customers who purchased the 30 units thus far have opted for the human/robot interface. But the company admits,

“If someone came to us wanting a turret that did not have the current safeguards we would, of course, advise them otherwise, and highlight the potential issues,” says Park. “But they will ultimately decide what they want. And we develop to customer specification.”

A 50 caliber round. Heavy damage.

They are currently working on the technology that will help their machine make the right decision on its own, but the article cites several academics and researchers who see red flags waving. Most concur that teaching a robot right from wrong is no easy task. The complexity is compounded because the fellows who are doing the programming don’t always agree on these issues.

Last week I wrote about Google’s self-driving car. Of course, this robot has to make tough decisions too. It may one day have to decide whether to hit the suddenly appearing baby carriage, the kid on the bike, or just crash the vehicle. In fact, Parkin’s article brings Google into the picture as well, quoting Colin Allen,

“Google admits that one of the hardest problems for their programming is how an automated car should behave at a four-way stop sign…”

Humans don’t do such a good job at that either. And there is my problem with all of this. If the humans who are programming these machines are still wrestling with what is ethically right or wrong, can a robot be expected to do better? Some think so. Over at DoDAAM,

“Ultimately, we would probably like a machine with a very sound basis to be able to learn for itself, and maybe even exceed our abilities to reason morally.”

Based on what?

Next week: Drones with pistols.

 

1. Donald Fagen, “I.G.Y.,” from the album The Nightfly, 1982.