
Autonomous Assumptions

I’m writing about a recent post from futurist Amy Webb. Amy is getting very political lately, which is a real turn-off for me, but she still has her ear to the rail of the future, so I will try to be more tolerant. Amy carried a paragraph from an article entitled “If you want to trust a robot, look at how it makes decisions” from The Conversation, an eclectic “academic rigor, journalistic flair” blog site. The author, Michael Fisher, a Professor of Computer Science at the University of Liverpool, says,

“When we deal with another human, we can’t be sure what they will decide but we make assumptions based on what we think of them. We consider whether that person has lied to us in the past or has a record for making mistakes. But we can’t really be certain about any of our assumptions as the other person could still deceive us.

Our autonomous systems, on the other hand, are essentially controlled by software so if we can isolate the software that makes all the high-level decisions – those decisions that a human would have made – then we can analyse the detailed working of these programs. That’s not something you can or possibly ever could easily do with a human brain.”

Fisher thinks that might make autonomous systems more trustworthy than humans. He says that by software analysis we can be almost certain that the software that controls our systems will never make bad decisions.
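Fisher’s claim can be made concrete with a toy sketch. If the high-level decision logic is isolated into a deterministic function over a finite set of situations, we can exhaustively check every possible input against a safety property. The decision table and property below are hypothetical, purely for illustration; real verification tools work on far richer models, but the principle is the same:

```python
from itertools import product

def decide(obstacle_ahead: bool, speed_kph: int) -> str:
    """Hypothetical high-level decision logic for an autonomous vehicle."""
    if obstacle_ahead and speed_kph > 0:
        return "brake"
    return "cruise"

def never_ignores_obstacle() -> bool:
    """Exhaustively verify a safety property: the system always brakes
    for an obstacle while moving -- a toy analogue of Fisher's
    'never intentionally means to cause harm'."""
    for obstacle, speed in product([True, False], range(0, 201)):
        if obstacle and speed > 0 and decide(obstacle, speed) != "brake":
            return False  # counterexample found: property violated
    return True

print(never_ignores_obstacle())  # True: the property holds for every input
```

That exhaustive check is something you can do to a program but, as Fisher notes, never to a human brain.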

There is a caveat.

“The environments in which such systems work are typically both complex and uncertain. So while accidents can still occur, we can at least be sure that the system always tries to avoid them… [and] we might well be able to prove that the robot never intentionally means to cause harm.”

That’s comforting. But OK: computers fly and land airplanes, they make big decisions about air traffic, they drive cars with people in them, and they control much of our power grid, and our missile defense, too. So why should we worry? It is a matter of definitions. The terms we use to describe new technologies clearly have different interpretations. How do you define bad decisions? Fisher says,

“We are clearly moving on from technical questions towards philosophical and ethical questions about what behaviour we find acceptable and what ethical behaviour our robots should exhibit.”

If you have programmed an autonomous soldier to kill the enemy, is that ethical? Assuming that the Robocop can differentiate between good guys and bad guys, you have nevertheless opened the door to autonomous destruction. In the case of an autonomous soldier in the hands of a bad actor, you may be the enemy.

My point is this: it’s not necessarily the case that we understand how the software works and that it’s reliable; it may be more about who programmed the bot in the first place. In my graphic novel, The Lightstream Chronicles, there are no bad robots (I call them synths), but occasionally bad people get hold of the good synths and make them do bad things. They call that twisting. It’s illegal, but of course, that doesn’t stop it. Criminals do it all the time.

You see, even in the future some things never change. In the words of Aldous Huxley,

“Technological progress has merely provided us with more efficient means for going backwards.”

 


The right thing to do. Remember that idea?

I’ve been detecting some blowback recently regarding all the attention surrounding emerging AI, its near-term effect on jobs, and its long-term impact on humanity. Having an anticipatory mindset toward artificial intelligence is just the logical thing to do. As I have said before, designing a car without a braking system would be foolish. Anticipating the eventuality that you might need to slow down or stop the car is just good design. Nevertheless, there are a lot of people, important people in positions of power, who think this is a lot of hooey. They must think that human ingenuity will address any unforeseen circumstances, that science is always benevolent, that stuff like AI is “a long way off,” that the benefits outweigh the downsides, and that all people are basically good. Disappointed I am that this includes our Treasury Secretary Steve Mnuchin. WIRED carried the story, and so did my go-to futurist Amy Webb. In her newsletter Amy states,

“When asked about the future of artificial intelligence, automation and the workforce at an Axios event, this was Mnuchin’s reply: ‘It’s not even on our radar screen,’ he said, adding that significant workforce disruption due to AI is ‘50 to 100’ years away. ‘I’m not worried at all.’”

Sigh! I don’t care what side of the aisle you’re on, that’s just plain naive. Turning a blind eye to potentially transformative technologies is also dangerous. Others are skeptical of any regulation (perhaps rightly so) that stifles innovation and progress. But safeguards and guidelines are not that. They are well-considered recommendations that are designed to protect while facilitating research and exploration. On the other side of the coin, they are also not laws, which means that if you don’t want to or don’t care to, you don’t have to follow them.

Nevertheless, I was pleased to see a relatively comprehensive set of AI principles emerge from the Asilomar Conference that I blogged about a couple of weeks ago. The 2017 Asilomar conference, organized by The Future of Life Institute,

“…brought together an amazing group of AI researchers from academia and industry, and thought leaders in economics, law, ethics, and philosophy for five days dedicated to beneficial AI.”

The gathering generated the Asilomar AI Principles, a remarkable first step on the eve of an awesome technological power. None of these people, from the panel I highlighted in the last blog, are anxious for regulation, but at the same time, they are aware of the enormous potential for bad actors to undermine whatever beneficial aspects of the technology might surface. Despite my misgivings, an AGI is inevitable. Someone is going to build it, and someone else will find a way to misuse it.

There are plenty more technologies that pose questions. One is nanotechnology. Unlike AI, Hollywood doesn’t spend much time painting nanotechnological dystopias; perhaps that, along with the fact that they’re invisible to the naked eye, lets the little critters slip under the radar. While researching a paper for another purpose, I decided to look into nanotechnology to see what kinds of safeguards and guidelines are in place to deal with that rapidly emerging technology. There are clearly best practices followed by reputable researchers, scientists, and R&D departments, but it was especially disturbing to find out that none of these are mandates. Especially since there are thousands of consumer products that use nanotechnology, including food, cosmetics, clothing, electronics, and more.

A nanometer is very small. Nanotech concerns itself with creations that exist in the 100nm range and below, roughly 800 to 1,000 times smaller than the width of a human hair. In the Moore’s Law race, nanothings are the next frontier in cramming data onto a computer chip, or implanting them into our brains or living cells. However, due to their size, nanoparticles can also be inhaled, absorbed into the skin, flushed into the water supply, and leached into the soil. We don’t know what happens if we aggregate a large number of nanoparticles, or differing combinations of nanoparticles, in our bodies. We don’t even know how to test for it.

And, get ready. Currently, there are no regulations. That means manufacturers do not need to disclose their use of it, and there are no laws to protect the people who work with it. Herein, we have a classic example of bad decisions in the present that make for worse futures. Imagine the opposite: anticipation of what could go wrong, and sound industry intervention at a scale that pre-empts government intervention or the dystopian scenarios that the naysayers claim are impossible.


The genius panel has some serious concerns.

Occasionally, in preparing this blog, there are troughs in the technology newsfeed. But not now, and maybe never again. So it is with technology that accelerates exponentially. This, by the way, is a concept of which I will no longer try to convince my readers. I’m going to stop explaining why Kurzweil’s claim that technology advances exponentially is no longer just a theory, and simply move forward with the assumption that you know it is. If you don’t agree, then scout backwards—probably six months of previous blogs—and you’ll be on the same page. From here on, technology advances exponentially! That being said, we are also no longer at the base of the exponential curve. We are beginning a steep climb.
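The shape of that climb is easy to see in a few lines of code. This is a purely illustrative sketch (the steps and units are hypothetical, not a model of any real technology): a quantity that doubles at every step looks almost flat early on, then rockets upward.

```python
# Toy illustration of exponential growth: a quantity that doubles at
# every step barely moves at first, then climbs steeply.
def capability(step: int, base: float = 1.0) -> float:
    """Capability after `step` doublings (hypothetical units)."""
    return base * 2 ** step

early = [capability(s) for s in range(5)]
print(early)           # [1.0, 2.0, 4.0, 8.0, 16.0] -- the flat-looking base
print(capability(30))  # 1073741824.0 -- a billion-fold gain by step 30
```

Standing at step four, the curve behind you looks unremarkable; that is exactly why the steep part always seems to arrive by surprise.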

Last week I highlighted Kurzweil’s upgraded prediction on the Singularity (12 years). I agree, though now I think he may be underselling things. It could easily arrive before that.

Today’s blog comes from a hot tip from one of my students. At the beginning of each semester, I always turn my students on to the idea of GoogleAlerts. It works like this: You tell Google to send you anything and everything on whatever topic interests you. Then, anytime there is news online that fits your topic, you get an email with a list of links from Google. The emails can be inundating so choose your search wisely. At any rate, my student who drank the GoogleAlert kool-aid sent me a link to a panel discussion that took place in January of 2017. The panel convened at something called Beneficial AI 2017 in Asilomar, California. And what a panel it was. Get this: Bart Selman (Cornell), David Chalmers (NYU), Elon Musk (Tesla, SpaceX), Jaan Tallinn (CSER/FLI), Nick Bostrom (FHI), Ray Kurzweil (Google), Stuart Russell (Berkeley), Sam Harris, Demis Hassabis (DeepMind). Sam is a philosopher, author, neuroscientist and noted secularist. I’ve cited nearly all of these characters before in blogs or research papers, so to see them all on one panel was, for me, amazing.

L to R: Elon Musk, Stuart Russell , Bart Selman, Ray Kurzweil, David Chalmers, Nick Bostrom, Demis Hassabis, Sam Harris, Jaan Tallinn.

 

Why were they there? The Future of Life Institute (FLI) organized the BAI 2017 event:

“In our sequel to the 2015 Puerto Rico AI conference, we brought together an amazing group of AI researchers from academia and industry, and thought leaders in economics, law, ethics, and philosophy for five days dedicated to beneficial AI.”

FLI works together with CSER (the Centre for the Study of Existential Risk). I confess that I was not aware of either organization, but this is encouraging. For example, CSER’s mission is stated as

“[…]within the University of Cambridge dedicated to the study and mitigation of human extinction-level risks that may emerge from technological advances and human activity.”

FLI describes themselves thus:

“We are a charity and outreach organization working to ensure that tomorrow’s most powerful technologies are beneficial for humanity […] We are currently focusing on keeping artificial intelligence beneficial and we are also exploring ways of reducing risks from nuclear weapons and biotechnology.”

Both organizations are loaded with scientists and technologists, including Stephen Hawking, Bostrom, and Musk.

The panel of geniuses got off to a rocky start because there weren’t enough microphones to go around. Duh. But then things got interesting. The topic of safe AI, or what these fellows refer to as AGI (Artificial General Intelligence), is a deep well fraught with promise and doom. The encouraging thing is that these organizations realize the potential for either; the discomforting thing is that they’re genuinely concerned.

As I have discussed before, this race to a superintelligence, which Kurzweil moved up to 2029 a few weeks ago, is moving full speed ahead along a steep exponential incline. It is likely that we will be able to build it long before we have figured out how to keep it from destroying us. I’m on record as saying that even the notion of a superintelligence is an error in judgment. If what you want to do is cure disease, end aging, and save the planet, why not stop short of full-tilt superintelligence? Surely you could get a very, very, very intelligent AI to give you what you want and go no further. After hearing the panel discussion, however, I see this as naive. As Kurzweil stated in the discussion,

“…there really isn’t a foolproof technical solution to this… If you have an AI that is more intelligent than you and is out for your destruction, it’s out for the world’s destruction, and there is no other AI that is superior to it, that’s a bad situation. So that’s the specter […] Imagine that we’ve done our job perfectly, and we’ve created the most safe, beneficial AI possible, but we’ve let the political system become totalitarian and evil, either an evil world government or just a portion of the globe, that is that way, it’s not going to work out well. So part of the struggle is in the area of politics and policy to have the world reflect the values we want to achieve. Human AI is by definition at human levels and therefore is human. So the issue is, ‘How do we make humans ethical?’ is the same issue as, ‘How we make AIs that are at human level, ethical?’”

So there we have the problem of human nature, again. If we can’t fix ourselves, if we can’t even agree on what’s broken, how can we build a benevolent god? Fortunately, brilliant minds are honestly concerned about this, but that doesn’t mean they’re going to put on the brakes. The panel stated, in full agreement: a superintelligence is inevitable. If we don’t build it, someone else will.

It is also safe to assume that our super ethical AI won’t have the same ethics as someone else’s AI. Hence, Kurzweil’s specter. I could turn this into an essay, but I’ll stop here for now. What do you think?
