
Autonomous Assumptions

I’m writing about a recent post from futurist Amy Webb. Amy has been getting very political lately, which is a real turn-off for me, but she still has her ear to the rail of the future, so I will try to be more tolerant. Amy carried a paragraph from an article entitled “If you want to trust a robot, look at how it makes decisions” from The Conversation, an eclectic “academic rigor, journalistic flair” blog site. The author, Michael Fisher, a Professor of Computer Science at the University of Liverpool, says,

“When we deal with another human, we can’t be sure what they will decide but we make assumptions based on what we think of them. We consider whether that person has lied to us in the past or has a record for making mistakes. But we can’t really be certain about any of our assumptions as the other person could still deceive us.

Our autonomous systems, on the other hand, are essentially controlled by software so if we can isolate the software that makes all the high-level decisions – those decisions that a human would have made – then we can analyse the detailed working of these programs. That’s not something you can or possibly ever could easily do with a human brain.”

Fisher thinks that might make autonomous systems more trustworthy than humans. He says that by software analysis we can be almost certain that the software that controls our systems will never make bad decisions.
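To make that idea concrete, here is a minimal sketch of what such an analysis might look like. The decision function and the safety property below are hypothetical, invented purely for illustration; real verification tools work on far richer models, but the principle is the same: because the high-level logic is ordinary software, we can enumerate the situations it faces and check that it never chooses a “bad” action.

```python
# A minimal sketch, not Fisher's actual method: exhaustively checking a
# safety property of an isolated, high-level decision function.
from itertools import product

def decide(obstacle_distance_m: int, speed_kmh: int) -> str:
    """Hypothetical high-level decision logic for a small autonomous vehicle."""
    if obstacle_distance_m < 10:
        return "emergency_brake"
    if obstacle_distance_m < 30 and speed_kmh > 50:
        return "slow_down"
    return "maintain_speed"

def verify_never_ignores_obstacle() -> bool:
    """Enumerate a coarse grid of situations and check that the system never
    keeps its speed when a collision is imminent."""
    for distance, speed in product(range(0, 101), range(0, 131)):
        if distance < 10 and decide(distance, speed) != "emergency_brake":
            return False  # counterexample found: a "bad decision"
    return True

if __name__ == "__main__":
    print("Safety property holds:", verify_never_ignores_obstacle())
```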

There is a caveat.

“The environments in which such systems work are typically both complex and uncertain. So while accidents can still occur, we can at least be sure that the system always tries to avoid them… [and] we might well be able to prove that the robot never intentionally means to cause harm.”

That’s comforting. But OK, computers already fly and land airplanes, make big decisions about air traffic, drive cars with people in them, and control much of our power grid and our missile defense, too. So why should we worry? It is a matter of definitions. When we describe new technologies, we use terms that are clearly open to different interpretations. How do you define a bad decision? Fisher says,

“We are clearly moving on from technical questions towards philosophical and ethical questions about what behaviour we find acceptable and what ethical behaviour our robots should exhibit.”

If you have programmed an autonomous soldier to kill the enemy, is that ethical? Assuming that the Robocop can differentiate between good guys and bad guys, you have nevertheless opened the door to autonomous destruction. In the case of an autonomous soldier in the hands of a bad actor, you may be the enemy.

My point is this: the question may not be whether we understand how the software works or whether it is reliable; it may be more about who programmed the bot in the first place. In my graphic novel, The Lightstream Chronicles, there are no bad robots (I call them synths), but occasionally bad people get hold of the good synths and make them do bad things. They call that twisting. It’s illegal, but of course, that doesn’t stop it. Criminals do it all the time.

You see, even in the future some things never change. In the words of Aldous Huxley,

“Technological progress has merely provided us with more efficient means for going backwards.”

 


The right thing to do. Remember that idea?

I’ve been detecting some blowback recently regarding all the attention surrounding emerging AI, its near-term effect on jobs, and its long-term impact on humanity. Having an anticipatory mindset toward artificial intelligence is just the logical thing to do. As I have said before, designing a car without a braking system would be foolish. Anticipating the eventuality that you might need to slow down or stop the car is just good design. Nevertheless, there are a lot of people, important people in positions of power, who think this is a lot of hooey. They must think that human ingenuity will address any unforeseen circumstances, that science is always benevolent, that stuff like AI is “a long way off,” that the benefits outweigh the downsides, and that all people are basically good. Disappointed I am that this includes our Treasury Secretary, Steve Mnuchin. WIRED carried the story and so did my go-to futurist Amy Webb. In her newsletter Amy states,

“When asked about the future of artificial intelligence, automation and the workforce at an Axios event, this was Mnuchin’s reply: ‘It’s not even on our radar screen,’ he said, adding that significant workforce disruption due to AI is ‘50 to 100’ years away. ‘I’m not worried at all’”

Sigh! I don’t care what side of the aisle you’re on, that’s just plain naive. Turning a blind eye to potentially transformative technologies is also dangerous. Others are skeptical of any regulation (perhaps rightly so) that stifles innovation and progress. But safeguards and guidelines are not that. They are well-considered recommendations that are designed to protect while facilitating research and exploration. On the other side of the coin, they are also not laws, which means that if you don’t want to or don’t care to, you don’t have to follow them.

Nevertheless, I was pleased to see a relatively comprehensive set of AI principles emerge from the Asilomar Conference that I blogged about a couple of weeks ago. The 2017 Asilomar conference, organized by The Future of Life Institute,

“…brought together an amazing group of AI researchers from academia and industry, and thought leaders in economics, law, ethics, and philosophy for five days dedicated to beneficial AI.”

The gathering generated the Asilomar AI Principles, a remarkable first step on the eve of an awesome technological power. None of these people, from the panel I highlighted in the last blog, are anxious for regulation, but at the same time, they are aware of the enormous potential for bad actors to undermine whatever beneficial aspects of the technology might surface. Despite my misgivings, an AGI is inevitable. Someone is going to build it, and someone else will find a way to misuse it.

There are plenty more technologies that pose questions. One is nanotechnology. Unlike AI, Hollywood doesn’t spend much time painting nanotechnological dystopias; perhaps that, along with the fact that they’re invisible to the naked eye, lets the little critters slip under the radar. While researching a paper for another purpose, I decided to look into nanotechnology to see what kinds of safeguards and guidelines are in place to deal with that rapidly emerging technology. There are clearly best practices followed by reputable researchers, scientists, and R&D departments, but it was especially disturbing to find out that none of these are mandates, especially since there are thousands of consumer products that use nanotechnology, including food, cosmetics, clothing, electronics, and more.

A nanometer is very small. Nanotech concerns itself with creations in the 100nm range and below, roughly a thousand times thinner than a human hair. In the Moore’s Law race, nanothings are the next frontier in cramming data onto a computer chip, or implanting them into our brains or living cells. However, due to their size, nanoparticles can also be inhaled, absorbed through the skin, flushed into the water supply, and leached into the soil. We don’t know what happens if we aggregate a large number of nanoparticles or differing combinations of nanoparticles in our bodies. We don’t even know how to test for it. And, get ready. Currently, there are no regulations. That means manufacturers do not need to disclose it, and there are no laws to protect the people who work with it.

Herein we have a classic example of bad decisions in the present that make for worse futures. Imagine the opposite: anticipation of what could go wrong, and sound industry intervention at a scale that pre-empts government intervention or the dystopian scenarios that the naysayers claim are impossible.
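A quick back-of-the-envelope check puts the scale in perspective (the hair width below is an assumed, commonly cited ballpark of about 80,000 nanometers; real hairs vary roughly from 50,000 to 100,000):

```python
# Rough scale comparison; the hair width is an assumed ballpark figure.
hair_width_nm = 80_000          # ~80 micrometers, a commonly cited hair width
nanoscale_upper_bound_nm = 100  # upper end of the nanotech range
print(hair_width_nm / nanoscale_upper_bound_nm)  # -> 800.0, i.e. roughly a thousand times smaller
```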


Disruption. Part 2.

 

Last week I discussed the idea of technological disruption. Essentially, disruptions are innovations that make fundamental changes in the way we work or live. In turn, these changes affect culture and behavior. Issues of design and culture are the stuff that interests me and my research: how easily and quickly our practices change as a result of the way we enfold technology. The advent of the railroad, mass-produced automobiles, radio, then television, the Internet, and the smartphone all qualify as disruptions.

Today, technology advances more quickly. Technological development was never linear, but because most of the tech advances of the last century were at the bottom of the exponential curve, we didn’t notice them. New technologies under development right now are going to be realized more quickly (especially the ones with big funding), and because of convergence (the intermixing of unrelated technologies), their consequences will be less predictable.
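The flat-then-explosive feel of an exponential is easy to show with a toy calculation (the doubling here is purely illustrative, not a measured figure for any particular technology):

```python
# Illustrative only: anything that doubles at a steady rate looks almost flat
# at first and then explodes, which is why early exponential progress is easy
# to mistake for slow, linear progress.
capability = 1.0
for generation in range(31):
    if generation % 10 == 0:
        print(f"generation {generation:2d}: {capability:,.0f}x the starting point")
    capability *= 2
# generation  0: 1x, generation 10: 1,024x, generation 20: 1,048,576x,
# generation 30: 1,073,741,824x
```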

One of my favorite futurists is Amy Webb whom I have written about before. In her most recent newsletter, Amy reminds us that the Internet was clunky and vague long before it was disruptive. She states,

“However, our modern Internet was being built without the benefit of some vital voices: journalists, ethicists, economists, philosophers, social scientists. These outside voices would have undoubtedly warned of the probable rise of botnets, Internet trolls and Twitter diplomacy––would the architects of our modern internet have done anything differently if they’d confronted those scenarios?”

Amy inadvertently left out the design profession, though I’m sure she will reconsider after we chat. Indeed, the design profession is a key contributor to transformative tech, and design thinkers, along with the ethicists and economists, can help to visualize and reframe future visions.

Amy thinks that the next transformation will be our voice,

“From here forward, you can be expected to talk to machines for the rest of your life.”

Amy is referring to technologies like Alexa, Siri, Google, Cortana, and something coming soon called Bixby. The voices of these technologies are, of course, only the window dressing for artificial intelligence. But she astutely points out that,

“…we also know from our existing research that humans have a few bad habits. We continue to encode bias into our algorithms. And we like to talk smack to our machines. These machines are being trained not just to listen to us, but to learn from what we’re telling them.”
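Her point about machines learning from what we tell them can be made concrete with a deliberately over-simplified sketch. This is not how Alexa, Siri, or any real assistant works; it only shows the mechanism: if the training signal is whatever users say, then bias and smack talk become part of the model.

```python
# A deliberately naive sketch of an assistant that "learns" only from what
# users say to it. Whatever we feed it, including abuse, is what it learns.
from collections import Counter

class NaiveAssistant:
    def __init__(self) -> None:
        self.seen_phrases: Counter = Counter()

    def listen(self, utterance: str) -> None:
        # Training data is just whatever people type; no filtering or curation.
        self.seen_phrases[utterance.lower().strip()] += 1

    def respond(self) -> str:
        if not self.seen_phrases:
            return "I have nothing to say yet."
        # Parrot back the most frequent thing it has heard.
        return self.seen_phrases.most_common(1)[0][0]

assistant = NaiveAssistant()
for line in ["turn on the lights", "you're useless", "you're useless"]:
    assistant.listen(line)
print(assistant.respond())  # -> "you're useless"
```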

Such a merger might just be the mix of any technology (name one) with human nature or the human condition: AI meets Mike who lives across the hall. AI becoming acquainted with Mike may have been inevitable, but the fact that Mike happens to be a jerk was less predictable, and so is the outcome. The most significant disruptions of the future are going to come from the convergence of seemingly unrelated technologies. Sometimes innovation depends on convergence, like building an artificial human that will have to master a lot of different functions. Other times, convergence is accidental or at least unplanned. The engineers over at Boston Dynamics who are building those intimidating walking robots are focused on a narrower set of criteria than someone creating an artificial human. Perhaps power and agility are their primary concerns. Then, in another lab, there are technologists working on voice stress analysis, and in another setting, researchers are looking to create an AI that can choose your wardrobe. Somewhere else, we are working on facial recognition or Augmented Reality or Virtual Reality or bio-engineering, medical procedures, autonomous vehicles, or autonomous weapons. So it’s a lot like Harry meets Sally: you’re not sure what you’re going to get or how it’s going to work.

Digital visionary Kevin Kelly thinks that AI will be at the core of the next industrial revolution. Place the prefix “smart” in front of anything, and you have a new application for AI: a smart car, a smart house, a smart pump. These seem like universally useful additions, so far. But now let’s add the same prefix to the jobs you and I do, like a doctor, lawyer, judge, designer, teacher, or policeman. (Here’s a possible use for that ominous walking robot.) And what happens when AI writes better code than coders and decides to rewrite itself?

Hopefully, you’re getting the picture. All of this underscores Amy Webb’s earlier concerns. The ‘journalists, ethicists, economists, philosophers, social scientists’ and designers are rarely in the labs where the future is taking place. Should we be doing something fundamentally different in our plans for innovative futures?

Side note: Convergence can happen in a lot of ways. The parent corporation of Boston Dynamics is X. I’ll use Wikipedia’s definition of X: “X, an American semi-secret research-and-development facility founded by Google in January 2010 as Google X, operates as a subsidiary of Alphabet Inc.”
