What did one AI say to the other AI?

I’ve asked this question before, but this is an entirely new answer.

We may never know.

Judging by the plethora of recent media coverage of artificial intelligence (AI), not only are a lot of people working on it, but many of those on the leading edge are also concerned about what they don’t understand in their ominous new creation.

Amazon, DeepMind/Google, Facebook, IBM, and Microsoft teamed up to form the Partnership on AI.(1) Their charter talks about sharing information and being responsible. It includes all of the proper buzzwords for a technosocial contract:

“…transparency, security and privacy, values and ethics, collaboration between people and AI systems, interoperability of systems, and of the trustworthiness, reliability, containment, safety, and robustness of the technology.”(2)

They are not alone in this concern, as the EU(3) is also working on AI guidelines and a set of rules on robotics.

Some of what makes them all a bit nervous is the way AI learns: the complexity of neural networks and the inability to go back and see how an AI arrived at its conclusion. In other words, how do we know that its recommendation is the right one? Adding to that list is the discovery that AIs working together can create their own languages; languages we don’t speak or understand. In one case, at Facebook, researchers saw this happening and stopped it.

For me, it’s a little disconcerting that Facebook, a social media company, is one of the corporations leading the charge, and the research, into AI. That’s a broad topic for another blog, but their underlying objective is to market to you. That’s how they make their money.

To be fair, that is at least part of the motivation for Amazon, DeepMind/Google, IBM, and Microsoft as well. The better they know you, the more stuff they can sell you. Of course, there are also enormous benefits to medical research. Such advantages are almost always what these companies talk about first: AI will save your life, cure cancer, and prevent crime.

So, it is somewhat encouraging to see that these companies at the forefront of AI breakthroughs are also acutely aware of how AI could go terribly wrong. Hence we see wording from the Partnership on AI like

“…Seek out, support, celebrate, and highlight aspirational efforts in AI for socially benevolent applications.”

The key word here is benevolent. But the clear objective is to keep the dialog positive, and

“Create and support opportunities for AI researchers and key stakeholders, including people in technology, law, policy, government, civil liberties, and the greater public, to communicate directly and openly with each other about relevant issues to AI and its influences on people and society.”(2)

I’m reading between the lines, but it seems like the issue of how AI will influence people and society is more of an obligatory statement intended to demonstrate compassionate concern. It’s coming from the people who see huge commercial benefit from the public being at ease with the coming onslaught of AI intrusion.

In their long list of goals, the “influences” on society don’t seem to be a priority. For example, should they discover that a particular AI has a detrimental effect on people, or that it leaves their civil liberties less secure, would they stop? Probably not.

At the rate that these companies are racing toward AI superiority, the unintended consequences for our society are not a high priority. While these groups are making sure that AI does not decide to kill us, I wonder whether they are also looking at how AI will change us, and whether those changes will be a good thing.

(1) https://www.fastcodesign.com/90132632/ai-is-inventing-its-own-perfect-languages-should-we-let-it

(2) https://www.partnershiponai.org/#

(3) http://www.europarl.europa.eu/sides/getDoc.do?pubRef=-//EP//NONSGML%2BCOMPARL%2BPE-582.443%2B01%2BDOC%2BPDF%2BV0//EN

Other pertinent links:

https://www.fastcompany.com/3064368/we-dont-always-know-what-ai-is-thinking-and-that-can-be-scary

https://www.fastcodesign.com/90133138/googles-next-design-project-artificial-intelligence


Watching and listening.

 

Pay no attention to Alexa, she’s an AI.

There was a flurry of reports from dozens of news sources (including CNN) last week that an Amazon Echo (Alexa) called the police during a domestic violence incident in New Mexico. The alleged call began a SWAT standoff, and the victim’s boyfriend was eventually arrested. It’s an interesting story, but a fact-check shows that it could not be what happened. Several sources, including the New York Times and WIRED, debunked the story with details on why Alexa calling 911 is technologically impossible, at least for now. And although the Bernalillo County, New Mexico, Sheriff’s Department swears to it, according to WIRED,

“Someone called the police that day. It just wasn’t Alexa.”

Even Amazon agrees; in an email, a spokesperson explained,

“The receiving end would also need to have an Echo device or the Alexa app connected to Wi-Fi or mobile data, and they would need to have Alexa calling/messaging set up,”1

So it didn’t happen, but most agree that while it may be technologically impossible today, it probably won’t be for very long. The provocative side of the WIRED article proposed this thought:

“The Bernalillo County incident almost certainly had nothing to do with Alexa. But it presents an opportunity to think about issues and abilities that will become real sooner than you might think.”

On the upside, some see benefits from the ability of Alexa to intervene in a domestic dispute that could turn lethal, but they fear something called “false positives.” Could an offhand comment prompt Alexa to make a call to the police? And if it did, would you feel as though Alexa had overstepped her bounds?

Others see the potential in suicide prevention. Alexa could calm you down or make suggestions for ways to move beyond the urge to die.

But as we contemplate opening this door, we need to acknowledge that we’re letting these devices listen to us 24/7 and giving them permission to make decisions on our behalf, whether we want them to or not. The WIRED article also included a comment from Evan Selinger of RIT (whom I’ve quoted before).

“Cyberservants will exhibit mission creep over time. They’ll take on more and more functions. And they’ll habituate us to become increasingly comfortable with always-on environments listening to our intimate spaces.”

These technologies start out warm and fuzzy (see the video below), but as they become part of our lives, they can change us, and not always for the good. This idea is something I contemplated a couple of years ago with my Ubiquitous Surveillance future. In that case, the invasion was not a listening device but a camera (already part of Amazon’s Echo Look). You can check that out and do your own provocation by visiting the link.

I’m glad that there are people like Susan Liautaud (whom I wrote about last week) and Evan Selinger who are thinking about the effects of technology on society, but I still fear most of us take the stance of Dan Reidenberg, who is also quoted in the WIRED piece.

“‘I don’t think we can avoid this. This is where it is going to go. It is really about us adapting to that,’ he says.”

 

Nonsense! That’s like getting in the car with a drunk driver and then doing your best to adapt. Nobody is putting a gun to your head to get into the car. There are decisions to be made here, and they don’t have to be made after the technology has created seemingly insurmountable problems or intrusions in our lives. The companies that make these technologies should be having these discussions now, and we should be invited to share our opinions.

What do you think?

 

  1. http://wccftech.com/alexa-echo-calling-911/

Ethical tech.

Though I tinge most of my blogs with ethical questions, the last time I took up this topic specifically was back in 2015. I guess I am ready to give it another go. Ethics is a tough topic. On the surface, ethics would seem natural, like common sense, or simply the right thing to do. But if that’s the case, why do so many people do the wrong thing? Things get even more complicated when we move into institutionally complex areas like banking, governing, technology, genetics, health care, or national defense, just to name a few.

The last time I wrote about this, I highlighted Michael Sandel, Professor of Philosophy and Government at Harvard, where he teaches a wildly popular course called “Justice.” Then, I was glad to see that the big questions were still being addressed in places like Harvard. Some of his questions then, which came from a FastCo article, were:

“Is it right to take from the rich and give to the poor? Is it right to legislate personal safety? Can torture ever be justified? Should we try to live forever? Buy our way to the head of the line? Create perfect children?”

These are undoubtedly important and prescient questions to ask, especially as we begin to confront technologies that make things that were formerly inconceivable, or plainly impossible, not only possible but likely.

So I was pleased to see, last month, an op-ed piece in WIRED by Susan Liautaud, founder of The Ethics Incubator. Susan is about as closely aligned with my tech concerns as anyone I have read. And she brings solid thinking to the issues.

“Technology is approaching the man-machine and man-animal boundaries. And with this, society may be leaping into humanity defining innovation without the equivalent of a constitutional convention to decide who should have the authority to decide whether, when, and how these innovations are released into society. What are the ethical ramifications? What checks and balances might be important?”

Her comments are right in line with my research and co-research into Humane Technologies. Liautaud continues:

“Increasingly, the people and companies with the technological or scientific ability to create new products or innovations are de facto making policy decisions that affect human safety and society. But these decisions are often based on the creator’s intent for the product, and they don’t always take into account its potential risks and unforeseen uses. What if gene-editing is diverted for terrorist ends? What if human-pig chimeras mate? What if citizens prefer to see birds rather than flying cars when they look out a window? (Apparently, this is a real risk. Uber plans to offer flight-hailing apps by 2020.) What if Echo Look leads to mental health issues for teenagers? Who bears responsibility for the consequences?”

For me, the answer to that last question is all of us. We should not rely on business and industry to make these decisions, nor expect our government to do it. We have to become involved in these issues at the public level.

Michael Sandel believes that the public is hungry for these issues, but we tend to shy away from them. They can be confrontational and divisive, and no one wants to make waves or be politically incorrect. That’s a mistake.

An image from the future. A student design fiction project that examined ubiquitous AR.

So while the last thing I want is a politician or CEO making these decisions, these two constituencies could do the responsible thing and create forums for these discussions so that the public can weigh in on them. To do anything less borders on arrogance.

Ultimately we will have to demand this level of thought, beginning with ourselves. This responsibility should start with anticipatory methodologies that examine the social, cultural and behavioral ramifications, and unintended consequences of what we create.

But we should not fight this alone. Corporations and governments concerned with appearing sensitive and proactive toward the environment and social justice need to add a new pillar to their edifice as responsible global citizens: humane technology.

 


The algorithms.

 

I am not a mathematician. Not even close. My son is a bit of a wiz when it comes to math but not the kind of math you do in your head. His particular mathematical gift only works when he sees the equations. Still, I’d take that. Calculators give me fits. So the idea that I might decipher or write a functioning algorithm (the kind a computer could use) is tantamount to me turning water into wine.

Algorithms are all the buzz these days because they are the functioning math behind artificial intelligence (AI). What is an algorithm, exactly? I will turn to Merriam-Webster online.

“: a procedure for solving a mathematical problem (as of finding the greatest common divisor) in a finite number of steps that frequently involves repetition of an operation; broadly: a step-by-step procedure for solving a problem or accomplishing some end especially by a computer a search algorithm.”

I’ll throw away the first part of that definition because I don’t understand it. The second part is more my speed: a step-by-step procedure for solving a problem. I get that. As a designer, I do that all the time. The HowStuffWorks website is even better at explaining the purpose of algorithms. Essentially, an algorithm is a way for a computer to do something. Of course, as with most problems, there is more than one way to get from point A to point B, so computer programmers choose the best algorithm for the task.

What does an algorithm look like? Think of a flow chart or a decision tree. When you turn that into code (the language of computers), it might look like the image below.

Turning an algorithm into code.
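
To make that concrete, here is a minimal sketch of my own (not the code pictured in the image, and not taken from any of the articles cited here): the greatest-common-divisor problem from the dictionary definition above, written in Python as the kind of step-by-step, repeat-until-done procedure a computer can follow.

# A minimal sketch (my own illustration): Euclid's algorithm for the
# greatest common divisor, the example from the dictionary definition,
# as a step-by-step procedure that repeats one simple operation.

def greatest_common_divisor(a: int, b: int) -> int:
    while b != 0:          # decision point in the flow chart: are we done?
        a, b = b, a % b    # the repeated operation: keep only the remainder
    return a               # when the remainder reaches zero, a is the answer

print(greatest_common_divisor(48, 36))  # prints 12

The flow chart version of this is just one decision diamond (“is the remainder zero?”) and one operation box in a loop; the code is the same procedure written in a language the machine understands.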

You may already know all this, but I didn’t. Not really. I use the term algorithm all the time to describe the technology and process behind AI, but it always helps me to break these ideas down to their parts.

With all that out of the way, this week on the Futurism.com website, there was an article that discussed Ray Kurzweil’s theory that our brains contain a master algorithm inside our neocortex. It is that algorithm that enables us to handle pattern recognition and all the vastly complex nuance that our brains process every day. Referencing Kurzweil, Futurism stated that,

“… the brain’s neocortex — that part of the brain that’s responsible for intelligent behavior — consists of roughly 300 million modules that recognize patterns. These modules are self-organized into hierarchies that turn simple patterns into complex concepts. Despite neuroscience advancing by leaps and bounds over the years, we still haven’t quite figured out how the neocortex works.”

But, according to Kurzweil, these multiple modules “all have the same algorithm.”

Presumably, when we figure that out, we will be able to create an AI that thinks like a human, or better than a human. Hold that thought.
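
If it helps to picture what “modules that all run the same algorithm, organized into a hierarchy” might mean, here is a toy sketch of my own. It is nothing like Kurzweil’s actual model and vastly simpler than 300 million modules: identical little recognizers, each running one shared procedure, with simple matches feeding a module that looks for a more complex concept.

# A toy illustration (mine, not Kurzweil's): every module runs the same
# recognition procedure; low-level pattern matches feed a higher-level concept.

def recognize(pattern: str, observed: str) -> bool:
    return pattern in observed   # the one shared "algorithm" every module uses

text = "the cat sat on the mat"

letter_modules = ["c", "a", "t"]                        # simple patterns
letters_found = all(recognize(p, text) for p in letter_modules)

word_found = letters_found and recognize("cat", text)   # a more complex concept
print(word_found)  # True: the hierarchy assembled a concept from simple parts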

On another part of the web was a story from FastCoDesign that asked the question, “What’s The Next Great Art Movement? Ask This Neural Network.” FastCo interviewed Ahmed Elgammal, a researcher at Rutgers University who is getting AI (using algorithms) to create masterpieces after studying all the major art movements through history and how they evolved. His objective is to have the AI come up with the next major art movement. The art is, well, not good art. How do I know? I create art, I’ve studied art, and I’ve even sold art, so I know more about art than I do about, say, math. The art that Elgammal’s AI generates is intriguing, but it lacks that certain something that tells you it’s art. I think it might be a human thing, something you can still recognize.

So if you are still holding on to that earlier thought about algorithms and how we are working to perfect them, we could make the leap that a better-functioning AI might fool us at some point, and we wouldn’t be able to tell human art from the AI variety. There are a lot of people working on these types of things, and there are billions of dollars going toward the research.

Now I’m going to ask a stupid question. Why do we need an AI to tell us what the next movement in art is or should be? Are humans defective in this area? Couldn’t we just wait and see or are we just too impatient? Perhaps we have grown tired of creating art. If you know, please share.

Not to take anything away from Ray Kurzweil, but I guess I could ask the same question of AI. I assume that we could use AI that is so far above our thinking that it can help us solve problems better than we could on our own. But, if that AI is thinking so far beyond us, I’m not sure whether it would help us create better solutions or whether we would simply abdicate thinking to the AI. There’s a real danger of that, you know. Maybe thinking is overrated.

The question keeps coming up. Do we make things to help us flourish or do we make things because we can?

Ray Kurzweil: There’s a Blueprint for the Master Algorithm in Our Brains

 
