Tag Archives: neural networks

Watching and listening.

 

Pay no attention to Alexa, she’s an AI.

There was a flurry of reports from dozens of news sources (including CNN) last week that an Amazon Echo (Alexa) called the police during a domestic violence incident in New Mexico. The alleged call set off a SWAT standoff, and the victim’s boyfriend was eventually arrested. Interesting story, but after a fact-check, that could not be what happened. Several sources, including the New York Times and WIRED, debunked the story with details on why Alexa calling 911 is technologically impossible, at least for now. And although the Bernalillo County, New Mexico, Sheriff’s Department swears to it, according to WIRED,

“Someone called the police that day. It just wasn’t Alexa.”

Even Amazon agrees, via a spokesperson’s email:

“The receiving end would also need to have an Echo device or the Alexa app connected to Wi-Fi or mobile data, and they would need to have Alexa calling/messaging set up,”1

So it didn’t happen. But most agree that, while it may be technologically impossible today, it probably won’t be for long. The provocative side of the WIRED article proposed this thought:

“The Bernalillo County incident almost certainly had nothing to do with Alexa. But it presents an opportunity to think about issues and abilities that will become real sooner than you might think.”

On the upside, some see benefits in Alexa’s ability to intervene in a domestic dispute that could turn lethal, but they fear something called “false positives.” Could an offhand comment prompt Alexa to call the police? And if it did, would you feel as though Alexa had overstepped her bounds?

Others see the potential in suicide prevention. Alexa could calm you down or make suggestions for ways to move beyond the urge to die.

But as we contemplate opening this door, we need to acknowledge that we’re letting these devices listen to us 24/7 and giving them permission to make decisions on our behalf whether we want them to or not. The WIRED article also included a comment from Evan Selinger of RIT (whom I’ve quoted before).

“Cyberservants will exhibit mission creep over time. They’ll take on more and more functions. And they’ll habituate us to become increasingly comfortable with always-on environments listening to our intimate spaces.”

These technologies start out warm and fuzzy (see the video below), but as they become part of our lives, they can change us, and not always for the better. This is something I contemplated a couple of years ago with my Ubiquitous Surveillance future. In that case, the intrusion came not from a listening device but from a camera (already part of Amazon’s Echo Look). You can check that out and do your own provocation by visiting the link.

I’m glad that there are people like Susan Liautaud (whom I wrote about last week) and Evan Selinger thinking about the effects of technology on society, but I still fear that most of us take the stance of Dan Reidenberg, who is also quoted in the WIRED piece.

“I don’t think we can avoid this. This is where it is going to go. It is really about us adapting to that,” he says.

 

Nonsense! That’s like getting in the car with a drunk driver and then doing your best to adapt. Nobody is putting a gun to your head to get into the car. There are decisions to be made here, and they don’t have to be made after the technology has created seemingly insurmountable problems or intrusions in our lives. The companies that make these devices should be having those discussions now, and we should be invited to share our opinions.

What do you think?

 

  1. http://wccftech.com/alexa-echo-calling-911/

Ethical tech.

Though I tinge most of my blogs with ethical questions, the last time I brought up this topic specifically was back in 2015. I guess I am ready to give it another go. Ethics is a tough topic. If we deal with it purely superficially, ethics would seem natural, like common sense, or simply the right thing to do. But if that’s the case, why do so many people do the wrong thing? Things get even more complicated when we move into institutionally complex issues like banking, governing, technology, genetics, health care, or national defense, just to name a few.

The last time I wrote about this, I highlighted Michael Sandel, Professor of Government at Harvard, where he teaches a wildly popular course called “Justice.” Then, I was glad to see that the big questions were still being addressed in places like Harvard. Some of his questions then, which came from a FastCo article, were:

“Is it right to take from the rich and give to the poor? Is it right to legislate personal safety? Can torture ever be justified? Should we try to live forever? Buy our way to the head of the line? Create perfect children?”

These are undoubtedly important and prescient questions to ask, especially as we begin to confront technologies that make things that were formerly inconceivable or plainly impossible not only possible but likely.

So I was pleased to see, last month, an op-ed piece in WIRED by Susan Liautaud, founder of The Ethics Incubator. Susan is about as closely aligned with my tech concerns as anyone I have read. And she brings solid thinking to the issues.

“Technology is approaching the man-machine and man-animal boundaries. And with this, society may be leaping into humanity defining innovation without the equivalent of a constitutional convention to decide who should have the authority to decide whether, when, and how these innovations are released into society. What are the ethical ramifications? What checks and balances might be important?”

Her comments are right in line with my research and co-research into Humane Technologies. Liautaud continues:

“Increasingly, the people and companies with the technological or scientific ability to create new products or innovations are de facto making policy decisions that affect human safety and society. But these decisions are often based on the creator’s intent for the product, and they don’t always take into account its potential risks and unforeseen uses. What if gene-editing is diverted for terrorist ends? What if human-pig chimeras mate? What if citizens prefer to see birds rather than flying cars when they look out a window? (Apparently, this is a real risk. Uber plans to offer flight-hailing apps by 2020.) What if Echo Look leads to mental health issues for teenagers? Who bears responsibility for the consequences?”

For me, the answer to that last question is all of us. We should not rely on business and industry to make these decisions, nor expect our government to do it. We have to become involved in these issues at the public level.

Michael Sandel believes that the public is hungry for these issues, but we tend to shy away from them. They can be confrontational and divisive, and no one wants to make waves or be politically incorrect. That’s a mistake.

An image from the future. A student design fiction project that examined ubiquitous AR.

So while the last thing I want is a politician or CEO making these decisions, these two constituencies could do the responsible thing and create forums for these discussions so that the public can weigh in on them. To do anything less borders on arrogance.

Ultimately we will have to demand this level of thought, beginning with ourselves. This responsibility should start with anticipatory methodologies that examine the social, cultural and behavioral ramifications, and unintended consequences of what we create.

But we should not fight this alone. Corporations and governments concerned with appearing sensitive and proactive toward the environment and social justice need to add a new pillar to their edifice as responsible global citizens: humane technology.

 


The algorithms.

 

I am not a mathematician. Not even close. My son is a bit of a wiz when it comes to math but not the kind of math you do in your head. His particular mathematical gift only works when he sees the equations. Still, I’d take that. Calculators give me fits. So the idea that I might decipher or write a functioning algorithm (the kind a computer could use) is tantamount to me turning water into wine.

Algorithms are all the buzz these days because they are the functioning math behind artificial intelligence (AI). How so? I will turn to Merriam-Webster online.

“: a procedure for solving a mathematical problem (as of finding the greatest common divisor) in a finite number of steps that frequently involves repetition of an operation; broadly: a step-by-step procedure for solving a problem or accomplishing some end especially by a computer a search algorithm.”

I’ll throw away the first part of that definition because I don’t understand it. The second part is more my speed: a step-by-step procedure for solving a problem. I get that. As a designer, I do that all the time. Visiting the HowStuffWorks website is even better for explaining the purpose of algorithms. Essentially, an algorithm is a way for a computer to do something. Of course, as with most problems, there is more than one way to get from point A to point B, so computer programmers choose the best algorithm for the task.

What does an algorithm look like? Think of a flow chart or a decision tree. When you turn that into code (the language computers understand), it might look like the image below.

Turning an algorithm into code.
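If it helps to see that translation, here is a rough sketch of my own (not the code in the image above), taking the dictionary’s example of finding the greatest common divisor, first as a step-by-step procedure and then as a few lines of Python:

# The algorithm, as a step-by-step procedure:
#   1. Start with two whole numbers, a and b.
#   2. If b is zero, a is the greatest common divisor. Stop.
#   3. Otherwise, replace a with b, replace b with the remainder of a divided by b,
#      and go back to step 2.
# The same steps, written as code a computer can follow:

def greatest_common_divisor(a, b):
    while b != 0:
        a, b = b, a % b
    return a

print(greatest_common_divisor(48, 36))  # prints 12

Same flow chart, same decision tree; the only difference is that it is now written in a language the machine can execute.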

You may already know all this, but I didn’t. Not really. I use the term algorithm all the time to describe the technology and process behind AI, but it always helps me to break these ideas down to their parts.

With all that out of the way, this week on the Futurism.com website, there was an article that discussed Ray Kurzweil’s theory that our brains contain a master algorithm inside our neocortex. It is that algorithm that enables us to handle pattern recognition and all the vastly complex nuance that our brains process every day. Referencing Kurzweil, Futurism stated that,

“… the brain’s neocortex — that part of the brain that’s responsible for intelligent behavior — consists of roughly 300 million modules that recognize patterns. These modules are self-organized into hierarchies that turn simple patterns into complex concepts. Despite neuroscience advancing by leaps and bounds over the years, we still haven’t quite figured out how the neocortex works.”

But, according to Kurzweil, these multiple modules “all have the same algorithm.”

Presumably, when we figure that out, we will be able to create an AI that thinks like a human, or better than a human. Hold that thought.

On another part of the web was a story from FastCoDesign that asked, “What’s The Next Great Art Movement? Ask This Neural Network.” FastCo interviewed Ahmed Elgammal, a researcher at Rutgers University who is getting an AI (using algorithms) to create masterpieces after studying all the major art movements through history and how they evolve. His objective is to have the AI come up with the next major art movement. The art is, well, not good art. How do I know? I create art, I’ve studied art, and I’ve even sold art, so I know more about art than I do about, say, math. The art that Elgammal’s AI generates is intriguing, but it lacks that certain something that tells you it’s art. I think it might be a human thing, and it is still something you can recognize.

So if you are still holding on to that earlier thought about algorithms and how we are working to perfect them, we could make the leap that a better functioning AI might fool us at some point and we wouldn’t be able to tell human art from the AI variety. There are a lot of people working on these types of things, and there are billions of dollars going toward the research.

Now I’m going to ask a stupid question. Why do we need an AI to tell us what the next movement in art is or should be? Are humans defective in this area? Couldn’t we just wait and see or are we just too impatient? Perhaps we have grown tired of creating art. If you know, please share.

Not to take anything away from Ray Kurzweil, but I guess I could ask the same question of AI. I assume that we could use AI that is so far above our thinking that it can help us solve problems better than we could on our own. But, if that AI is thinking so far beyond us, I’m not sure whether it would help us create better solutions or whether we would simply abdicate thinking to the AI. There’s a real danger of that you know. Maybe thinking is overrated.

The question keeps coming up. Do we make things to help us flourish or do we make things because we can?

Ray Kurzweil: There’s a Blueprint for the Master Algorithm in Our Brains

 


Augmented evidence. It’s a logical trajectory.

A few weeks ago I gushed about how my students killed it at a recent guerrilla future enactment of a ubiquitous Augmented Reality (AR) future. Shortly after that, Mark Zuckerberg announced the Facebook AR platform. The platform uses the camera on your smartphone and, according to a recent WIRED article, transforms your smartphone into an AR engine.

Unfortunately, as we all know (and so does Zuck), the smartphone isn’t currently much of an engine. AR requires a lot of processing, and so does the AI that allows it to recognize the real world and layer additional information on top of it. That’s why Facebook (and others) are building their own neural network chips, so that the platform doesn’t have to run to the Cloud to access the processing required for Artificial Intelligence (AI). That will inevitably happen, which will make the smartphone experience more seamless, but that’s just part of the challenge for Facebook.

If you add to that the idea that we become even more dependent on looking at our phones while we are walking or, worse, driving (think Pokémon GO), then this latest announcement is, at best, foreshadowing.

As the WIRED article continues, tech writer Brian Barrett talked to Blair MacIntyre of Georgia Tech, who says,

“The phone has generally sucked for AR because holding it up and looking through it is tiring, awkward, inconvenient, and socially unacceptable,” says MacIntyre. Adding more of it doesn’t solve those issues. It exacerbates them. (The exception might be the social acceptability part; as MacIntyre notes, selfies were awkward until they weren’t.)

That last part is an especially interesting point. I’ll have to come back to that in another post.

My students did considerable research on exactly this kind of awkward infancy that technologies undergo on their road to ubiquity. In another WIRED article, even Zuckerberg admitted,

“We all know where we want this to get eventually,” said Zuckerberg in his keynote. “We want glasses, or eventually contact lenses, that look and feel normal, but that let us overlay all kinds of information and digital objects on top of the real world.”

So there you have it. Glasses are the endgame; but, as my students agreed, contact lenses, not so much. Think about it: if you didn’t have to stick a contact lens in your eyeball, you wouldn’t. And even if you solved the problem of computing inside a wafer-thin lens, along with the myriad problems of heat and in-eye time, the idea that contact lenses could become ubiquitous is much farther away, if it ever arrives.

Student design team from Ohio State’s Collaborative Studio.

This is why I find my students’ solution so much more elegant and a far more logical trajectory. According to Barrett,

“The optimistic timeline for that sort of tech, though, stretches out to five or 10 years. In the meantime, then, an imperfect solution takes the stage.”

My students locked it down to seven years.

Finally, Zuckerberg made this statement:

“Augmented reality is going to help us mix the digital and physical in all new ways,” said Zuckerberg at F8. “And that’s going to make our physical reality better.”

Except that Zuck’s version of better and mine or yours may not be the same. Exactly what is wrong with reality anyway?

If you want to see the full-blown presentation of what my students produced, you can view it at aughumana.net.

Note: Currently the AugHumana experience is superior on Google Chrome. If you are a Safari or Firefox purist, you may have to wait for the page to load (up to 2 minutes). We’re working on this. So, just use Chrome this time. We hope to have it fixed soon.


The end of code.

 

This week WIRED Magazine released their June issue announcing the end of code. That would mean that the ability to write code, so cherished in the job market right now, is on the way out. They attribute this tectonic shift to Artificial Intelligence, machine learning, neural networks and the like. In the future (which is taking place now) we won’t have to write code to tell computers what to do; we will just have to teach them. I have been over this before in a number of previous writings.

An example: Facebook uses a form of machine learning by collecting data from the millions of pictures posted on the social network. When someone uploads a group photo and identifies the people in the shot, Facebook’s AI remembers it by logging the prime coordinates of a human face and attributing them to that name (aka facial recognition). If the same coordinates show up again in another post, Facebook identifies it as you. People upload the data (on a massive scale), and the machine learns. By naming the person or persons in the photo, you have taught the machine.
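To make that “people label, machine learns” idea concrete, here is a toy sketch, nothing like Facebook’s actual system, in which the “prime coordinates” of a face are just a few made-up numbers and recognition is a simple closest-match lookup:

import math

# Each tagged face is stored as a few coordinates (a stand-in for the measurements
# a real facial-recognition system would extract), filed under the name people gave it.
labeled_faces = {}  # name -> list of coordinate examples

def teach(name, coordinates):
    """Someone tags a photo: remember these coordinates under that name."""
    labeled_faces.setdefault(name, []).append(coordinates)

def recognize(coordinates):
    """A new photo arrives: return the name whose stored coordinates are closest."""
    best_name, best_distance = None, float("inf")
    for name, examples in labeled_faces.items():
        for example in examples:
            distance = math.dist(coordinates, example)
            if distance < best_distance:
                best_name, best_distance = name, distance
    return best_name

# Two tagged photos teach the machine; a third, untagged photo is recognized.
teach("Alice", (0.62, 0.31, 0.85))
teach("Bob", (0.40, 0.55, 0.70))
print(recognize((0.60, 0.33, 0.82)))  # prints Alice

The point is only the division of labor: people supply the labels, and the matching rule does the rest.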

The WIRED article makes some interesting connections about the evolution of our thinking concerning the mind, about learning, and how we have taken a circular route in our reasoning. In essence, the mind was once considered a black box; there was no way to figure it out, but you could condition responses, a la Pavlov’s dog. That logic changed with cognitive science, the idea that the brain is more like a computer. The computing analogy caught on, and researchers began to see the whole business of thought, memory, and thinking as stuff you could code, or hack, just like a computer. Indeed, it is this reasoning that has led to the notion that DNA is, in fact, codable, hence gene splicing through CRISPR. If it’s all just code, we can make anything. That was the thinking.

Now there are machine learning and neural networks. You still code, but only to set up the structure by which the “thing” learns; after that, it’s on its own. The result is fractal and not always predictable. You can’t go back in and hack the way it is learning, because it has started to generate a private math that we can’t make sense of. In other words, it is a black box. We have, in effect, stymied ourselves.
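Here is a deliberately tiny sketch of what “coding only the structure” means, plain Python and NumPy rather than any real framework: we choose the layer sizes and the training loop, the network teaches itself the XOR pattern, and what it has “learned” ends up as arrays of numbers that don’t explain themselves.

import numpy as np

# The structure we chose: 2 inputs -> 8 hidden units -> 1 output, with a squashing function.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # the XOR pattern to be learned
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# The "teaching": show the examples over and over and nudge the weights toward less error.
for _ in range(20000):
    hidden = sigmoid(X @ W1 + b1)
    output = sigmoid(hidden @ W2 + b2)
    error = output - y
    grad_out = error * output * (1 - output)
    grad_hidden = (grad_out @ W2.T) * hidden * (1 - hidden)
    W2 -= hidden.T @ grad_out
    b2 -= grad_out.sum(axis=0)
    W1 -= X.T @ grad_hidden
    b1 -= grad_hidden.sum(axis=0)

print(output.round(2).ravel())  # close to [0, 1, 1, 0]: it has learned XOR
print(W1)                       # but these learned numbers explain nothing by inspection

Scale those arrays up to millions of weights and you have the black box described above.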

There is an upside. To train a computer, you used to have to learn how to code. Now you just teach it by showing it examples or feeding it repetitive information, something anyone can do, though, at this point, some do it better than others.

Always the troubleshooter, I wonder what happens when we—mystified at a “conclusion” or decision arrived at by the machine—can’t figure out how to make it stop arriving at that conclusion. You can do the math.

Do we just turn it off?
