
Artificial intelligence isn’t really intelligence—yet. I hate to say I told you so.


Last week, we discovered a new side to AI. And I don’t mean to gloat, but I saw this potential pitfall as fairly obvious. Interestingly, the real-world event that triggered all the talk occurred within days of episode 159 of The Lightstream Chronicles. In my story, Keiji-T, a synthetic police investigator virtually indistinguishable from a human, questions the conclusions of an artificial intelligence engine called HAPP-E. The High Accuracy Perpetrator Profiling Engine is designed to assimilate all of the minutiae surrounding a criminal act and spit out a description of the perpetrator. In today’s society, profiling is a human endeavor, one especially useful in identifying difficult-to-catch offenders. Though the procedure is relatively new and goes by many different names, the American Psychological Association says,

“…these tactics share a common goal: to help investigators examine evidence from crime scenes and victim and witness reports to develop an offender description. The description can include psychological variables such as personality traits, psychopathologies and behavior patterns, as well as demographic variables such as age, race or geographic location. Investigators might use profiling to narrow down a field of suspects or figure out how to interrogate a suspect already in custody.”

This type of data is perfect for feeding into an AI, which uses neural networks and predictive algorithms to draw conclusions and recommend decisions. Of course, AI can do it in seconds, whereas an FBI unit may take days, months, or even years. The way AI works, as I have reported many times before, is based on tremendous amounts of data. “With the advent of big data, the information going in only amplifies the veracity of the recommendations coming out.” In this way, machines can learn, which is the whole idea behind autonomous vehicles making split-second decisions about what to do next based on billions of possibilities and only one right answer.
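To make the learning mechanic concrete, here is a minimal sketch in Python using scikit-learn. Every feature and label below is invented for illustration; a real profiling engine would use far richer data, but the fit-then-predict cycle is the same idea.

```python
# A minimal sketch of "learning" from data, then predicting.
# All numbers here are made up for illustration only.
from sklearn.linear_model import LogisticRegression

# Each row: hypothetical crime-scene features encoded as numbers
# (say, hour of the incident, weapon type, location type).
X_train = [
    [2, 1, 0],
    [23, 0, 1],
    [3, 1, 0],
    [22, 0, 1],
]
# Labels: a hypothetical offender-profile category for each case.
y_train = [0, 1, 0, 1]

model = LogisticRegression()
model.fit(X_train, y_train)        # "learning": fitting patterns in the data

print(model.predict([[1, 1, 0]]))  # a prediction drawn from those patterns
```

The more vetted examples the model sees, the more reliable its predictions become; that is the “big data” effect described above.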

In my sci-fi episode mentioned above, Detective Guren describes a perpetrator produced by the AI known as HAPP-E. Keiji-T, forever the devil’s advocate, counters with this comment: “Data is just data. Someone who knows how a probability engine works could have adopted the characteristics necessary to produce this deduction.” In other words, if you know what the engine is trying to do, you could theoretically ‘teach’ the AI with false data to produce a false deduction.
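Keiji-T’s objection is what machine learning researchers call data poisoning. Here is a hedged toy illustration, reusing the scikit-learn setup above with invented features: plant enough crafted examples and the same evidence yields the opposite deduction.

```python
# Toy data-poisoning sketch. Features and labels are invented.
from sklearn.linear_model import LogisticRegression

clean_X = [[0, 0], [0, 1], [1, 0], [1, 1]]
clean_y = [0, 0, 1, 1]  # honest data: the first feature drives the label

model = LogisticRegression().fit(clean_X, clean_y)
print(model.predict([[1, 0]]))  # -> [1], the honest deduction

# An adversary who knows what the engine looks for plants
# contradictory examples until the evidence "means" something else.
poisoned_X = clean_X + [[1, 0]] * 10
poisoned_y = clean_y + [0] * 10

model = LogisticRegression().fit(poisoned_X, poisoned_y)
print(model.predict([[1, 0]]))  # -> [0], a false deduction taught by false data
```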

Episode 159. It seems fairly obvious.

I published Episode 159 on March 18, 2016. Then an interesting thing happened in the tech world. A few days later, Microsoft launched an AI chatbot called Tay (a millennial nickname for Taylor) designed to have conversations with millennials. The idea was that Tay would become as successful as its Chinese counterpart, XiaoIce, which has been around for four years and engages millions of young Chinese users in discussions of millennial angst. Tay ran on three platforms: Twitter, Kik, and GroupMe.

Then something went wrong. In less than 24 hours, Tay went from tweeting that “humans are super cool” to spouting full-blown Nazi rhetoric. Soon after Tay launched, the super-sketchy enclaves of 4chan and 8chan decided to get malicious and manipulate the Tay engine, feeding it racist and sexist invective. If you feed an AI enough garbage, it will ‘learn’ that garbage is the norm and begin to repeat it. Before Tay’s first day was over, Microsoft took it down, removed the offensive tweets, and apologized.
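That garbage-in, garbage-out dynamic can be reduced to a few lines. This is only a caricature, with a hypothetical learn/reply pair standing in for Tay’s far more sophisticated engine, but it shows how unvetted input becomes the norm.

```python
# Caricature of the Tay failure mode: a "chatbot" that learns
# only from what users feed it, with no vetting at all.
import random

corpus = []

def learn(message):
    """Ingest a user message verbatim; nothing is filtered or vetted."""
    corpus.append(message)

def reply():
    """Echo something 'learned'. Whatever dominates the corpus dominates the replies."""
    return random.choice(corpus) if corpus else "..."

learn("humans are super cool")
print(reply())  # polite, because that is all it knows

# A coordinated group floods it with garbage; garbage becomes the norm.
for _ in range(100):
    learn("<offensive message>")
print(reply())  # now almost certainly repeats the garbage
```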

Crazy talk.

Apparently Microsoft had considered that such a thing was possible but decided not to use filters (conversations to avoid, or canned answers to volatile subjects). Experts in the chatbot field were quick to criticize: “‘You absolutely do NOT let an algorithm mindlessly devour a whole bunch of data that you haven’t vetted even a little bit.’ In other words, Microsoft should have known better than to let Tay loose on the raw uncensored torrent of what Twitter could direct her way.”
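For what it’s worth, such a filter need not be elaborate. Here is a minimal sketch; the topic list and function names are my invention, not Microsoft’s, but the idea is to deflect volatile subjects with a canned answer and, crucially, never learn from them.

```python
# Sketch of the safeguard the critics describe: screen input against
# volatile topics before the bot learns from or engages with it.
VOLATILE_TOPICS = {"nazi", "genocide", "hitler"}  # hypothetical blocklist
CANNED_ANSWER = "I'd rather not talk about that."

def respond(message, learn, generate):
    """Route a message: deflect volatile input, learn only from vetted input."""
    if set(message.lower().split()) & VOLATILE_TOPICS:
        return CANNED_ANSWER   # deflect, and never add this to the corpus
    learn(message)             # only vetted input reaches the model
    return generate(message)

# Usage with stand-in callbacks:
print(respond("tell me about nazi history",
              learn=lambda m: None,
              generate=lambda m: "ok"))  # -> canned answer, nothing learned
```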

The tech site Ars Technica also probed the question of “…why Tay turned nasty when XiaoIce didn’t?” The assessment thus far is that China’s highly restrictive measures keep social media “ideologically appropriate” and under control. The censors will close your account for bad behavior.

So, what did we learn from this? AI, at least as it exists today, has no understanding. It has no morals or ethics unless you give it some. The next questions are: Who decides what is moral and ethical? Will it be the people (we saw what happened with that) or some other financial or political power? Maybe the problem is with the premise itself. What do you think?


Robots will be able to do almost anything, including what you do.

There seems to be a lot of talk these days about what our working future may look like. A few weeks ago I wrote about some of Faith Popcorn’s predictions. Quoting from the Popcorn slide deck,

“Work, as we know it, is dying. Careers and offices: Over. The robots are marching in, taking more and more skilled jobs. To keep humans from becoming totally obsolete, the government must intervene, incentivizing companies to keep people on the payroll. Otherwise, robots would job-eliminate them. For the class of highly-trained elite workers, however, things have never been better. Maneuvering from project to project, these free-agents thrive. Employers, eager to secure their human talent, lavish them with luxurious benefits and unprecedented flexibility. The gap between the Haves and Have-Nots has never been wider.”

Now, I consider Popcorn to be a marketing futurist; she’s in the business of helping brands. There’s nothing wrong with that, and I agree with almost all of her predictions. But she’s not the only one talking about the future of work. In a recent New York Times Sunday Book Review (forwarded to me by a colleague) of Rise of the Robots: Technology and the Threat of a Jobless Future, author Martin Ford pretty much agrees. According to the review, “Tasks that would seem to require a distinctively human capacity for nuance are increasingly assigned to algorithms, like the ones currently being introduced to grade essays on college exams.” Increasingly, devices like 3D printers or drones can do work that used to require a full-blown manufacturing plant, or do what was heretofore simply impossible. Ford’s book goes on to chronicle dozens of instances like this. The reviewer, Barbara Ehrenreich, states, “In ‘Rise of the Robots,’ Ford argues that a society based on luxury consumption by a tiny elite is not economically viable. More to the point, it is not biologically viable. Humans, unlike robots, need food, health care and the sense of usefulness often supplied by jobs or other forms of work.”

In another article, in Fast Company, Gwen Moran surveys a couple of PhDs, one from MIT and another who is executive director of the Society for Human Resource Management. The latter, Mark Schmit, agrees that there will be a disparity in the workforce: this winner/loser scenario predicts a widening wealth gap, Schmit says. “Workers will need to engage in lifelong education to remain on top of how job and career trends are shifting to remain viable in an ever-changing workplace,” he says. On the other end of the spectrum, some see the future as more promising. The aforementioned MIT prof, Erik Brynjolfsson, “…thinks that technology has the potential for ‘shared prosperity,’ giving us richer lives with more leisure time and freedom to do the types of work we like to do. But that’s going to require collaboration and a unified effort among developers, workers, governments, and other stakeholders…Machines could carry out tasks while programmed intelligence could act as our ‘digital agents’ in the creation and sharing of products and knowledge.”

I’ve been revisiting Stuart Candy’s PhD dissertation, The Futures of Everyday Life, and he surfaces a great quote from science fiction writer Warren Ellis, which itself was surfaced through Bruce Sterling’s State of the World Address at SXSW in 2006. It is,

“[T]here’s a middle distance between the complete collapse of infrastructure and some weird geek dream of electronically knowing where all your stuff is. Between apocalyptic politics and Nerd-vana, is the human dimension. How this stuff is taken on board, by smart people, at street level. … That’s where the story lies… in this spread of possible futures, and the people, on the ground, facing them. The story has to be about people trying to steer, or condemn other people, toward one future or another, using everything in their power. That’s a big story.”1

This is relevant for design, too, the topic of last week’s blog. It all ties into the future of the future, the stuff I research and blog about. It’s about speculation and design fiction and other things on the fringes of our thinking. The problem is that I don’t think enough people are thinking about it; it is still too fringe. What do people do after they read Martin Ford? Does it change anything? In a moment of original thinking, I penned the following thought and, as is usually the case, subsequently heard it stated in other words by other researchers:

If we could visit the future “in person,” how would it affect us upon our return? How vigorously would we engage our redefined present?

It is why we need more design fiction, the kind that shakes us up in the process.

Comments welcome.

1 http://www.warrenellis.com
