
Artificial intelligence isn’t really intelligence—yet. I hate to say I told you so.

 

Last week, we discovered a new side to AI. I don’t mean to gloat, but this potential pitfall seemed fairly obvious to me. Interestingly, the real-world event that triggered all the talk occurred within days of episode 159 of The Lightstream Chronicles. In my story, Keiji-T, a synthetic police investigator virtually indistinguishable from a human, questions the conclusions of an artificial intelligence engine called HAPP-E. The High Accuracy Perpetrator Profiling Engine is designed to assimilate all of the minutiae surrounding a criminal act and spit out a description of the perpetrator. Today, profiling is a human endeavor, especially useful in identifying difficult-to-catch offenders. Though the procedure is relatively new and goes by many different names, the American Psychological Association says,

“…these tactics share a common goal: to help investigators examine evidence from crime scenes and victim and witness reports to develop an offender description. The description can include psychological variables such as personality traits, psychopathologies and behavior patterns, as well as demographic variables such as age, race or geographic location. Investigators might use profiling to narrow down a field of suspects or figure out how to interrogate a suspect already in custody.”

This type of data is perfect for feeding into an AI, which uses neural networks and predictive algorithms to draw conclusions and recommend decisions. Of course, an AI can do it in seconds, whereas an FBI unit may take days, months, or even years. The way AI works, as I have reported many times before, is based on tremendous amounts of data: “With the advent of big data, the information going in only amplifies the veracity of the recommendations coming out.” In this way, machines can learn, which is the whole idea behind autonomous vehicles making split-second decisions about what to do next based on billions of possibilities and only one right answer.
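To make that concrete, here is a minimal sketch in Python of the learn-from-past-cases logic a profiling engine rests on. Everything in it is invented for illustration (HAPP-E is fiction, and real systems use far more sophisticated statistical models): it profiles a new crime by majority vote among the most similar historical cases.

    from collections import Counter

    # A toy "profiling engine": learn from labeled historical cases, then
    # profile a new case by majority vote among the k most similar ones.
    # All of the data below is invented for illustration.
    cases = [
        ({"night": True,  "weapon": "knife", "forced_entry": True},  "acquaintance"),
        ({"night": False, "weapon": "knife", "forced_entry": False}, "acquaintance"),
        ({"night": False, "weapon": "gun",   "forced_entry": True},  "stranger"),
        ({"night": True,  "weapon": "gun",   "forced_entry": True},  "stranger"),
    ]

    def similarity(a, b):
        # count how many crime-scene features two cases share
        return sum(1 for key in a if a.get(key) == b.get(key))

    def predict(new_case, k=3):
        ranked = sorted(cases, key=lambda c: similarity(c[0], new_case), reverse=True)
        votes = Counter(label for _, label in ranked[:k])
        return votes.most_common(1)[0][0]

    print(predict({"night": True, "weapon": "knife", "forced_entry": False}))
    # -> "acquaintance": the verdict is only as good as the cases fed in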

In my sci-fi episode mentioned above, Detective Guren describes a perpetrator profile produced by the AI known as HAPP-E. Keiji-T, forever the devil’s advocate, counters with this comment: “Data is just data. Someone who knows how a probability engine works could have adopted the characteristics necessary to produce this deduction.” In other words, if you know what the engine is trying to do, you could theoretically ‘teach’ the AI with false data to produce a false deduction.
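Continuing the hypothetical sketch above, Keiji-T’s objection is easy to demonstrate: an adversary who knows the voting rule can plant fabricated cases that match the target crime’s features but carry a false label, and the same engine now returns the wrong profile.

    # Poisoning the toy engine above: three planted cases that match the
    # query exactly, but carry a false label, now dominate the 3-nearest vote.
    fake = ({"night": True, "weapon": "knife", "forced_entry": False}, "stranger")
    cases.extend([fake] * 3)

    print(predict({"night": True, "weapon": "knife", "forced_entry": False}))
    # -> "stranger": same engine, same query, corrupted deduction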

Episode 159. It seems fairly obvious.

I published Episode 159 on March 18, 2016. Then an interesting thing happened in the tech world. A few days later, Microsoft launched an AI chatbot called Tay (a millennial nickname for Taylor) designed to have conversations with — millennials. The idea was that Tay would become as successful as Microsoft’s Chinese version, XiaoIce, which launched in 2014 and engages millions of young Chinese in discussions of millennial angst. Tay ran on three platforms: Twitter, Kik and GroupMe.

Then something went wrong. In less than 24 hours, Tay went from tweeting that “humans are super cool” to spewing full-blown Nazi rhetoric. Soon after Tay launched, the super-sketchy enclaves of 4chan and 8chan decided to get malicious and manipulate the Tay engine, feeding it racist and sexist invective. If you feed an AI enough garbage, it will ‘learn’ that garbage is the norm and begin to repeat it. Before Tay’s first day was over, Microsoft took it down, removed the offensive tweets, and apologized.
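Mechanically, that failure mode is easy to reproduce. Here is a deliberately naive sketch (nothing like Tay’s actual architecture, which Microsoft has not published) of a bot that learns its responses from whatever users say to it; whatever the crowd feeds it becomes its ‘normal’:

    import random
    from collections import defaultdict

    # A naive learn-by-parroting bot: store every reply users give to a
    # prompt, and later echo one back at random. No vetting whatsoever.
    memory = defaultdict(list)

    def learn(prompt, user_reply):
        memory[prompt.lower()].append(user_reply)

    def respond(prompt):
        replies = memory.get(prompt.lower())
        return random.choice(replies) if replies else "Tell me more."

    learn("what do you think of humans?", "humans are super cool")
    # a coordinated group floods the same prompt with garbage:
    for _ in range(50):
        learn("what do you think of humans?", "<racist slogan>")

    print(respond("what do you think of humans?"))  # almost certainly garbage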

Crazy talk.

Apparently, Microsoft had considered that such a thing was possible but decided not to use filters (lists of conversations to avoid, or canned answers to volatile subjects). Experts in the chatbot field were quick to criticize: “‘You absolutely do NOT let an algorithm mindlessly devour a whole bunch of data that you haven’t vetted even a little bit.’ In other words, Microsoft should have known better than to let Tay loose on the raw uncensored torrent of what Twitter could direct her way.”
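The filters the critics have in mind can be as simple as a blocklist wrapped around the toy bot sketched above: volatile topics draw a canned answer and never reach the learning step. The topics and wording here are my own illustration, not Microsoft’s.

    # One version of the skipped filters, wrapped around the toy bot above:
    # volatile topics get a canned answer and are never learned from.
    BLOCKED = {"nazi", "hitler", "genocide"}   # illustrative blocklist only
    CANNED = "I'd rather not talk about that."

    def safe_learn(prompt, user_reply):
        text = (prompt + " " + user_reply).lower()
        if not any(word in text for word in BLOCKED):
            learn(prompt, user_reply)   # only vetted input reaches the model

    def safe_respond(prompt):
        if any(word in prompt.lower() for word in BLOCKED):
            return CANNED               # canned answer: no learning, no echo
        return respond(prompt)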

The tech site Ars Technica also probed the question of “…why Tay turned nasty when XiaoIce didn’t?” The assessment thus far is that China’s highly restrictive measures keep social media “ideologically appropriate” and under control. The censors will close your account for bad behavior.

So, what did we learn from this? AI, at least as it exists today, has no understanding. It has no morals or ethics unless you give it some. The next questions are: Who decides what is moral and ethical? Will it be the people (we saw what happened with that) or some other financial or political power? Maybe the problem is with the premise itself. What do you think?


On better humans and bad bots.

News of breaking future technologies, the stuff at the crux of my research, accumulates daily, and this week is no different. Of note, Zoltan Istvan is (another) 2016 US presidential candidate, but this time for the Transhumanist Party. Transhumanism “(abbreviated as H+ or h+) is an international cultural and intellectual movement with an eventual goal of fundamentally transforming the human condition by developing and making widely available technologies to greatly enhance human intellectual, physical, and psychological capacities.” 1 For those of you who didn’t know, living forever is job one for the “movement.” Mr. Istvan is not likely to be in the debates, but you can follow him and the rest of H+ at humanity+.org. I’ll reserve comment on this.

On another front, for those who think that once we get this human thing down right, technology will save us and mischief will cease, there is this item from WIRED UK. A couple of researchers at Google (that’s OK, you can trust them) have “created an artificial intelligence that developed its responses based on transcripts from an IT helpdesk chat service and a database of movie scripts.” This AI is called a chatbot. Chatbots are computer programs designed to talk to you. You can try one out here.

According to WIRED’s James Temperton,

“The [Google] system used a neural network — a collection of machines arranged to mimic a human brain — to create its unnervingly human responses. Such networks use a huge amount of computing power and Google’s latest research shows their potential for improving how machines hold conversations.”

Apparently, it was the addition of the movie scripts that made the bot more contentious. Consider this exchange on morality:

“Human: I really like our discussion on morality and ethics …

Machine: And how I’m not in the mood for a philosophical debate.

Human: What do you like to talk about?

Machine: Nothing.”
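The real Google system was a neural sequence-to-sequence model trained on those corpora, but the way a response corpus shapes a bot’s personality can be shown with a much cruder retrieval sketch. The pairs below are invented (two adapted from the exchange above): helpdesk pairs give helpful answers, movie-script pairs give moody ones.

    import difflib

    # A crude stand-in for "learning responses from transcripts": store
    # (line, reply) pairs and answer a prompt with the reply attached to
    # the closest stored line. The real system was neural, not retrieval.
    transcripts = [
        ("my screen is frozen",             "Have you tried restarting it?"),
        ("i cannot log in",                 "Please reset your password."),
        ("i really like our discussion",    "I'm not in the mood for a philosophical debate."),
        ("what do you like to talk about?", "Nothing."),
    ]

    def reply(prompt):
        lines = [line for line, _ in transcripts]
        best = difflib.get_close_matches(prompt.lower(), lines, n=1, cutoff=0.0)[0]
        return dict(transcripts)[best]

    print(reply("My laptop screen is frozen"))       # helpdesk voice
    print(reply("What do you like to talk about?"))  # movie-script voice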

Fun with programming. All of this points to the old adage, “Garbage in, garbage out.” In The Lightstream Chronicles, the future version of this mischief is called twisting. Basically, you take a perfectly good, well-behaved synthetic human and put in some junk. The change in programming is generally used to make these otherwise helpful synths do criminal things.

The logo says it all.

This tendency we have as human beings to twist good ideas into bad ones is nothing new, and today’s headlines are evidence of it. We print guns with 3D printers, use drones to vandalize, cameras to spy, and computers to hack. Perhaps that is what Humanity+ has in mind: make humanity more technologically advanced, more like a… machine, then reprogram out the humanness (which just leads to no good). What could possibly go wrong with that?

 

1 https://en.wikipedia.org/wiki/Transhumanism