Last week I tipped you off to Amy Webb, a voracious design futurist with tons of tidbits on the latest technologies that are affecting not only design but our everyday life. I saved a real whopper for today. I won’t go into her mention of CRISPR-Cas9 since I covered that a few months ago without Amy’s help, but here’s one that I found more than interesting.
Chinese genomic scientists have created some designer pigs. They are called ‘micro pigs,’ and the lab is taking orders at $1,600 a pop for the little critters. It turns out that pigs are very close—genetically—to humans, but the big fellows were cumbersome to study (and probably too expensive to feed), so the scientists bred a smaller version by turning off the growth gene in their DNA. Voilà: micropigs. Plus, you can order them in different colors (they can do that, too). Now, of course, this is all to further research, and all proceeds will go to more research to help fight disease in humans, at least until they sell the patent on micropigs to the highest bidder.
So now we have genetic engineering to make a micropig fashion statement. Wait a minute. We could use genetic engineering for human fashion statements, too. After all, it’s a basic human right to be whatever color we want. Oh, no. We would never do that.
Next up is Google’s new email reply feature, coming soon to your Gmail account.
A design foundations student recently asked my advice on a writing assignment: something that might be affected by, or affect, design in the future. I told him to look up predictive algorithms. I have long contended that logic alone indicates that predictive algorithms, taking existing data and applying constraints, can be used to solve a problem, answer a question, or design something. With the advent of big data, the information going in only improves the accuracy of the recommendations coming out. In case you haven’t noticed, big data is, well, big.
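In case the phrase sounds abstract, the core loop is small enough to sketch. This is my own toy illustration, not any particular product’s algorithm: fit a model to existing data, then apply a constraint to what it recommends.

```python
# Toy sketch of a predictive algorithm: fit a trend to existing data,
# then apply a design constraint to the prediction. Illustrative only.

def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a*x + b."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    b = my - a * mx
    return a, b

def predict(xs, ys, x_new, lo=None, hi=None):
    """Predict y at x_new, constrained to the range [lo, hi]."""
    a, b = fit_line(xs, ys)
    y = a * x_new + b
    if lo is not None:
        y = max(lo, y)
    if hi is not None:
        y = min(hi, y)
    return y

# Historical data in, a constrained recommendation out. More (clean)
# data going in sharpens the estimate coming out.
print(predict([1, 2, 3, 4], [2.1, 3.9, 6.0, 8.1], 5, hi=9.5))  # → 9.5
```

The same shape, scaled up by several orders of magnitude of data and model complexity, is what the rest of this post is worried about.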
One of the design practitioners I follow is Amy Webb. Amy has been thinking about this longer than I have, but clearly we think alike, and we are looking at the same things. I don’t know if she is as alarmed as I am; we’ve never spoken. In her recent newsletter, her focus was on, what else, predictive algorithms. Amy alerted me to a whole trove of new developments. There were so many that I have decided to make this a series of blogs, starting with this one.
Keep in mind that, as I write this, these technologies are in their infancy. If they already impress you, then the future will likely blow you away. The first was something known as Project Dreamcatcher from Autodesk. These are the people who make Maya, AutoCAD, and much of the software that designers, animators, engineers, and architects use every day. According to the website:
“The Dreamcatcher system allows designers to input specific design objectives, including functional requirements, material type, manufacturing method, performance criteria, and cost restrictions. Loaded with design requirements, the system then searches a procedurally synthesized design space to evaluate a vast number of generated designs for satisfying the design requirements. The resulting design alternatives are then presented back to the user, along with the performance data of each solution, in the context of the entire design solution space.”
Another item on Amy’s list was Google’s recently announced RankBrain, the company’s next venture into context-aware platforms, using advances in predictive algorithms to make what you see scarily tailored to who you are. Here is Amy, from a 2012 article (this is old news, folks):
“With the adoption of the Siri application, iOS 5 mobile phones (Apple only) can now compare location, interests, intentions, schedule, friends, history, likes, dislikes and more to serve content and answers to questions.”
In other words, there’s a lot more going on than you think when Siri answers a question for you. RankBrain takes this to the next level, according to Bloomberg, which broke the story:
“For the past few months, a “very large fraction” of the millions of queries a second that people type into the company’s search engine have been interpreted by an artificial intelligence system, nicknamed RankBrain…’Machine learning is a core transformative way by which we are rethinking everything we are doing,’ said Google’s Chief Executive Officer Sundar Pichai on the company’s earnings call last week.”
By the way, so far most AI predicts much more accurately than we humans do.
If this is moving too fast for you, next week, thanks to Amy, I’ll highlight some applications of AI that will have you squirming.
Some people tell me that I am a pessimist when it comes to technology. Maybe, but part of my job is troubleshooting the future before the future requires troubleshooting. As I have said many times before, I think there are some amazing technologies out there that sound promising and exciting. One that caught my attention this week is the voice interface operating system. If you saw the film Her, then you know of that which I speak. For many Silicon Valley entrepreneurs, it has been the Holy Grail for some time. A recent WIRED magazine article by David Pierce highlights some of the advancements that are on the cusp of being part of our everyday lives.
Pierce tells how, in 1979, during a visit to Xerox PARC, Steve Jobs was blown away by something called a graphical user interface (GUI). Instantly, Jobs knew that the point, click, and drag interface was for the masses.
One of the scientists in that Xerox PARC group was a guy named Ron Kaplan who tells Pierce that, “‘The GUI has topped out,’ Kaplan says. ‘It’s so overloaded now.’”
I guess I can relate. Certainly it is a challenge to remember the obscure keyboard commands for every program you use. One of my mainstays, Autodesk Maya, has so many keyboard options that there is a whole separate interface of hotkeys and menus accessed by (another) keyboard command. Rarely, except for basics like cut, paste, and delete, are these commands or menus the same from one piece of software to the next.
If there were a voice interface that could navigate these for you (perhaps only when you’re stumped), it would be a great addition. But the digital entrepreneurs racing in this direction, according to Pierce, are going much further. They are looking “to create the best voice-based artificial-intelligence assistant in the world.”
The article mentions one such app called Hound. It not only answers questions faster than Siri but does so with remarkably less explicit information. For example, you could ask two different questions about two different places and then ask, “How many miles between those two?” It reads between the lines and fills in the gaps. If it could see, I’m guessing it could read a graphic novel and know what’s going on.
Apparently there are quite a few well-funded efforts racing in this direction. As Pierce says,
“It’s a classic story of technological convergence: Advances in processing power, speech recognition, mobile connectivity, cloud computing, and neural networks have all surged to a critical mass at roughly the same time. These tools are finally good enough, cheap enough, and accessible enough to make the conversational interface real—and ubiquitous.”
That’s just one of the reasons why I think Kurzweil is probably right in his Law of Accelerating Returns. (You can read about it on Kurzweil’s site or in a previous blog, one of many.) Convergence is the way technology leaps forward. Supporting technologies enable formerly impossible things to become suddenly possible.
Pierce goes on to talk about Alexa, the assistant inside a device known as the Amazon Echo, which uses something called the Alexa Voice Service. The Echo is a black tube with flashing blue LEDs, designed to sit in some central location in your space. There, it answers questions and assists in your everyday life. Pierce got to live with the beta version.
“In just the seven months between its initial beta launch and its public release in 2015, Alexa went from cute but infuriating to genuinely, consistently useful. I got to know it, and it got to know me… This gets at a deeper truth about conversational tech: You only discover its capabilities in the course of a personal relationship with it.”
Hence, part of the developers’ challenge is making an engaging, likable, and maybe even charming assistant.
But Pierce closes the article with the realization that such an agent is
“…only fully useful when it’s everywhere, when it can get to know you in multiple contexts—learning your habits, your likes and dislikes, your routine and schedule. The way to get there is to have your AI colonize as many apps and devices as possible.”
So this technology is coming, and it is probably nearly here. It may well be remarkable and rewarding. I wouldn’t be doing my job, however, if I didn’t ask about the ripples and behaviors that will inevitably grow up around it. What will we give up? What will we lose before we realize it is gone? It is marvelous, but like its smartphone cousin (or grandparent), it will change us. As we rush to embrace it, as we most likely will, we should think about this, too.
In episode 134, the Techman is paralyzed, lifted off the ground and thumped back to the floor. Whether it’s electrostatic, electromagnetic or superconductor electricity reduced to a hand-held device, the concept seems valid, especially 144 years from now. Part of my challenge is to make this design fiction logical by pulling threads of current research and technology to extrapolate possible futures. Mind you, it’s not a prediction, but a possibility. Here is my thinking:
Keiji’s weapon assumes that at least four technologies come together sometime in the next 14 decades. Safe bet? To start with, the beam has to penetrate the door and significantly stun the subject. This idea is not that far-fetched; weapons like this are already on the drawing board. For instance, the military is currently working on something called laser-guided directed-energy weapons, which work like “artificial lightning” to disable human targets. According to Defense Update,
“Laser-Induced Plasma Channel (LIPC) technology was developed by Ionatron to channel electrical energy through the air at the target. The interaction of the air and laser light at specific wavelength, causes light to break into filaments, which form a plasma channel that conducts the energy like a virtual wire. This technology can be adjusted for non-lethal or lethal use. “
The imaginative leap here is that the beam can penetrate the wall to find its target. Given the other advancements, I feel reasonably safe stretching on this one.
Next, you have to get the subject off the ground. Lifting a 200-pound human would require at least two technologies assisted by a third. First is a levitating superconductor. A levitating superconductor uses electric current from a superconductor to produce magnetic forces that could counter the force of gravity. According to physics.org:
“Like frogs, humans are about two-thirds water, so if you had a big enough Bitter electromagnet, there’s no reason why a human couldn’t be levitated diamagnetically. None of the frogs that have taken part in the diamagnetic levitation experiments have experienced any adverse effects, which bodes well for any future human guinea pigs.”
The other ingredient is a highly powerful magnet. With a few decades of refinement and miniaturization, it’s conceivable that such a superconductor could produce magnetic forces strong enough to counter the force of gravity.
The final component would be the power source small enough to fit inside the weapon and carrying enough juice to generate the plasma, and magnetic field for at least fifteen seconds. Today, you can buy a million-volt stun device on Amazon.com for around $50 and thyristor semiconductor technology could help ramp up the power surge necessary to sustain the arc. Obviously, I’m not an engineer, but if you are, please feel free to chime in.
I’ll be the first to acknowledge that my blog last week was a bit depressing. However, if I thought the situation was hopeless, I wouldn’t be doing this in the first place. I believe we have to acknowledge our uncanny ability to foul things up and, as best we can, design the gates and barriers into new technology to help prevent its abuse. And even though it may seem that way sometimes, I am not a technology pessimist or purely dystopian futurist. In truth, I’m tremendously excited about a plethora of new technologies and what they promise for the future.
Also last week (by way of asiaone.com), Dr. Michio Kaku, speaking in Singapore, served up this future within the next 50 years:
“Imagine buying things just by blinking. Imagine doctors making an artificial heart for you within 20 hours. Imagine a world where garbage costs more than computer chips.”
Personally, I believe he’s too conservative; I see it happening much sooner. Kaku is one of a handful of famous futurists, and his “predictions” have a lot of science behind them. So who am I to argue with him? He’s a brilliant scientist, prolific author, and educator. Most futurists or forecasters will be the first to tell you that their futures are not predictions but rather possible futures. According to forecaster Paul Saffo, “The goal of forecasting is not to predict the future but to tell you what you need to know to take meaningful action in the present.”1
According to Saffo “… little is certain, nothing is preordained, and what we do in the present affects how events unfold, often in significant, unexpected ways.”
Though my work is design fiction, I agree with Saffo. We both look at the future the same way. The objective behind my fictions is to jar us into thinking about the future so that it doesn’t surprise us. The more that our global citizenry thinks about the future and how it may impact them, the more likely that they will get involved. At least that is my hope. Hence, it is why I look for design fictions that will break out of the academy or the gallery show and seep into popular culture. The future needs to be an inclusive conversation.
Of course, the future is a broad topic: it impacts everything and everyone. So much of what we take for granted today could be entirely different—possibly even unrecognizable—tomorrow. Food, medicine, commerce, communication, privacy, security, entertainment, transportation, education, and jobs are just a few of the enormously important areas ripe for potentially radical change. Saffo and Kaku don’t know what the future will bring any more than I do. We just look at what it could bring. I tend to approach it from the perspective of “What could go wrong?” Others take a more balanced view, and some look only at the positives. It is these perspectives that create the dialog and debate, which is what they are supposed to do. We also have to be careful that we don’t treat these opinions as fact. Ray Kurzweil sees the equivalent of 20,000 years of change packed into the 21st century. Kaku (from the article mentioned above) sees computers being relegated to the
“‘dull, dangerous and dirty’ jobs that are repetitive, such as punching in data, assembling cars and any activity involving middlemen who do not contribute insights, analyses or gossip.’ To be employable, he stresses, you now have to excel in two areas: common sense and pattern recognition. Professionals such as doctors, lawyers and engineers who make value judgments will continue to thrive, as will gardeners, policemen, construction workers and garbage collectors.”
Looks like Michio and I disagree again. The whole idea behind artificial intelligence is predictive algorithms that use big data to learn. Machine learning programs detect patterns in data and adjust program actions accordingly.2 The idea of diagnosing illnesses, advising humans on potential behaviors, analyzing soil, site conditions, and limitations, or even collecting trash is well within the realm of artificial intelligence. I see these jobs as every bit as vulnerable as those of assembly-line workers.
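That one sentence is the whole mechanism in miniature. Here is a deliberately tiny sketch of my own (invented labels and data, not any real diagnostic system) of a program detecting patterns in data and adjusting its action accordingly:

```python
# Minimal sketch of "detect patterns in data, adjust actions accordingly":
# a nearest-centroid classifier learns the pattern of each class from
# labeled examples, then routes new cases. Names and data are invented.

def centroid(rows):
    """Mean feature vector of a list of feature vectors."""
    n = len(rows)
    return [sum(col) / n for col in zip(*rows)]

def train(labeled):
    """labeled: dict of class -> list of feature vectors."""
    return {cls: centroid(rows) for cls, rows in labeled.items()}

def classify(model, features):
    """Pick the class whose learned pattern is closest."""
    def dist(cls):
        return sum((a - b) ** 2 for a, b in zip(features, model[cls]))
    return min(model, key=dist)

# A toy "diagnosis" from two symptom scores. The program's action (the
# label it returns) shifts as the learned centroids shift with new data.
model = train({
    "flu":  [[0.9, 0.8], [0.8, 0.9]],
    "cold": [[0.2, 0.3], [0.3, 0.1]],
})
print(classify(model, [0.85, 0.7]))  # → flu
```

A real diagnostic system would use vastly more features and data, but the vulnerability of pattern-matching jobs follows from exactly this loop: the more labeled examples, the better the routing.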
That, of course, is all part of the discussion—that we need to have.
1 Harvard Business Review | July–August 2007 | hbr.org
I promised a drone update this week, but by now, it is probably already old news. It is a safe bet there are probably a few thousand more drones than last week. Hence, I’m going to shift to a topic that I think is moving even faster than our clogged airspace.
And now for an AI update. I’ve blogged previously about Kurzweil’s Law of Accelerating Returns, and the evidence is mounting every day that he’s probably right. The rate at which artificial intelligence is advancing is beginning to match his curve nicely. A recent article on the Txchnologist website demonstrates how an AI system called Kulitta is composing jazz, classical, new age, and eclectic mixes that are difficult to tell from human compositions. You can listen to an example here. Not bad, actually. Sophisticated AI creations like this underscore the realization that we can no longer think of robotics as clunky mechanized brutes. AI can create. Even though it studies an archive of man-made creations, the resulting work is unique.
First, it learns from a corpus of existing compositions. Then it generates an abstract musical structure. Next, it populates this structure with chords. Finally, it massages the structure and notes into a specific musical framework. In just a few seconds, out pops a musical piece that nobody has ever heard before.
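Those four stages can be caricatured in a few lines of code. To be clear, this is my own toy reconstruction of the pipeline the article describes, not Kulitta’s actual algorithm, and the corpus here is invented:

```python
import random

# Toy caricature of the four stages above: learn from a corpus, generate
# an abstract structure, populate it with chords, render into a concrete
# framework. Not Kulitta's real algorithm; corpus is invented.

CORPUS = ["I IV V I", "I V vi IV", "ii V I I"]  # toy chord progressions

def learn(corpus):
    """Stage 1: learn which chord tends to follow which."""
    follows = {}
    for piece in corpus:
        chords = piece.split()
        for a, b in zip(chords, chords[1:]):
            follows.setdefault(a, []).append(b)
    return follows

def abstract_structure(length):
    """Stage 2: an abstract form, here just a phrase skeleton."""
    return ["slot"] * length

def populate(structure, follows, rng):
    """Stage 3: fill the skeleton using the learned transitions."""
    chords = ["I"]
    for _ in structure[1:]:
        chords.append(rng.choice(follows.get(chords[-1], ["I"])))
    return chords

def render(chords, key="C"):
    """Stage 4: massage into a concrete framework (a printable score)."""
    return f"key={key}: " + " - ".join(chords)

rng = random.Random(7)  # seeded, so the "new" piece is reproducible
print(render(populate(abstract_structure(4), learn(CORPUS), rng)))
```

Each run with a different seed pops out a progression nobody wrote, yet one that statistically resembles the corpus, which is the unsettling part.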
Kulitta’s creator, Donya Quick, says that this will not put composers out of a job; it will help them do their job better. She doesn’t say how, exactly.
If even trained ears can’t always tell the difference, what does that mean for the masses? When we can load the “universal composer” app onto our phone and have a symphony written for ourselves, how will this serve the interests of musicians and authors?
The article continues:
Kulitta joins a growing list of programs that can produce artistic works. Such projects have reached a critical mass–last month Dartmouth College computational scientists announced they would hold a series of contests. They have put a call out seeking artificial intelligence algorithms that produce “human-quality” short stories, sonnets and dance music. These will be pitted against compositions made by humans to see if people can tell the difference.
The larger question to me is this: “When it all sounds wonderful or reads like poetry, will it make any difference to us who created it?”
Sadly, I think not. The sweat and blood that composers and artists pour into their work could be a thing of the past. If we see this in the fine arts, then it seems an inevitable consequence for design as well. Once an AI learns the characters, behaviors, and personalities of the characters in The Lightstream Chronicles, it could create new episodes without me. Taking characters and settings that already exist as CG constructs, it’s not a stretch that it would be able to generate the wireframes, render the images, and lay out the panels.
Would this app help me in my work? It could probably do it in a fraction of the time that it would take me, but could I honestly say it’s mine?
When art and music are all so easily reconstructed and perfect, I wonder if we will miss the flaw. Will we miss that human scratch on the surface of perfection, the thing that reminds us that we are human?
There is probably an algorithm for that, too. Just go to settings > humanness and use the slider.
Last week I talked about how the South Koreans have developed a .50-caliber-toting, nearly autonomous weapon system and have sold a few dozen around the world. This week I feel obligated to finish up on my promise of the drone with a pistol. I discovered this in a WIRED article, a little tongue-in-cheek piece that analyzed a YouTube video and concluded that the pistol-packing drone is probably real. I can’t think of anyone who doesn’t believe this is a really bad idea, including the author of the piece. Nevertheless, if we were to brainstorm a list of unintended consequences of DIY drone technology, the list, after a few minutes, would be a long one.
This week FastCo reported that NASA held a little get-together with about 1,000 invited guests from the drone industry to talk about a plan to manage the traffic when, as the agency believes, “every home will have a drone, and every home will serve as an airport at some point in the future.” NASA’s plan takes things slowly. Still, the agency predicts that we will be able to get our packages from Amazon and borrow a cup of sugar from Aunt Gladys down the street, even in populated areas, by 2019.
Someone taking action is good news as we work to fix another poorly conceived technology that quickly went rogue. Unfortunately, it does nothing about the guy who wants to shoot down the Amazon drone for sport (or anyone/anything else for that matter).
On the topic of bad ideas, this week The Future of Life Institute, a research organization out of Boston, issued an open letter warning the world that autonomous weapons powered by artificial intelligence (AI) are imminent. The reasonable concern here is that a computer will do the kill-or-not-kill, bomb-or-not-bomb thinking, without a human fail-safe. Here’s an excerpt from the letter:
“Unlike nuclear weapons, they require no costly or hard-to-obtain raw materials, so they will become ubiquitous and cheap for all significant military powers to mass-produce. It will only be a matter of time until they appear on the black market and in the hands of terrorists, dictators wishing to better control their populace, warlords wishing to perpetrate ethnic cleansing, etc. Autonomous weapons are ideal for tasks such as assassinations, destabilizing nations, subduing populations and selectively killing a particular ethnic group. We therefore believe that a military AI arms race would not be beneficial for humanity. There are many ways in which AI can make battlefields safer for humans, especially civilians, without creating new tools for killing people.” [Emphasis mine.]
The letter is short. You should read it. For once we have an example of those smart people I alluded to last week, the ones with compassion and vision. For virtually every “promising” new technology (from the seemingly good to the undeniably dangerous), we need people who can foresee the unintended consequences of one-sided promises. Designers, scientists, and engineers are prime candidates to look into the future and wave these red flags. Then the rest of the world needs to pay attention.
Once again, however, the technology is here, and whether it is legal or illegal, banned or not banned, the cat is out of the bag. It is kind of like a nuclear explosion: some things you just can’t take back.
Talk of robot takeovers is all the rage right now.
I’m good with this, because the evidence is out there that robots will continue to get smarter and smarter, while the human condition, being what it is, means we will continue to do stupid s**t. Here are some examples from the news this week.
1. The BBC reported this week that South Korea has deployed something called the Super aEgis II, a .50-caliber robotic machine gun that knows who is an enemy and who isn’t. At least that’s the plan. The company that builds and sells the Super aEgis is DoDAAM. Maybe that is short for “do damage.” The BBC astutely notes,
“Science fiction writer Isaac Asimov’s First Law of Robotics, that ‘a robot may not injure a human being or, through inaction, allow a human being to come to harm’, looks like it will soon be broken.”
Asimov was more than a great science-fiction writer; he was a Class A futurist. He clearly saw the potential for us to create robots smarter and more powerful than we are, and he figured there should be some rules. Asimov used the kind of foresight that responsible scientists, technologists, and designers should be using for everything we create. As the article continues, Simon Parkin of the BBC quotes Yangchan Song, DoDAAM’s managing director of strategy planning:
“Automated weapons will be the future. We were right. The evolution has been quick. We’ve already moved from remote control combat devices, to what we are approaching now: smart devices that are able to make their own decisions.”
Or in the words of songwriter Donald Fagen,
“A just machine to make big decisions
Programmed by fellows with compassion and vision…1”
Relax. The world is full of these fellows. Right now the weapon/robot is linked to a human who gives the OK to fire, and all customers who have purchased the 30 units sold thus far have opted for the human/robot interface. But the company admits,
“If someone came to us wanting a turret that did not have the current safeguards we would, of course, advise them otherwise, and highlight the potential issues,” says Park. “But they will ultimately decide what they want. And we develop to customer specification.”
They are currently working on the technology that will help their machine make the right decision on its own, but the article cites several academics and researchers who see red flags waving. Most concur that teaching a robot right from wrong is no easy task. The complexity is compounded because the fellows doing the programming don’t always agree on these issues.
Last week I wrote about Google’s self-driving car. Of course, this robot has to make tough decisions too. It may one day have to decide whether to hit the suddenly appearing baby carriage, the kid on the bike, or just crash the vehicle. In fact, Parkin’s article brings Google into the picture as well, quoting Colin Allen,
“Google admits that one of the hardest problems for their programming is how an automated car should behave at a four-way stop sign…”
Humans don’t do such a good job at that either. And therein lies my problem with all of this: if the humans programming these machines are still wrestling with what is ethically right or wrong, can a robot be expected to do better? Some think so. Over at DoDAAM,
“Ultimately, we would probably like a machine with a very sound basis to be able to learn for itself, and maybe even exceed our abilities to reason morally.”
Based on what?
Next week: Drones with pistols.
1. Donald Fagen, “I.G.Y.,” from the album The Nightfly, 1982.
Throughout the course of the week, usually on a daily basis, I collect articles, news blurbs, and what I call “signs from the future.” Mostly they fall into categories such as design fiction, technology, society, future, theology, and philosophy. I use this content sometimes for this blog, possibly for a lecture, but most often for additional research as part of the scholarly papers and presentations that are a matter of course for a professor. I have to weigh what goes into the blog, because most of these topics could easily become full-blown papers. Of course, the thing with scholarly writing is that most publications demand exclusivity on publishing your ideas. Essentially, that means it becomes difficult to repurpose anything I write here for something with more gravitas. One of the subjects of growing interest to me is Google. Not the search engine, per se, but rather the technological mega-corp. It has the potential to be just such a paper, so even though there is a lot to say, I’m going to land on only a few key points.
A ubiquitous giant in the world of the Internet, Google has some of the most powerful algorithms, stores your most personal information, and is working on many of the most advanced technologies in the world. The company tries very hard to be soft-spoken and low-key, but that belies its enormous power.
Most of us would agree that technology has provided some marvelous benefits to society, especially in the realms of medicine, safety, education, and other socially beneficial applications. Things like artificial knees, cochlear implants, air bags (when they don’t accidentally kill you), and instantaneous access to the world’s libraries have made life-changing improvements. Needless to say, especially if you have read my blog for any amount of time, technology also has a downside. We may see greater yields from our agricultural efforts, but technological advancements also pump needless hormones into the populace, create sketchy GMO foodstuffs, and manipulate farmers into planting them. We all know the problems associated with automobile emissions, atomic energy, chemotherapy, and texting while driving. These problems are the obvious stuff. What is perhaps more sinister are the technologies we adopt that work quietly in the background to change us. Most of them we are unaware of until, one day, we are almost surprised to see how we have changed, and maybe we don’t like it.

Google strikes me as a potential contributor in this latter arena. A recent article from The Guardian, entitled “Where is Google Taking Us?”, looks at some of their most altruistic technologies (the ones they allowed the author to see). The author, Tim Adams, brought forward some interesting quotes from key players at Google. When discussing how Google would spend some $62 billion in cash that it had amassed, Larry Page, one of the company’s co-founders, asked,
“How do we use all these resources… and have a much more positive impact on the world?”
There’s nothing wrong with that question. It’s the kind of question that you would want a billionaire asking. My question is, “What does positive mean, and who decides what is and what isn’t?” In this case, it’s Google. The next quote comes from Sundar Pichai. With so many possibilities that this kind of wealth affords, Adams asked how they stay focused on what to do next.
“’Focus on the user and all else follows…We call it the toothbrush test,’ Pichai says, ‘we want to concentrate our efforts on things that billions of people use on a daily basis.’”
The statement sounds like savvy marketing, but he is also talking about the most innate aspects of our everyday behavior. So that I don’t turn this into an academic paper, here is one more quote. This time the author is talking to Dmitri Dolgov, principal engineer for Google’s self-driving cars. For the whole idea to work, that is, for the car to react like a human would, only better, the car has to think.
“Our maps have information stored and as the car drives around it builds up another local map with its sensors and aligns one to the other – that gives us a location accuracy of a centimetre or two. Beyond that, we are making huge numbers of probabilistic calculations every second.”
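The first part of that quote, aligning a local sensor map to a stored map, can be illustrated with a one-dimensional toy of my own devising (Google’s pipeline is, of course, vastly more complex): slide the scan along the stored map and keep the offset that fits best.

```python
# Toy map alignment for localization: slide a short sensor scan along
# a stored map and pick the offset with the smallest squared error.
# A one-dimensional simplification, not Google's actual method.

def align(stored, scan):
    """Return (best_offset, error) for scan against the stored map."""
    best = None
    for offset in range(len(stored) - len(scan) + 1):
        err = sum((stored[offset + i] - s) ** 2
                  for i, s in enumerate(scan))
        if best is None or err < best[1]:
            best = (offset, err)
    return best

# A stored landmark profile along a street, and a noisy local scan.
stored_map = [0.0, 0.1, 0.9, 1.0, 0.2, 0.0, 0.8, 0.1]
sensor_scan = [0.85, 1.05, 0.15]

offset, err = align(stored_map, sensor_scan)
print(offset)  # → 2: the car concludes where it is on the stored map
```

Repeat something like this many times a second, weight each candidate by its error, and you have the flavor of the “huge numbers of probabilistic calculations” Dolgov mentions.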
It’s the last line that we might want to ponder. Predictive algorithms are what artificial intelligence is all about, the kind of technology that plugs in to a whole host of different applications, from predicting your behavior to predicting your abilities. If we don’t want to have to remember to check the oil, there is a light that reminds us. If we don’t want to have to remember somebody’s name, there is a facial recognition algorithm to remember it for us. If my wearable detects that I am stressed, it can remind me to take a deep breath. If I am out for a walk, maybe something should mention all the things I could buy while I’m out (as well as what I am out of).
Here’s what I think about. It seems to me that we are amassing two lists: the things we don’t want to think about, and the things we do. Folks like Google are adding things to Column A, and it seems to be getting longer all the time. My concern is whether we will have anything left in Column B.
News of breaking future technologies, the stuff at the crux of my research, accumulates on a daily basis, and this week is no different. Of note, Zoltan Istvan is (another) 2016 US presidential candidate, but this time for the Transhumanist Party. Transhumanism “(abbreviated as H+ or h+) is an international cultural and intellectual movement with an eventual goal of fundamentally transforming the human condition by developing and making widely available technologies to greatly enhance human intellectual, physical, and psychological capacities.”1 For those of you who didn’t know: living forever is job one for the “movement.” Mr. Istvan is not likely to be in the debates, but you can follow him and the rest of H+ at humanity+.org. I’ll reserve comment on this.
On another front, for those who think that once we get this human thing down right, technology will save us and mischief will cease, there is this item from WIRED UK. A couple of researchers at Google (that’s OK, you can trust them) have “created an artificial intelligence that developed its responses based on transcripts from an IT helpdesk chat service and a database of movie scripts.” This AI is called a chatbot. Chatbots are computer programs designed to talk to you. You can try one out here.
According to WIRED’s James Temperton,
“The [Google] system used a neural network — a collection of machines arranged to mimic a human brain — to create its unnervingly human responses. Such networks use a huge amount of computing power and Google’s latest research shows their potential for improving how machines hold conversations.”
Apparently, it was the addition of the movie scripts that made the bot more contentious. Consider this exchange on morality:
“Human: I really like our discussion on morality and ethics …
Machine: And how I’m not in the mood for a philosophical debate.
Human: What do you like to talk about?
Fun with programming. All of this points to the old adage, “Junk in, junk out.” In The Lightstream Chronicles, the future version of this mischief is called twisting. Basically, you take a perfectly good, well-behaved synthetic human and put in some junk. The change in programming is generally used to make these otherwise helpful synths do criminal things.
This human tendency to twist good ideas into bad ones is nothing new, and today’s headlines are evidence of it. We print guns with 3D printers, use drones to vandalize, cameras to spy, and computers to hack. Perhaps that is what Humanity+ has in mind: make humanity more technologically advanced, more like a… machine, then reprogram the humanness (which just leads to no good) out. What could possibly go wrong with that?