The genius panel has some serious concerns.

Occasionally in preparing this blog, there are troughs in the technology newsfeed. But not now, and maybe never again. So it is with technology that accelerates exponentially. This, by the way, is an idea of which I will no longer try to convince my readers. I’m going to stop re-arguing Kurzweil’s theorem, that technology advances exponentially, as though it were still just a theorem, and simply move forward with the assumption that you know it holds. If you don’t agree, then scout backwards—probably six months of previous blogs—and you’ll be on the same page. From here on, technology advances exponentially! That said, we are also no longer at the base of the exponential curve. We are beginning a steep climb.

Last week I highlighted Kurzweil’s upgraded prediction on the Singularity (12 years). I agree, though now I think he may be underselling things. It could easily arrive before that.

Today’s blog comes from a hot tip from one of my students. At the beginning of each semester, I always turn my students on to the idea of Google Alerts. It works like this: you tell Google to send you anything and everything on whatever topic interests you. Then, anytime there is news online that fits your topic, you get an email from Google with a list of links. The emails can be inundating, so choose your search wisely. At any rate, the student who drank the Google Alerts Kool-Aid sent me a link to a panel discussion that took place in January of 2017. The panel convened at something called Beneficial AI 2017 in Asilomar, California. And what a panel it was. Get this: Bart Selman (Cornell), David Chalmers (NYU), Elon Musk (Tesla, SpaceX), Jaan Tallinn (CSER/FLI), Nick Bostrom (FHI), Ray Kurzweil (Google), Stuart Russell (Berkeley), Sam Harris, and Demis Hassabis (DeepMind). Sam is a philosopher, author, neuroscientist, and noted secularist. I’ve cited nearly all of these characters before in blogs or research papers, so to see them all on one panel was, for me, amazing.

L to R: Elon Musk, Stuart Russell, Bart Selman, Ray Kurzweil, David Chalmers, Nick Bostrom, Demis Hassabis, Sam Harris, Jaan Tallinn.


Why were they there? The Future of Life Institute (FLI) organized the BAI 2017 event:

“In our sequel to the 2015 Puerto Rico AI conference, we brought together an amazing group of AI researchers from academia and industry, and thought leaders in economics, law, ethics, and philosophy for five days dedicated to beneficial AI.”

FLI works together with CSER (the Centre for the Study of Existential Risk). I confess that I was not aware of either organization, but their existence is encouraging. For example, CSER’s mission is stated as

“[…]within the University of Cambridge dedicated to the study and mitigation of human extinction-level risks that may emerge from technological advances and human activity.”

FLI describes themselves thus:

“We are a charity and outreach organization working to ensure that tomorrow’s most powerful technologies are beneficial for humanity […] We are currently focusing on keeping artificial intelligence beneficial and we are also exploring ways of reducing risks from nuclear weapons and biotechnology.”

Both organizations are loaded with scientists and technologists, including Stephen Hawking, Bostrom, and Musk.

The panel of geniuses got off to a rocky start because there weren’t enough microphones to go around. Duh. But then things got interesting. The topic of safe AI, or what these fellows refer to as AGI (Artificial General Intelligence), is a deep well fraught with promise and doom. The encouraging thing is that these organizations realize the potential for either; the discomforting thing is that they’re genuinely concerned.

As I have discussed before, this race to a superintelligence, which Kurzweil moved up to 2029 a few weeks ago, is moving full speed ahead, climbing a steep exponential incline. It is likely that we will be able to build it long before we have figured out how to keep it from destroying us. I’m on record as saying that even the notion of a superintelligence is an error in judgment. If what you want to do is cure disease, end aging, and save the planet, why not stop short of full-tilt superintelligence? Surely you could get a very, very, very intelligent AI to give you what you want and go no further. After hearing the panel discussion, however, I see this as naive. As Kurzweil stated in the discussion,

“…there really isn’t a foolproof technical solution to this… If you have an AI that is more intelligent than you and is out for your destruction, it’s out for the world’s destruction, and there is no other AI that is superior to it, that’s a bad situation. So that’s the specter […] Imagine that we’ve done our job perfectly, and we’ve created the most safe, beneficial AI possible, but we’ve let the political system become totalitarian and evil, either an evil world government or just a portion of the globe, that is that way, it’s not going to work out well. So part of the struggle is in the area of politics and policy to have the world reflect the values we want to achieve. Human AI is by definition at human levels and therefore is human. So the issue is, ‘How do we make humans ethical?’ is the same issue as, ‘How we make AIs that are at human level, ethical?’”

So there we have the problem of human nature, again. If we can’t fix ourselves, if we can’t even agree on what’s broken, how can we build a benevolent god? Fortunately, brilliant minds are honestly concerned about this, but that doesn’t mean they’re going to put on the brakes. The panel stated it in full agreement: a superintelligence is inevitable. If we don’t build it, someone else will.

It is also safe to assume that our super ethical AI won’t have the same ethics as someone else’s AI. Hence, Kurzweil’s specter. I could turn this into an essay, but I’ll stop here for now. What do you think?


But nobody knows what better is.

South by Southwest, otherwise known as SXSW, calls itself a film and music festival and interactive media conference. It’s held every spring in Austin, Texas. Other than maybe the Las Vegas Consumer Electronics Show or San Diego’s Comic-Con, I can’t think of many conferences that generate as much buzz as SXSW. This year is no different. I will have blog fodder for weeks. Though I can’t speak to the film or music side, I’m sure they were scintillating. Under the category of interactive, most of the buzz is about technology in general, as tech gurus and futurists are always in attendance, along with celebs who align themselves with the future.

Once again at SXSW, Ray Kurzweil was on stage. Kurzweil is probably the one guy I quote the most throughout this blog. So here we go again. Two tech sites caught my eye this week, reporting on Kurzweil’s latest prediction, which moves up the date of the Singularity from 2045 to 2029; that’s 12 years away. Since we are enmeshed in the world of exponentially accelerating technology, I have encouraged my students to start wrapping their heads around the idea of exponential growth. In our most recent project, it was a struggle just to embrace the idea of how, in only seven years, we could see transformational change. If Kurzweil is right about his latest prognostication, then 12 years could be a real stunner. In case you are visiting this blog for the first time, the Singularity to which Kurzweil refers is acknowledged as the point at which computer intelligence exceeds that of human intelligence; it will know more, anticipate more, and analyze more than any human can. Nick Bostrom calls it the last invention we will ever need to make. We’ve already seen this to some extent with IBM’s Watson beating the pants off a couple of Jeopardy masters and Google’s DeepMind handily beating a Go genius at a game that most thought too complex for a computer to handle. Some refer to this “computer” as a superintelligence, and warn that we had better be designing the braking mechanism in tandem with the engine, or this smarter-than-us computer may outsmart us in unfortunate ways.
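To make that struggle concrete, here is a minimal sketch comparing linear progress with exponential doubling. The one-year doubling period is purely illustrative, my assumption for the example, not a figure from Kurzweil:

```python
# Why exponential change is hard to intuit: compare steady linear progress
# with annual doubling over the 12 years between 2017 and 2029.
# (The one-year doubling period is an assumption for illustration only.)

def linear(years, step=1):
    """Capability grows by a fixed step each year."""
    return 1 + step * years

def exponential(years, doubling_period=1):
    """Capability doubles every `doubling_period` years."""
    return 2 ** (years / doubling_period)

for y in (1, 7, 12):
    print(f"year {y:2d}: linear x{linear(y)}, doubling x{exponential(y):.0f}")
```

After seven years the linear curve has reached 8 times its starting point while the doubling curve has reached 128 times; after twelve years it is 13 times versus 4,096 times. That gap is why seven-year and twelve-year horizons feel so counterintuitive.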

In an article in Scientific American, Northwestern University psychology professor Paul Reber says we are bombarded each day with about 2.5 exabytes of data and that the human brain can only store an estimated 2.5 petabytes (a petabyte is a million gigabytes). Of course, the bombardment will continue to increase. Another voice that emerges in this discussion is Rob High, IBM’s vice president and chief technology officer. According to the tech blog Futurism, High was part of a panel discussion at the American Institute of Aeronautics and Astronautics (AIAA) SciTech Conference 2017. High said,

“…we have a very desperate need for cognitive computing…The information being produced is far surpassing our ability to consume and make use of…”

On the surface, this seems like a compelling argument for faster, more pervasive computing. But since it is my mission to question otherwise compelling arguments, I want to ask whether we actually need to process 2.5 exabytes of information. It would appear that our existing technology has already turned on the firehose of data (did we give it permission?) and now it’s up to us to find a way to drink from it. To me, it sounds like we need a regulator, not a bigger gullet. I have observed that the traditional argument in favor of more, better, faster often comes wrapped in the package of help for humankind.
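The mismatch between those two figures is easy to check with back-of-the-envelope arithmetic. The 2.5-exabyte and 2.5-petabyte estimates come from the article cited above; only the unit conversions below are mine:

```python
# Back-of-the-envelope comparison of the two cited figures.
PB = 10 ** 15   # a petabyte: a million gigabytes
EB = 10 ** 18   # an exabyte: a thousand petabytes

daily_data = 2.5 * EB       # data produced per day (cited estimate)
brain_capacity = 2.5 * PB   # estimated storage of one human brain (cited estimate)

brains_per_day = daily_data / brain_capacity
print(brains_per_day)  # each day's data is about 1,000 brains' worth
```

In other words, by these estimates the daily firehose could fill roughly a thousand human brains to capacity, which rather supports the case for a regulator.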

Rob High, again from the Futurism article, says,

“‘If you’re a doctor and you’re trying to figure out the best way to treat your patient, you don’t have the time to go read the latest literature and apply that knowledge to that decision,’ High explained. ‘In any scenario, we can’t possibly find and remember everything.’ This is all good news, according to High. We need AI systems that can assist us in what we do, particularly in processing all the information we are exposed to on a regular basis — data that’s bound to even grow exponentially in the next couple of years.”

From another Futurism article, Kurzweil uses similar logic:

“We’re going to be able to meet the physical needs of all humans. We’re going to expand our minds and exemplify these artistic qualities that we value.”

The other rationale that almost always becomes coupled with expanding our minds is that we will be “better.” No one, however, defines what better is. You could be a better jerk. You could be a better rapist or terrorist or megalomaniac. What are we missing, exactly, that we have to be smarter, or that Bach or Mozart are suddenly inferior? Is our quality of life that impoverished? And for those who are impoverished, how does this help them? And what about making us smarter? Smarter at what?

But not all is lost. On a more positive note, Futurism, in a third article (they were busy this week), reports,

“The K&L Gates Endowment for Ethics and Computational Technologies seeks to introduce the thoughtful discussion on the use of AI in society. It is being established through funding worth $10 million from K&L Gates, one of the United States’ largest law firms, and the money will be used to hire new faculty chairs as well as support three new doctoral students.”

Though I’m not sure we can consider this a regulator; it’s more like something to lessen the pain of swallowing.

Finally (for this week), back to Rob High,

“Smartphones are just the tip of the iceberg,” High said. “Human intelligence has its limitations and artificial intelligence is going to evolve in a lot of ways that won’t be similar to human intelligence. But, I think they will work best in the presence of humans.”

So, I’m more concerned with when artificial intelligence is not working at its best.


A guerrilla future realized.

This week my brilliant students in Collaborative Studio 4650 provided a real-world guerrilla future for the Humane Technologies: Livable Futures Pop-Up Collaboration at The Ohio State University. The design fiction was replete with diegetic prototypes and a video enactment. Our goal was to present a believable future in 2024, when ubiquitous AR glasses are part of our mundane everyday. We made the presentation in Sullivant Hall’s Barnett Theater, and each member of the team had a set of mock AR glasses. The audience consisted of about 50 students ranging from the humanities to business. It was an amazing experience. It holds untold riches for my design fiction research, but there were also a lot of revelations about how we experience and enfold technology. After the presentation, we pulled out the white paper and markers and divided into groups for a more detailed deconstruction of what had transpired. While I have not plowed through all the scrolls that resulted from the post-presentation discussion groups, it seems universal that we can recognize how technology is apt to modify our behavior. It is also interesting to see that most of us have no clue how to resist these changes. Julian Oliver wrote in his (2011) The Critical Engineering Manifesto,

“5. The Critical Engineer recognises that each work of engineering engineers its user, proportional to that user’s dependency upon it.”

The idea of being engineered by our technology was evident throughout the AugHumana presentation video, and in discussions we quickly identified the ways in which our current technological devices engineer us. At the same time, we feel more or less powerless to change or affect that phenomenon. Indeed, we have come to accept these small, incremental, seemingly mundane changes to our behavior as innocent, or as adaptive in a positive way. En masse, they are neither. Kurzweil stated that,

“We are not going to reach the Singularity in some single great leap forward, but rather through a great many small steps, each seemingly benign and modest in scope.”

History has shown that these steps are incrementally embraced by society and often give way to systems with a life of their own. An idea raised in one discussion group was labeled as effective dissent, but it seems almost obvious that unless we anticipate these imminent behavioral changes, by the time we notice them it is already too late, either because the technology is already ubiquitous or our habits and procedures solidly support that behavior.

There are ties here to material culture and the philosophy of technology that merit more research, but the propensity for technology to affect behavior in an inhumane way is powerful. These are early reflections, no doubt to be continued.


Disruption. Part 2.


Last week I discussed the idea of technological disruption. Essentially, disruptions are innovations that make fundamental changes in the way we work or live. In turn, these changes affect culture and behavior. Issues of design and culture are the stuff that interests me and drives my research: how easily and quickly our practices change as a result of the way we enfold technology. The advent of the railroad, the mass-produced automobile, radio, then television, the Internet, and the smartphone all qualify as disruptions.

Today, technology advances more quickly. Technological development was never linear, but because most of the tech advances of the last century sat at the bottom of the exponential curve, we didn’t notice them. New technologies under development right now are going to be realized more quickly (especially the ones with big funding), and because of convergence (the intermixing of unrelated technologies), their consequences will be less predictable.

One of my favorite futurists is Amy Webb, whom I have written about before. In her most recent newsletter, Amy reminds us that the Internet was clunky and vague long before it was disruptive. She states,

“However, our modern Internet was being built without the benefit of some vital voices: journalists, ethicists, economists, philosophers, social scientists. These outside voices would have undoubtedly warned of the probable rise of botnets, Internet trolls and Twitter diplomacy––would the architects of our modern internet have done anything differently if they’d confronted those scenarios?”

Amy inadvertently left out the design profession, though I’m sure she will reconsider after we chat. Indeed, the design profession is a key contributor to transformative tech, and design thinkers, along with the ethicists and economists, can help to visualize and reframe future visions.

Amy thinks that the next transformation will be our voice,

“From here forward, you can be expected to talk to machines for the rest of your life.”

Amy is referring to technologies like Alexa, Siri, Google, Cortana, and something coming soon called Bixby. The voices of these technologies are, of course, only the window dressing for artificial intelligence. But she astutely points out that,

“…we also know from our existing research that humans have a few bad habits. We continue to encode bias into our algorithms. And we like to talk smack to our machines. These machines are being trained not just to listen to us, but to learn from what we’re telling them.”

Such a merger might just be the mix of any technology (name one) with human nature or the human condition: AI meets Mike who lives across the hall. AI becoming acquainted with Mike may have been inevitable, but the fact that Mike happens to be a jerk was less predictable, and so the outcome is less predictable still. The most significant disruptions of the future are going to come from the convergence of seemingly unrelated technologies. Sometimes innovation depends on convergence, like building an artificial human that will have to master a lot of different functions. Other times, convergence is accidental, or at least unplanned. The engineers over at Boston Dynamics who are building those intimidating walking robots are focused on a narrower set of criteria than someone creating an artificial human. Perhaps power and agility are their primary concerns. Then, in another lab, there are technologists working on voice stress analysis, and in another setting, researchers are looking to create an AI that can choose your wardrobe. Somewhere else, we are working on facial recognition or Augmented Reality or Virtual Reality or bio-engineering, medical procedures, autonomous vehicles, or autonomous weapons. So it’s a lot like when Harry met Sally: you’re not sure what you’re going to get or how it’s going to work.

Digital visionary Kevin Kelly thinks that AI will be at the core of the next industrial revolution. Place the prefix “smart” in front of anything, and you have a new application for AI: a smart car, a smart house, a smart pump. These seem like universally useful additions, so far. But now let’s add the same prefix to the jobs you and I do, like a doctor, lawyer, judge, designer, teacher, or policeman. (Here’s a possible use for that ominous walking robot.) And what happens when AI writes better code than coders and decides to rewrite itself?

Hopefully, you’re getting the picture. All of this underscores Amy Webb’s earlier concerns. The “journalists, ethicists, economists, philosophers, social scientists” and designers are rarely in the labs where the future is taking place. Should we be doing something fundamentally different in our plans for innovative futures?

Side note: Convergence can happen in a lot of ways. The parent corporation of Boston Dynamics is X. I’ll use Wikipedia’s definition of X: “X, an American semi-secret research-and-development facility founded by Google in January 2010 as Google X, operates as a subsidiary of Alphabet Inc.”
