Superintelligence. Is it the last invention we will ever need to make?

I believe it is crucial that we move beyond merely preparing to adapt or react to the future and actively engage in shaping it.

An excellent example of this kind of thinking is Nick Bostrom’s TED talk from 2015.

Bostrom is concerned about the day when machine intelligence exceeds human intelligence (the guess is somewhere between twenty and thirty years from now). He points out that, “Once there is super-intelligence, the fate of humanity may depend on what the super-intelligence does. Think about it: Machine intelligence is the last invention that humanity will ever need to make. Machines will then be better at inventing [designing] than we are, and they’ll be doing so on digital timescales.”

His concern is legitimate. How do we control something that is smarter than we are? Anticipating AI will require more strenuous design thinking than that which produces the next viral game, app, or service. But these applications are where the lion’s share of the money is going. When it comes to keeping us from being at best irrelevant or at worst an impediment to AI, Bostrom is guardedly optimistic about how we can approach it. He thinks we could, “[…]create an A.I. that uses its intelligence to learn what we value, and its motivation system is constructed in such a way that it is motivated to pursue our values or to perform actions that it predicts we would approve of.”

At the crux of his argument and mine: “Here is the worry: Making superintelligent A.I. is a really hard challenge. Making superintelligent A.I. that is safe involves some additional challenge on top of that. The risk is that if somebody figures out how to crack the first challenge without also having cracked the additional challenge of ensuring perfect safety.”

Beyond machine learning (which has many facets), a wide-ranging set of technologies, from genetic engineering to drone surveillance to next-generation robotics and even VR, could be racing forward without anyone thinking about this “additional challenge.”

This could be an excellent opportunity for designers. But, to do that, we will have to broaden our scope to engage with science, engineering, and politics. More on that in future blogs.


Monitoring you.

One of my students is writing a paper on the rise of baby monitors and how technology has changed what and how we monitor. In the 80s, the baby monitor was essentially a walkie-talkie stuck in the “on” position; about all you could do was listen to breathing in another room. Today’s monitors link to your smartphone and offer features like Bluetooth, night vision, motion detection, cloud storage, and pulse oximetry.

I started thinking about the point in a child’s life at which a parent might stop monitoring. Most day care centers now allow remote login so parents can watch what their toddlers are up to, and once kids are older and have a smartphone (for emergencies, of course), parents can also track their location. According to a 2016 study by the Pew Research Center, “[…]parents today report taking a number of steps to influence their child’s digital behavior, from checking up on what their teen is posting on social media to limiting the amount of time their child spends in front of various screens.”

Having raised kids in the digital age, this makes perfect sense to me. There are lots of dark alleys in the digital realm that can be detrimental to young eyes. Of course, once they went off to college, it made sense to accept that becoming an adult meant being responsible for their own behavior. Needless to say, there are a lot of painful lessons along that journey.

Some say that the world is becoming an increasingly dangerous place. Should monitoring be something we become accustomed to, even for ourselves, all the time? What types of technologies might we accept to enable this? When should it stop? When do we need to know, and about whom?

What do you think?
