Tag Archives: IoT

Disruption. Part 2.

 

Last week I discussed the idea of technological disruption. Essentially, a disruption is an innovation that fundamentally changes the way we work or live; in turn, those changes affect culture and behavior. Issues of design and culture are the stuff of my research: how easily and quickly our practices change as a result of the way we enfold technology. The advent of the railroad, mass-produced automobiles, radio, then television, the Internet, and the smartphone all qualify as disruptions.

Today, technology advances more quickly. Technological development was never linear, but because most of the tech advances of the last century sat at the bottom of the exponential curve, we didn’t notice them. New technologies under development right now will be realized more quickly (especially the ones with big funding), and because of convergence (the intermixing of unrelated technologies), their consequences will be less predictable.
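To see why the bottom of an exponential curve goes unnoticed, consider a toy calculation (the numbers and units here are hypothetical, purely for illustration): if capability doubles every period, the first few periods barely register, while a later stretch of the same length dwarfs everything that came before.

```python
# Toy illustration of exponential growth: early periods look nearly flat,
# later periods of the same length produce enormous change.

def capability(periods, start=1.0):
    """Capability after `periods` doublings (hypothetical units)."""
    return start * 2 ** periods

early = capability(5) - capability(0)    # change over the first 5 periods
late = capability(30) - capability(25)   # change over 5 later periods

print(early)  # 31.0
print(late)   # 1040187392.0 -- the same 5 periods, vastly larger change
```

The early change is barely perceptible; the later one is roughly a billion units, which is the intuition behind advances seeming to arrive all at once.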

One of my favorite futurists is Amy Webb whom I have written about before. In her most recent newsletter, Amy reminds us that the Internet was clunky and vague long before it was disruptive. She states,

“However, our modern Internet was being built without the benefit of some vital voices: journalists, ethicists, economists, philosophers, social scientists. These outside voices would have undoubtedly warned of the probable rise of botnets, Internet trolls and Twitter diplomacy––would the architects of our modern internet have done anything differently if they’d confronted those scenarios?”

Amy inadvertently left out the design profession, though I’m sure she will reconsider after we chat. Indeed, the design profession is a key contributor to transformative tech, and design thinkers, along with ethicists and economists, can help visualize and reframe future visions.

Amy thinks the next transformation will be our voice:

“From here forward, you can be expected to talk to machines for the rest of your life.”

Amy is referring to technologies like Alexa, Siri, Google, Cortana, and something coming soon called Bixby. The voices of these technologies are, of course, only the window dressing for artificial intelligence. But she astutely points out that,

“…we also know from our existing research that humans have a few bad habits. We continue to encode bias into our algorithms. And we like to talk smack to our machines. These machines are being trained not just to listen to us, but to learn from what we’re telling them.”
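The dynamic Webb describes, machines learning from whatever we tell them, can be sketched very simply. The tiny corpus and word pairs below are invented for illustration; real systems learn associations from millions of examples, but the principle is the same: skew in, skew out.

```python
# Minimal sketch of bias encoded by learning from data: a system that
# just counts what it is told inherits whatever is skewed in the telling.
# The corpus below is made up for illustration.

from collections import Counter

corpus = [
    "nurse she", "nurse she", "nurse he",
    "engineer he", "engineer he", "engineer he", "engineer she",
]

def learned_association(word: str) -> str:
    """Return the pronoun most often seen next to `word` in the corpus."""
    counts = Counter(line.split()[1] for line in corpus
                     if line.split()[0] == word)
    return counts.most_common(1)[0][0]

print(learned_association("nurse"))     # 'she' -- the skew is learned
print(learned_association("engineer"))  # 'he'
```

Nothing in the code is malicious; the bias lives entirely in the examples, which is exactly why outside voices reviewing the training data matter.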

Such a merger might just be the mix of any technology (name one) with human nature or the human condition: AI meets Mike who lives across the hall. AI becoming acquainted with Mike may have been inevitable, but the fact that Mike happens to be a jerk was less predictable, and so the outcome is less predictable, too. The most significant disruptions of the future are going to come from the convergence of seemingly unrelated technologies. Sometimes innovation depends on convergence, like building an artificial human that will have to master a lot of different functions. Other times, convergence is accidental, or at least unplanned. The engineers over at Boston Dynamics who are building those intimidating walking robots are focused on a narrower set of criteria than someone creating an artificial human; perhaps power and agility are their primary concerns. Then, in another lab, there are technologists working on voice stress analysis, and in another setting, researchers are looking to create an AI that can choose your wardrobe. Somewhere else we are working on facial recognition, Augmented Reality, Virtual Reality, bio-engineering, medical procedures, autonomous vehicles, or autonomous weapons. So it’s a lot like When Harry Met Sally: you’re not sure what you’re going to get or how it’s going to work.

Digital visionary Kevin Kelly thinks that AI will be at the core of the next industrial revolution. Place the prefix “smart” in front of anything, and you have a new application for AI: a smart car, a smart house, a smart pump. These seem like universally useful additions, so far. But now let’s add the same prefix to the jobs you and I do, like a doctor, lawyer, judge, designer, teacher, or policeman. (Here’s a possible use for that ominous walking robot.) And what happens when AI writes better code than coders and decides to rewrite itself?

Hopefully, you’re getting the picture. All of this underscores Amy Webb’s earlier concern. The ‘journalists, ethicists, economists, philosophers, social scientists’ and designers are rarely in the labs where the future is taking shape. Should we be doing something fundamentally different in our plans for innovative futures?

Side note: Convergence can happen in a lot of ways. The parent corporation of Boston Dynamics is X. I’ll use Wikipedia’s definition of X: “X, an American semi-secret research-and-development facility founded by Google in January 2010 as Google X, operates as a subsidiary of Alphabet Inc.”


Breathing? There’s an app for that.

As the Internet of Things (IoT) and Ubiquitous Computing (UbiComp) continue to advance, there really is no room left for surprise. These things are cascading out of Silicon Valley, crowd-funding sites, labs, and start-ups at a continually accelerating pace. And like Kurzweil, I think it’s happening faster than 95 percent of the world is expecting. A fair number of these are duds and, frankly, superfluous attempts at “computing” what otherwise, with a little mental effort, we could do on our own. Ian Bogost’s article this week in the Atlantic Monthly, The Internet of Things You Don’t Really Need, points out how many of these “innovations” replace just the slightest amount of extra brain power, ever-so-minimal physical activity, or prescient concentration. Not to mention that these apps supply yet another entry into your personal digital footprint.

More in this week’s news (this stuff is happening everywhere), this time in FastCompany: an MIT alumna is concerned about how little “face time” her kids are getting with real humans because they are constantly in front of screens or tablets. (Human-to-human interaction is important for the development of emotional intelligence.) The solution? If you think it is less time on the tablet and more “go out and play,” you are behind the times. The researcher, Rana el Kaliouby, has decided that she has the answer:

“Instead, she believes we should be working to make computers more emotionally intelligent. In 2009, she cofounded a company called Affectiva, just outside Boston, where scientists create tools that allow computers to read faces, precisely connecting each brow furrow or smile line to a specific emotion.”

Of course it is. Now, what we don’t know, don’t want to learn (by doing), or just don’t want to think about, our computer, or app, will do for us. The FastCo author Elizabeth Segran, interviewed el Kaliouby:

“The technology is able to deduce emotions that we might not even be able to articulate, because we are not fully aware of them,” El Kaliouby tells me. “When a viewer sees a funny video, for instance, the Affdex might register a split second of confusion or disgust before the viewer smiles or laughs, indicating that there was actually something disturbing to them in the video.”

Oh my.
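The kind of frame-by-frame, face-to-emotion mapping el Kaliouby describes can be sketched very loosely. To be clear, this is not Affectiva’s actual system, which trains classifiers on enormous sets of labeled faces; the thresholds, action names, and rules below are invented for illustration.

```python
# Hypothetical sketch of mapping facial "action" intensities (0.0-1.0)
# to a crude emotion label. Real systems are learned, not hand-written.

def infer_emotion(actions: dict) -> str:
    """Pick an emotion label from facial action intensities."""
    if actions.get("smile", 0) > 0.5:
        return "joy"
    if actions.get("brow_furrow", 0) > 0.5:
        return "confusion"
    if actions.get("nose_wrinkle", 0) > 0.5:
        return "disgust"
    return "neutral"

# Reading frame by frame could catch the split second of disgust
# before the smile, as the quote above suggests:
frames = [
    {"nose_wrinkle": 0.7},  # brief disgust...
    {"smile": 0.9},         # ...then the laugh
]
print([infer_emotion(f) for f in frames])  # ['disgust', 'joy']
```

Even this toy version shows why the idea unsettles: the per-frame readings surface reactions the viewer never chose to express.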

“At some point in the future, El Kaliouby suggests fridges might be equipped to sense when we are depressed in order to prevent us from binging on chocolate ice cream. Or perhaps computers could recognize when we are having a bad day, and offer a word of empathy—or a heartwarming panda video.”

Please no.

By the way, this is exactly the type of technology at the heart of the mesh, the ubiquitous surveillance system in The Lightstream Chronicles. In addition to having learned every possible variation of human emotion, this software has also learned physical behavior, such that it can tell when, or if, someone is about to shoplift, attack, or threaten another person. It can even tell whether you have any business being where you are.

So, before we get swept up in all of the heartwarming possibilities for relating to our computers (shades of Her), and just in case anyone is left who is alarmed at becoming a complete emotional, intellectual, and physical muffin, there is significant new research suggesting that the mind is a muscle: use it or lose it, and you can strengthen learning and intelligence by exercising and challenging your cognitive skills. If my app is going to remind me not to be rude, when to brush my teeth, drink water, stop eating, and go to the toilet, what’s left? The definition of post-human comes to mind.

As a designer, I see warning flags. It is precisely a designer’s capacity for abstract reasoning that makes problem solving both gratifying and effective. Remember MacGyver? You don’t have to; your life-hacks app will tell you what you need to do. You might also want to revisit a previous blog on computers that are taking our jobs away.

MacGyver. If you don’t know, you’re going to have to look it up.

Yet it would seem that many people think the only really important human trait is happiness, that ill-defined, elusive, and completely arbitrary emotion. As long as we retain that, the thinking goes, all those other human traits are ones we should evolve out of anyway.

What do you think?
