Tag Archives: abstract reasoning

Promises. Promises.

Throughout the course of the week, usually on a daily basis, I collect articles, news blurbs, and what I call “signs from the future.” Mostly they fall into categories such as design fiction, technology, society, future, theology, and philosophy. I use this content sometimes for this blog, possibly for a lecture, but most often for additional research as part of the scholarly papers and presentations that are a matter of course for a professor. I have to weigh what goes into the blog, because most of these topics could easily become full-blown papers. Of course, the thing with scholarly writing is that most publications demand exclusivity on publishing your ideas. Essentially, that means it becomes difficult to repurpose anything I write here for something with more gravitas. One of the subjects of growing interest to me is Google. Not the search engine, per se, but rather the technological mega-corp. It has the potential to be just such a paper, so even though there is a lot to say, I’m going to land on only a few key points.

A ubiquitous giant in the world of the Internet, Google has some of the most powerful algorithms, stores your most personal information, and is working on many of the most advanced technologies in the world. The company tries very hard to be soft-spoken and low-key, but that demeanor belies its enormous power.

Most of us would agree that technology has provided some marvelous benefits to society, especially in the realms of medicine, safety, education, and other socially beneficial applications. Things like artificial knees, cochlear implants, air bags (when they don’t accidentally kill you), and instantaneous access to the world’s libraries have made life-changing improvements. Needless to say, especially if you have read my blog for any amount of time, technology can also have a downside. We may see greater yields from our agricultural efforts, but technological advancements also pump needless hormones into the populace, create sketchy GMO foodstuffs, and manipulate farmers into planting them. We all know the problems associated with automobile emissions, atomic energy, chemotherapy, and texting while driving. These problems are the obvious stuff. What is perhaps more sinister are the technologies we adopt that work quietly in the background to change us. Most of them we are unaware of until, one day, we are almost surprised to see how we have changed, and maybe we don’t like it. Google strikes me as a potential contributor in this latter arena.

A recent article from The Guardian entitled “Where is Google Taking Us?” looks at some of the company’s most altruistic technologies (the ones they allowed the author to see). The author, Tim Adams, brought forward some interesting quotes from key players at Google. When discussing how Google would spend some $62 billion in cash that it had amassed, Larry Page, one of the company’s co-founders, asked,

“How do we use all these resources… and have a much more positive impact on the world?”

There’s nothing wrong with that question. It’s the kind of question you would want a billionaire asking. My question is, “What does positive mean, and who decides what is and what isn’t?” In this case, it’s Google. The next quote comes from Sundar Pichai. With so many possibilities that this kind of wealth affords, Adams asked how the company stays focused on what to do next.

“’Focus on the user and all else follows…We call it the toothbrush test,’ Pichai says, ‘we want to concentrate our efforts on things that billions of people use on a daily basis.’”

The statement sounds like savvy marketing. He is also talking about the most innate aspects of our everyday behavior. And so that I don’t turn this into an academic paper, here is one more quote. This time the author is talking to Dmitri Dolgov, principal engineer for Google’s self-driving cars. For the whole idea to work (that is, for the car to react the way a human would, only better), the car has to think.

“Our maps have information stored and as the car drives around it builds up another local map with its sensors and aligns one to the other – that gives us a location accuracy of a centimetre or two. Beyond that, we are making huge numbers of probabilistic calculations every second.”

Mapping everything down to the centimeter.

It’s the last line that we might want to ponder. Predictive algorithms are what artificial intelligence is all about, the kind of technology that plugs in to a whole host of different applications, from predicting your behavior to predicting your abilities. If we don’t want to have to remember to check the oil, there is a light that reminds us. If we don’t want to have to remember somebody’s name, there is a facial recognition algorithm to remember for us. If my wearable detects that I am stressed, it can remind me to take a deep breath. If I am out for a walk, maybe something should mention all the things I could buy while I’m out (as well as what I am out of).
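For the technically curious, here is a toy sketch, in Python, of the alignment idea Dolgov describes. It is my own illustration, with made-up landmarks and numbers, and nothing like Google’s actual code; a simple least-squares average stands in for the real system’s probabilistic machinery.

```python
# A toy illustration (mine, not Google's) of aligning a stored map with a
# local sensor map in order to recover the car's position error.
import numpy as np

# Landmark positions (x, y) from the stored map, in meters (made-up numbers).
stored_map = np.array([[10.0, 2.0], [14.5, 2.1], [20.0, 1.9], [25.5, 2.0]])

# The same landmarks as the car's sensors see them: shifted by the car's
# unknown position error and corrupted by a little measurement noise.
true_offset = np.array([0.42, -0.17])  # the error we hope to recover
rng = np.random.default_rng(0)
local_map = stored_map - true_offset + rng.normal(0.0, 0.02, stored_map.shape)

# Least-squares estimate of the offset that best aligns the two maps.
# With matched landmarks, it is simply the mean of the differences.
estimated_offset = (stored_map - local_map).mean(axis=0)

print("true offset:     ", true_offset)
print("estimated offset:", np.round(estimated_offset, 3))  # within a centimeter or two
```

The real system works with enormous point clouds and full probability distributions, many times a second, but the principle is the same: compare what the sensors see with what the map says, and keep the most probable fit.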

Here’s what I think about. It seems to me that we are amassing two lists: the things we don’t want to think about, and the things we do. Folks like Google are adding things to Column A, and it seems to be getting longer all the time. My concern is whether we will have anything left in Column B.

 


Breathing? There’s an app for that.

As the Internet of Things (IoT) and Ubiquitous Computing (UbiComp) continue to advance, there really is no more room left for surprise. These things are cascading out of Silicon Valley, crowd-funding sites, labs, and start-ups with continually accelerating speed. And like Kurzweil, I think it’s happening faster than 95 percent of the world is expecting. A fair number of these are duds and, frankly, superfluous attempts at “computing” what otherwise, with a little mental effort, we could do on our own. Ian Bogost’s article this week in the Atlantic Monthly, “The Internet of Things You Don’t Really Need,” points out how many of these “innovations” are replacing just the slightest amount of extra brain power, ever-so-minimal physical activity, or prescient concentration. Not to mention that these apps just supply another entry in your personal digital footprint. Also in the week’s news (this stuff is happening everywhere), this time in FastCompany, is an MIT alumna who is concerned about how little “face time” her kids are getting with real humans because they are constantly in front of screens or tablets. (Human-to-human interaction is important for the development of emotional intelligence.) The solution? If you think it is less time on the tablet and more “go out and play,” you are behind the times. The researcher, Rana el Kaliouby, has decided that she has the answer:

“Instead, she believes we should be working to make computers more emotionally intelligent. In 2009, she cofounded a company called Affectiva, just outside Boston, where scientists create tools that allow computers to read faces, precisely connecting each brow furrow or smile line to a specific emotion.”

Of course it is. Now, what we don’t know, don’t want to learn (by doing), or just don’t want to think about, our computer or app will do for us. The FastCo author, Elizabeth Segran, interviewed el Kaliouby:

“The technology is able to deduce emotions that we might not even be able to articulate, because we are not fully aware of them,” El Kaliouby tells me. “When a viewer sees a funny video, for instance, the Affdex might register a split second of confusion or disgust before the viewer smiles or laughs, indicating that there was actually something disturbing to them in the video.”

Oh my.

“At some point in the future, El Kaliouby suggests fridges might be equipped to sense when we are depressed in order to prevent us from binging on chocolate ice cream. Or perhaps computers could recognize when we are having a bad day, and offer a word of empathy—or a heartwarming panda video.”

Please no.

By the way, this is exactly the type of technology that is at the heart of the mesh, the ubiquitous surveillance system in The Lightstream Chronicles. In addition to having learned every possible variation of human emotion, this software has also learned physical behavior, such that it can tell when, or if, someone is about to shoplift, attack, or threaten another person. It can even tell whether or not you have any business being where you are.
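To give a flavor of the mechanics, here is a deliberately over-simplified sketch, in Python, of the kind of mapping such systems perform. It is my own illustration, not Affectiva’s software; the facial-movement scores and emotion templates are invented for the example.

```python
# An over-simplified sketch (mine, not Affectiva's): measure facial movements,
# score them against emotion templates, and pick the best fit.

# Hypothetical intensity scores (0 to 1) for a few facial movements.
observed = {"brow_furrow": 0.7, "smile": 0.1, "nose_wrinkle": 0.4, "lip_press": 0.6}

# Hypothetical templates: how strongly each movement signals each emotion.
templates = {
    "confusion": {"brow_furrow": 0.9, "lip_press": 0.5},
    "disgust":   {"nose_wrinkle": 0.9, "brow_furrow": 0.3},
    "joy":       {"smile": 1.0},
}

def score(observed_movements, template):
    """Sum of observed intensities weighted by the template."""
    return sum(observed_movements.get(movement, 0.0) * weight
               for movement, weight in template.items())

best_guess = max(templates, key=lambda emotion: score(observed, templates[emotion]))
print(best_guess)  # "confusion" for the scores above
```

The commercial versions are trained on millions of faces and far subtler signals, but the basic move is the same: measure the face, score the candidate emotions, pick the most likely one, and then do something with it.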

So, before we get swept up in all of the heartwarming possibilities for relating to our computers (shades of Her), and just in case anyone is left who is alarmed at becoming a complete emotional, intellectual, and physical muffin, there is significant new research suggesting that the mind is like a muscle: use it or lose it. You can strengthen learning and intelligence by exercising and challenging your mind and cognitive skills. If my app is going to remind me not to be rude, when to brush my teeth, drink water, stop eating, and go to the toilet, what’s left? The definition of post-human comes to mind.

As a designer, I see warning flags. It is precisely a designer’s capacity for abstract reasoning that makes problem solving both gratifying and effective. Remember MacGyver? You don’t have to; your life-hacks app will tell you what you need to do. You might also want to revisit a previous blog post on computers that are taking our jobs away.

MacGyver. If you don’t know, you’re going to have to look it up.

Yet it would seem that many people think the only really important human trait is happiness, that ill-defined, elusive, and completely arbitrary emotion. As long as we retain that, the thinking goes, all those other human traits are things we should evolve out of anyway.

What do you think?
