
Future proof.

 

There is no such thing as future-proof anything, of course, so I use the term to refer to evidence that a current idea is becoming more and more probable as something we will see in the future. The evidence I am talking about surfaced in a FastCo article this week about biohacking and the new frontier of digital implants. Biohacking has a loose definition and can refer to anything from using genetic material without regard to ethical procedures, to DIY biology, to pseudo-bioluminescent tattoos, to body modification for functional enhancement (see transhumanism). Last year, my students investigated this and determined that a society willing to accept internal implants was not a near-future scenario. Nevertheless, according to FastCo author Steven Melendez,

“a survey released by Visa last year that found that 25% of Australians are ‘at least slightly interested’ in paying for purchases through a chip implanted in their bodies.”

Melendez goes on to describe a wide variety of implants already in use for medical, artistic, and personal-efficiency purposes, and he interviews Tim Shank, president of a futurist group called TwinCities+. Shank says,

“[For] people with Android phones, I can just tap their phone with my hand, right over the chip, and it will send that information to their phone.”

Amal Graafstra’s Hands [Photo: courtesy of Amal Graafstra] c/o WIRED
The popularity of body piercings and tattoos, also once considered invasive procedures, has skyrocketed. Implantable technology, especially as it becomes more functionally relevant, could follow a similar curve.

I saw this coming some years ago when writing The Lightstream Chronicles. The story, as many of you know, takes place in the far future where implantable technology is mundane and part of everyday life. People regulate their body chemistry, access the Lightstream (the evolved Internet), and make “calls” using fingertips embedded with Luminous Implants. These future implants talk directly to implants in the brain and other systemic body centers to make adjustments or provide information.

An ad for Luminous Implants, and the “tap” numbers for local attractions.

When the stakes are low, mistakes are beneficial. In more weighty pursuits, not so much.

 

I’m from the old school. I suppose that sentence alone makes me seem like a codger. Let’s call it the eighties. Part of the art of problem solving was to work toward a solution and get it as tight as we possibly could before we committed to implementation. It was called the design process, and today it’s called “design thinking.” So it was heresy to me when I found myself, some years ago now, in a high-tech corporation where this doctrine was ignored. I recall a top-secret, new-product meeting in which the owner and chief technology officer said, “We’re going to make some mistakes on this, so let’s hurry up and make them.” He was not speaking about iterative design, which is part and parcel of the design process; he was talking about going to market with the product and letting the users illuminate what we should fix. Of course, the product was safe and met all the legal standards, but it was far from polished. The idea was that mass consumer trial-by-fire would provide us with an exponentially higher data return than if we tested all the possible permutations in a lab at headquarters. He was, apparently, ahead of his time.

In a recent FastCo article on Facebook’s race to be the leader in AI, author Daniel Terdiman cites some of Mark Zuckerberg’s mantras: “‘Move fast and break things,’ or ‘Done is better than perfect.’” We can debate this philosophically or maybe even ethically, but it is clearly today’s standard procedure for new technologies, new science and the incessant race to be first. Here is a quote from that article:

“Artificial intelligence has become a vital part of scaling Facebook. It’s already being used to recognize the faces of your friends in photographs, and curate your newsfeed. DeepText, an engine for reading text that was unveiled last week, can understand “with near-human accuracy” the content in thousands of posts per second, in more than 20 different languages. Soon, the text will be translated into a dozen different languages, automatically. Facebook is working on recognizing your voice and identifying people inside of videos so that you can fast forward to the moment when your friend walks into view.”

The story goes on to say that Facebook, though it is pouring tons of money into AI, is behind the curve, having begun only three or so years ago. Aside from the fact that FB’s accomplishments seem fairly impressive (at least to me), people like Google and Microsoft are apparently way ahead. In the case of Microsoft, the effort began more than twenty years ago.

Today, the hurry-up is accelerated by open sourcing. Wikipedia explains the benefits of open sourcing as:

“The open-source model, or collaborative development from multiple independent sources, generates an increasingly more diverse scope of design perspective than any one company is capable of developing and sustaining long term.”

The idea behind open sourcing is that the mistakes will happen even faster, along with the advancements. It is becoming the de facto approach to breakthrough technologies. If fast is the primary, maybe even the only, goal, it is a smart strategy. Or is it a touch shortsighted? As we know, not everyone who can play with the code that a company has given them has that company’s best interests in mind. As for the best interests of society, I’m not sure those are even on the list.

To examine our motivations and the ripples that emanate from them, of course, is my mission with design fiction and speculative futures. Whether we like it or not, a by-product of technological development, aside from utopia, is human behavior. There are repercussions from the things we make and the systems that evolve from them. When your mantra is “Move fast and break things,” that’s what you’ll get. But there is certainly no time in the move-fast loop to consider the repercussions of your actions or the unexpected consequences. Consequences will appear all by themselves.

The technologists tell us that when we reach the holy grail of AI (whatever that is), we will be better people and solve the world’s most challenging problems. But in reality, it’s not that simple. With the nuances of AI, there are potential problems, or mistakes, that could be difficult to fix; new predicaments that humans might not be able to solve and AI may not be inclined to resolve on our behalf.

In the rush to make mistakes, how grave will they be? And, who is responsible?


Nine years from now.

 

Today I’m on my soapbox again, as an advocate of design thinking, the toolbox of which design fiction is a part.

In 2014, the Pew Research Center published a report on Digital Life in 2025. Therein, “The report covers experts’ views about advances in artificial intelligence (AI) and robotics, and their impact on jobs and employment.” Their nutshell conclusion was that:

“Experts envision automation and intelligent digital agents permeating vast areas of our work and personal lives by 2025 [nine years from now], but they are divided on whether these advances will displace more jobs than they create.”

On the upside, some of the “experts” believe that we will, as the brilliant humans that we are, invent new kinds of uniquely human work that we can’t replace with AI—a return to an artisanal society with more time for leisure and our loved ones. Some think we will be freed from the tedium of work and find ways to grow in some other “socially beneficial” pastime. Perhaps we will just be chillin’ with our robo-buddy.

On the downside, there are those who believe that not only blue-collar, robotic jobs will vanish but also white-collar, thinking jobs, and that this will leave a lot of people out of work, since there are only so many jobs as clerks at McDonald’s or greeters at Wal-Mart. They think that some of this is the fault of education for not better preparing us for the future.

A few weeks ago I blogged about people who are thinking about addressing these concerns with something called Universal Basic Income (UBI), a $12,000 gift to everyone in the world, since everyone will be out of work. I’m guessing (though it wasn’t explicitly stated) that this money would come from all the corporations that are raking in the bucks by employing the AIs, the robots, and the digital agents, but who don’t have anyone on the payroll anymore. The advocates of this idea did not address whether the executives at these companies, presumably still employed, will make more than $12,000, nor whether the advocates themselves were on the 12K list. I guess not. They also did not specify who would buy the services that these corporations were offering if we are all out of work. But I don’t want to repeat that rant here.

I’m not as optimistic about the unique capability of humankind to find new, uniquely human jobs in some new, utopian artisanal society. Music, art, and blogs are already being created by AI, by the way. I do agree, however, that we are not educating our future decision-makers to adjust adequately to whatever comes along. The process of innovative design thinking is a huge hedge against technological surprise, but few schools have ever entertained the notion, and some have never even heard of it. In some cases, it has been adopted, but as a bastardized hybrid that serves business-as-usual competitive one-upmanship.

I do believe that design, in its newest and most innovative realizations, is the place for these anticipatory discussions about the future. What we need is thinking that encompasses a vast array of cross-disciplinary input, including philosophy and religion, because these issues are more than black and white: they are ethically and morally charged, and they are inseparable from our culture, the scaffolding that we as a society use to answer our most existential questions. There is a lot of work to do to survive ourselves.

 

 


Sitting around with my robo-muse and collecting a check.

 

Writing a weekly blog can be a daunting task, especially amid teaching, research, and, of course, the ongoing graphic novel. I can only imagine the challenge for those who do it daily. Thank goodness for friends who send me articles. This week the piece comes from The New York Times tech writer Farhad Manjoo. The article is titled “A Plan in Case Robots Take the Jobs: Give Everyone a Paycheck.” The topic follows nicely on the heels of last week’s blog about the inevitability of robot companions. Unfortunately, both the author and the people behind this idea appear to be woefully out of touch with reality.

Here is the premise: After robots and AI have become ubiquitous and mundane, what will we do with ourselves? “How will society function after humanity has been made redundant? Technologists and economists have been grappling with this fear for decades, but in the last few years, one idea has gained widespread interest — including from some of the very technologists who are now building the bot-ruled future,” writes Manjoo.

The answer, strangely enough, seems to be coming from venture capitalists and millionaires like Albert Wenger, who is writing a book on the idea of U.B.I. — universal basic income — and Sam Altman, president of the tech incubator Y Combinator. Apparently, they think that $1,000 a month would be about right, “…about enough to cover housing, food, health care and other basic needs for many Americans.”

This equation, $12,000 per year, possibly works for the desperately poor in rural Mississippi. Perhaps it is intended for some 28-year-old citizen with no family or social life. Of course, there would be no money for that iPhone or cable service. Such a mythical person has a $300 rent-controlled apartment (utilities included), benefits covered by the government, doesn’t own a car, or buy gas or insurance, and then maybe doesn’t eat either. Though these millionaires clearly have no clue about what it costs the average American to eke out a living, they have identified some other fundamental questions:

“When you give everyone free money, what do people do with their time? Do they goof off, or do they try to pursue more meaningful pursuits? Do they become more entrepreneurial? How would U.B.I. affect economic inequality? How would it alter people’s psychology and mood? Do we, as a species, need to be employed to feel fulfilled, or is that merely a legacy of postindustrial capitalism?”

The Times article continues with, “Proponents say these questions will be answered by research, which in turn will prompt political change. For now, they argue the proposal is affordable if we alter tax and welfare policies to pay for it, and if we account for the ways technological progress in health care and energy will reduce the amount necessary to provide a basic cost of living.”

Often, the people who float ideas like this paint them as utopia, but I have a couple of additional questions. Why are venture capitalists interested in this notion? Will they also reduce their income to $1,000 per month? Seriously, that never happens. Instead, we see progressives in government and finance using an equation like this: “One thousand for you. One hundred thousand for me. One thousand for you. One hundred thousand for me…”

Fortunately, it is an unlikely scenario, because it would not move us toward equality but toward a permanent under-class forever dependent on those who have. Scary.


Your robot buddy is almost ready.

 

There is an impressive video out this week showing a rather adept robot going through the paces, so to speak, getting smacked down and then getting back up again. The year is 2016, and the technology is striking. The company is Boston Dynamics. One thing you know from reading this blog is that I subscribe to The Law of Accelerating Returns. So, as we watch this video, if we want to be hyper-critical we can see that the bot still needs some shepherding, tentatively handles the bumpy terrain, and is slow to get up after a violent shove to the floor. On the other hand, if you are at all impressed with the achievement, then you know that the people working on this will only make it better, more agile, more svelte, and probably faster on the comeback. Let’s call this ingredient one.

 

Last year I devoted several blogs to the advancement of AI, about the corporations rushing to be the go-to source for the advanced version of predictive behavior and predictive decision-making. I have also discussed systems like Amazon Echo that use the advanced Alexa Voice Service. It’s something like Siri on steroids. Here’s part of Amazon’s pitch:

“• Hears you from across the room with far-field voice recognition, even while music is playing
• Answers questions, reads audiobooks and the news, reports traffic and weather, gives info on local businesses, provides sports scores and schedules, and more with Alexa, a cloud-based voice service
• Controls lights, switches, and thermostats with compatible WeMo, Philips Hue, Samsung SmartThings, Wink, Insteon, and ecobee smart home devices
• Always getting smarter and adding new features and skills…”

You’re supposed to place it in a central position in the home, where it is proficient at picking up your voice from the other room. Just don’t go too far. The problem with Echo is that it’s stationary. Call Echo ingredient two. What Echo needs is ingredient one.

Some of the biggest players in the AI race right now are Google, Amazon, Apple, Facebook, IBM, Elon Musk, and the military, but the government probably won’t give you a lot of details on that. All of these billion-dollar players have a vested interest in machine learning and predictive algorithms. Not the least of which is Google. Google already uses Voice Search (“OK, Google”) to enable type-free searching. And its machine-learning software, TensorFlow, has been open-sourced. The better a machine can learn, the more reliable the Google autonomous vehicle will be. Any one of these folks could be ingredient three.
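For readers who want a sense of what “machine learning” means at its most basic, here is a minimal sketch using TensorFlow’s Keras API: fit a small model to example data so it can make predictions about inputs it has never seen. The data and model below are my own toy placeholders, assumed purely for illustration; they have nothing to do with Google’s actual products.

```python
# A minimal, illustrative TensorFlow sketch: "learning" here just means
# fitting a model's parameters to example data so it can make predictions.
# The dataset below is a toy placeholder, not a real workload.
import numpy as np
import tensorflow as tf

# Toy dataset: 256 examples with 4 features each, and a binary label
# derived from an arbitrary rule the model has to discover.
x_train = np.random.rand(256, 4).astype("float32")
y_train = (x_train.sum(axis=1) > 2.0).astype("float32")

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(4,)),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x_train, y_train, epochs=5, verbose=0)

# Prediction on a new, unseen input.
print(model.predict(np.array([[0.9, 0.8, 0.7, 0.6]], dtype="float32")))
```

The point is only the shape of the workflow: data in, parameters adjusted, predictions out. Everything interesting in the race described above is about doing this at massive scale.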

I’m going to try and close the loop on this, at least for today. Guess who bought Boston Dynamics last year? That would be corporation X, formerly known as Google X.

 


Design fiction. Think now.

This week I gave my annual lecture to Foundations students on design fiction. The Foundations Program at The Ohio State University Department of Design is composed primarily (though not entirely) of incoming freshmen aspiring to get into the program at the end of their first year. Out of roughly 90 hopefuls, as many as 60 could be selected.

 
Design fiction is something of an advanced topic for first-year students. It is a form of design research that goes beyond conventional forms of research and stretches into the theoretical. The stuff it yields (like all research) is knowledge, which should not be confused with the answer or the solution to a problem; rather, it becomes one of the tools that designers can use in crafting better futures.

 
Knowledge is critical.
One of the things that I try to stress to students is the enormity of what we don’t know. At the end of their education, students will know much more than they do now, but there is an iceberg of information out of sight that we can’t even begin to comprehend. This is why research is so critical to design. The theoretical comes in when we try to think about the future, perhaps the thing we know the least about. We can examine the tangible present and the recorded past, but the future is a trajectory that is affected by an enormous number of variables outside our control. We like to think that we can predict it, but rarely are we on the mark. So design fiction is a way of visualizing the future along with its resident artifacts and bringing it into the present, where we can examine it and ask ourselves if this is a future we want.

 
It is a different track. I recently attended the First International Conference on Anticipation. Anticipation is a completely new field of study. According to its founder Roberto Poli,

“An anticipatory behavior is a behavior that ‘uses’ the future in its actual decisional process. It is the process of using the future in the present, which includes a forward-looking stance and the use of that forward-looking stance to effect a change in the present. Anticipation therefore includes two mandatory components: a forward-looking attitude and the use of the former’s result for action.”

For me, this highlights some key similarities between design fiction and anticipation. At one level, all futures are fictions. Using a future design, a design that does not yet exist, to help us make decisions today is an appropriate methodology for this new field. Concomitantly, designers need a sense of anticipation as they create new products, communications, places, experiences, organizations, and systems.

 
The reality of technological convergence makes the future an unstable concept. The merging of cognitive science, genetics, nanotech, biotech, infotech, robotics, and artificial intelligence is like shuffling a dozen decks of cards. The combinations become mind-boggling. So while it may seem a bit advanced for first-year design students, from my perspective we cannot start soon enough to think about our profession as a crucial player in crafting what the future will look like. Design fiction—drawing from the future—will be an increasingly important tool.


Micropigs. The forerunner to ordering blue-skinned children.

 

Your favorite shade, of course.

Last week I tipped you off to Amy Webb, a voracious design futurist with tons of tidbits on the latest technologies that are affecting not only design but our everyday life. I saved a real whopper for today. I won’t go into her mention of CRISPR-Cas9 since I covered that a few months ago without Amy’s help, but here’s one that I found more than interesting.

Chinese genomic scientists have created some designer pigs. They are called ‘micropigs,’ and they are taking orders at $1,600 a pop for the little critters. It turns out that pigs are very close, genetically, to humans, but the big fellows were cumbersome to study (and probably too expensive to feed), so the scientists bred a smaller version by turning off the growth gene in their DNA. Voilà: micropigs.

Micropigs. Photo from BPI and nature.com

Plus, you can order them in different colors (they can do that, too). Now, of course, this is all to further research, and all proceeds will go to more research to help fight disease in humans, at least until they sell the patent on micropigs to the highest bidder.

So now we have genetic engineering to make a micropig fashion statement. Wait a minute. We could use genetic engineering for human fashion statements, too. After all, it’s a basic human right to be whatever color we want. Oh, no. We would never do that.

Next up is Google’s new email-response feature, coming soon to your Gmail account.


What did one AI say to the other AI?

 

I know what you want.

A design foundations student recently asked my advice on a writing assignment, something that might be affected by or affect design in the future. I told him to look up predictive algorithms. I have long contended that logic alone indicates that predictive algorithms, taking existing data and applying constraints, can be used to solve a problem, answer a question, or design something. With the advent of big data, the information going in only improves the reliability of the recommendations coming out. In case you haven’t noticed, big data is, well, big.
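To make that contention concrete, here is a minimal, hypothetical sketch of a predictive algorithm in Python: learn from existing data, predict an outcome for a new case, and then apply a constraint before acting on the recommendation. The data, the model, and the constraint are all invented for illustration.

```python
# A toy predictive algorithm: learn from existing data, predict, then apply
# a constraint to the result. Data, model, and constraint are illustrative only.
import numpy as np
from sklearn.linear_model import LinearRegression

# Existing data: say, a design parameter vs. a measured outcome.
X = np.array([[1.0], [2.0], [3.0], [4.0], [5.0]])   # design parameter
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])             # observed outcome

model = LinearRegression().fit(X, y)

# Predict the outcome for a new candidate...
candidate = np.array([[6.0]])
predicted = model.predict(candidate)[0]

# ...and apply a constraint (a hypothetical cost or safety ceiling).
MAX_ALLOWED = 11.0
recommendation = min(predicted, MAX_ALLOWED)
print(f"predicted={predicted:.2f}, constrained recommendation={recommendation:.2f}")
```

Swap in more data and a more capable model and the loop stays the same; that is essentially what big data amplifies.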

One of the design practitioners that I follow is Amy Webb. Amy has been thinking about this longer than I have, but clearly we think alike, and we are looking at the same things. I don’t know if she is as alarmed as I am. We’ve never spoken. In her recent newsletter, her focus was on, what else, predictive algorithms. Amy alerted me to a whole trove of new developments. There were so many that I have decided to make it a series of blogs, starting with this one.

Keep in mind that, as I write this, these technologies are in their infancy. If they already impress you, then the future will likely blow you away. The first was something known as Project Dreamcatcher, from Autodesk. These are the people who make Maya and AutoCAD and much of the software that designers, animators, engineers, and architects use every day. According to the website:

“The Dreamcatcher system allows designers to input specific design objectives, including functional requirements, material type, manufacturing method, performance criteria, and cost restrictions. Loaded with design requirements, the system then searches a procedurally synthesized design space to evaluate a vast number of generated designs for satisfying the design requirements. The resulting design alternatives are then presented back to the user, along with the performance data of each solution, in the context of the entire design solution space.”
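Reading that description, the workflow sounds like a generative search over a design space: propose candidates, discard those that violate the designer’s requirements, and rank the survivors by performance. Here is a toy Python sketch of that loop, with parameters, constraints, and scoring functions that I have invented for illustration; it is not Autodesk’s system, only the general pattern the quote describes.

```python
# A toy generative-design loop in the spirit of the description above:
# generate candidates, reject those that violate requirements, rank the rest.
# All parameters, constraints, and scores are invented for illustration.
import random

def generate_candidate():
    """Randomly sample a candidate design from a simple parameter space."""
    return {
        "thickness_mm": random.uniform(1.0, 10.0),
        "material": random.choice(["aluminum", "steel", "nylon"]),
    }

def estimate_cost(design):
    price_per_mm = {"aluminum": 2.0, "steel": 1.5, "nylon": 0.8}
    return design["thickness_mm"] * price_per_mm[design["material"]]

def estimate_strength(design):
    strength_per_mm = {"aluminum": 9.0, "steel": 14.0, "nylon": 5.0}
    return design["thickness_mm"] * strength_per_mm[design["material"]]

def meets_requirements(design):
    """Hypothetical functional and cost constraints."""
    return estimate_cost(design) <= 20.0 and estimate_strength(design) >= 50.0

# Search the design space and present the best alternatives with their data.
candidates = (generate_candidate() for _ in range(10_000))
feasible = [d for d in candidates if meets_requirements(d)]
ranked = sorted(feasible, key=estimate_strength, reverse=True)

for design in ranked[:3]:
    print(design, "cost:", round(estimate_cost(design), 2),
          "strength:", round(estimate_strength(design), 1))
```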

Another on Amy’s list was Google’s recently announced RankBrain, the company’s next venture into context-aware platforms, using advances in predictive algorithms to make what you see scarily tailored to who you are. According to Amy, from a 2012 article (this is old news, folks):

“With the adoption of the Siri application, iOS 5 mobile phones (Apple only) can now compare location, interests, intentions, schedule, friends, history, likes, dislikes and more to serve content and answers to questions.”

In other words, there’s lots more going on than you think when Siri answers a question for you. Well, RankBrain takes this to the next level, according to Bloomberg, which broke the story on RankBrain:

“For the past few months, a “very large fraction” of the millions of queries a second that people type into the company’s search engine have been interpreted by an artificial intelligence system, nicknamed RankBrain…’Machine learning is a core transformative way by which we are rethinking everything we are doing,’ said Google’s Chief Executive Officer Sundar Pichai on the company’s earnings call last week.”
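To give a rough flavor of what ranking search results by machine interpretation can look like at the very simplest level, here is a toy Python sketch that turns documents and a query into TF-IDF vectors and ranks the documents by cosine similarity. RankBrain reportedly uses learned representations far more sophisticated than this word-overlap approach, so treat the snippet as a loose, hypothetical stand-in, not Google’s method.

```python
# A loose, toy analogy for interpreting a query: represent documents and the
# query as vectors and rank documents by similarity to the query.
# Not RankBrain, just the general vector-ranking idea with invented data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "How to reset a forgotten email password",
    "Best pizza restaurants open late tonight",
    "Symptoms and treatment of the common cold",
]
query = ["forgot my email password"]

# Build a shared vocabulary, then vectorize documents and query alike.
vectorizer = TfidfVectorizer().fit(documents + query)
doc_vectors = vectorizer.transform(documents)
query_vector = vectorizer.transform(query)

# Score every document against the query and list them best match first.
scores = cosine_similarity(query_vector, doc_vectors)[0]
for score, doc in sorted(zip(scores, documents), reverse=True):
    print(f"{score:.3f}  {doc}")
```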

By the way, so far, most AI predicts much more accurately than we humans do.

If this is moving too fast for you, next week, thanks to Amy, I’ll highlight some applications of AI that will have you squirming.

PS: if you want to follow Amy Webb, go here.
