Tag Archives: RankBrain

Paying attention.

I want to make a T-shirt. On the front, it will say, “7 years is a long time.” On the back, it will say, “Pay attention!”

What am I talking about? I’ll start with some background. This semester, I am teaching a collaborative studio with designers from visual communications, interior design, and industrial design. Our topic is Humane Technologies, and we are examining the effects of an Augmented Reality (AR) system that could be ubiquitous in 7 years. The process began with an immersive scan of the available information and emerging advances in AR, VR, IoT, human augmentation (HA) and, of course, AI. In my opinion, these are just a few of the most transformative technologies currently attracting the heaviest investment across the globe. And where the money goes, the most rapid advancement follows.

A conversation starter.

One of the biggest challenges for the collaborative studio class (myself included) is to think seven years out. Although we read Kurzweil’s Law of Accelerating Returns, our natural tendency is to think linearly, not exponentially. One of my favorite Kurzweil illustrations is this:

“Exponentials are quite seductive because they start out sub-linear. We sequenced one ten-thousandth of the human genome in 1990 and two ten-thousandths in 1991. Halfway through the genome project, 7 ½ years into it, we had sequenced 1 percent. People said, “This is a failure. Seven years, 1 percent. It’s going to take 700 years, just like we said.” Seven years later it was done, because 1 percent is only seven doublings from 100 percent — and it had been doubling every year. We don’t think in these exponential terms. And that exponential growth has continued since the end of the genome project. These technologies are now thousands of times more powerful than they were 13 years ago, when the genome project was completed.”1
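The arithmetic is easy to verify for yourself. Here is a minimal sketch (the doubling-every-year rate is Kurzweil’s premise, not a measured constant):

```python
# Starting at 1% of the genome sequenced and doubling every year,
# how many doublings does it take to reach (or pass) 100%?
progress = 0.01
doublings = 0
while progress < 1.0:
    progress *= 2
    doublings += 1

print(doublings)  # 7 -- exactly the "seven doublings" Kurzweil describes
```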

So when I hear a policymaker say, “We’re a long way from that,” I cringe. We’re not a long way away from that. The iPhone was introduced on June 29, 2007, not quite ten years ago. The ripple effects from that little technological marvel are hard to catalog. With the smartphone, we have transformed everything from social and behavioral issues to privacy and safety. As my students examine the next possible phase of our thirst for the latest and greatest, AR (and its potential for smartphone-like ubiquity), I want them to ask the questions that relate to supporting systems, along with the social and ethical repercussions of these transformations. At the end of it all, I hope that they will walk away with an appreciation for paying attention to what we make and why. For example, why would we make a machine that would take away our job? Why would we build a superintelligence? More often than not, I fear the answer is because we can.

Our focus on the technologies mentioned above is just a start. There are more than these, and we shouldn’t forget things like precise genetic engineering techniques such as CRISPR/Cas9 Gene Editing, neuromorphic technologies such as microprocessors configured like brains, the digital genome that could be the key to disease eradication, machine learning, and robotics.

Though they may sound innocuous by themselves, they each have gigantic implications for disruptions to society. The wild card in all of these is how they converge with each other and the results that no one anticipated. One such mutation would be when autonomous weapons systems (AI + robotics + machine learning) converge with an aggregation of social media activity to predict, isolate and eliminate a viral uprising.

From recent articles and research by the Department of Defense, this is no longer theoretical; we are actively pursuing it. I’ll talk more about that next week. Until then, pay attention.

 

1. http://www.bizjournals.com/sanjose/news/2016/09/06/exclusivegoogle-singularity-visionary-ray.htm

Big-Data Algorithms. Don’t worry. Be happy.

 

It’s easier for us to let the data decide for us. At least, that is the idea coming from Huge, the global digital design agency. Aaron Shapiro is the CEO. He says, “The next big breakthrough in design and technology will be the creation of products, services, and experiences that eliminate the needless choices from our lives and make ones on our behalf, freeing us up for the ones we really care about: Anticipatory design.”

Buckminster Fuller wrote about Anticipatory Design Science, but this is not that. Trust me. Shapiro’s version is about allowing big data, by way of artificial intelligence and neural networks, to become so familiar with us and our preferences that it anticipates what we need to do next. In this vision, I don’t have to decide what to wear or eat, how to get to work, when to buy groceries or gasoline, what color trousers go with my shoes, or when it’s time to buy new shoes. No decisions will be necessary. Interestingly, Shapiro sees this as a good thing. The idea comes from a flurry of activity about something called decision fatigue. What is that? In a nutshell, it says that our decision-making capacity is a reservoir that gradually gets depleted the more decisions we make, possibly as a result of body chemistry. After a long string of decisions, according to the theory, we are more likely to make a bad decision or none at all. Things like willpower disintegrate along with our decision-making.

Among the many articles on this topic in the last few months was one from Fast Company, which wrote that,

“Anticipatory design is fundamentally different: decisions are made and executed on behalf of the user. The goal is not to help the user make a decision, but to create an ecosystem where a decision is never made—it happens automatically and without user input. The design goal becomes one where we eliminate as many steps as possible and find ways to use data, prior behaviors and business logic to have things happen automatically, or as close to automatic as we can get.”

Supposedly this frees us up “for the ones we really care about.”
My questions are: who decides which decisions are important? And once we are freed from making decisions, will we even know that we have missed one we really care about?

Google Now is a digital assistant that not only responds to a user’s requests and questions but predicts wants and needs based on search history. It pulls flight information from emails and meeting times from calendars, and it offers recommendations on where to eat and what to do based on past preferences and current location; the user simply has to open the app for the information to compile.
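The mechanics behind that kind of anticipation are not mysterious. Here is a minimal, hypothetical sketch of one such rule (the data and the grocery example are invented for illustration, not taken from Google Now or Huge): the system infers a cadence from past behavior and acts before the user ever faces the decision.

```python
from datetime import date, timedelta

# Invented purchase history: roughly every two weeks
past_orders = [date(2016, 5, 1), date(2016, 5, 15), date(2016, 5, 29)]

# Infer the typical gap between orders from prior behavior
gaps = [(later - earlier).days for earlier, later in zip(past_orders, past_orders[1:])]
typical_gap = sum(gaps) / len(gaps)

# Anticipate the next order and act without asking
next_order_due = past_orders[-1] + timedelta(days=round(typical_gap))
if date.today() >= next_order_due:
    print("Groceries reordered. No decision was ever made.")
```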

It’s easy to forget that AI as we currently know it goes under the name of Facebook or Google or Apple or Amazon. We tend to think of AI as some ghostly future figure or a bank of servers, or an autonomous robot. It reminds me a bit of my previous post about Nick Bostrom and the development of superintelligence. Perhaps it is a bit like an episode of Person of Interest. As we think about designing systems that think for us and decide what is best for us, it might be a good idea to think about what it might be like to no longer think—as long as we still can.

 

 


Yes, you too can be replaced.

Over the past weeks, I have begun to look at the design profession and design education in new ways. It is hard to argue with the idea that all design is future-based. Everything we design is destined for some point beyond now where the thing or space, the communication, or the service will exist. If it already existed, we wouldn’t need to design it. So design is all about the future. For most of the 20th century and the last 16 years, the lion’s share of our work as designers has focused primarily on very near-term, very narrow solutions: a better tool, a more efficient space, a more useful user interface, or a more satisfying experience. In fact, the tighter the constraints and the narrower the problem statement, the greater the opportunity to apply design thinking to resolve it in an elegant and, hopefully, aesthetically or emotionally pleasing way. Such challenges are especially gratifying for seasoned professionals, who have developed an almost intuitive eye for framing these dilemmas, from which novel and efficient solutions result. Hence, over the course of years or even decades, the designer amasses a sort of micro-scale, big-data assemblage of prior experiences that helps him or her reframe problems and construct—alone or with a team—satisfactory methodologies and practices to solve them.

Coincidentally, this process of gaining experience is exactly the idea behind machine learning and artificial intelligence. But since computers can amass knowledge by analyzing millions of experiences and judgments, it is theoretically possible that an artificial intelligence could gain this “intuitive eye” to a degree far surpassing the capacity of any individual designer.
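To make the parallel concrete, here is a minimal, hypothetical sketch (the features, briefs, and labels are all invented for illustration): a nearest-neighbor model that “frames” a new problem by finding the prior problems it most resembles, which is roughly the mechanical version of a designer’s accumulated experience.

```python
# Toy illustration: frame a new design brief by analogy to prior experience.
from sklearn.neighbors import KNeighborsClassifier

# Each prior "experience" is a brief described by invented features:
# [budget (0-1), time pressure (0-1), novelty required (0-1)]
past_briefs = [
    [0.9, 0.2, 0.1],
    [0.8, 0.3, 0.2],
    [0.2, 0.9, 0.8],
    [0.1, 0.8, 0.9],
]
# How each brief was ultimately framed
framings = [
    "refine the existing product",
    "refine the existing product",
    "rethink from scratch",
    "rethink from scratch",
]

experience = KNeighborsClassifier(n_neighbors=3)
experience.fit(past_briefs, framings)

# A new brief gets framed by its resemblance to the closest prior ones
print(experience.predict([[0.15, 0.85, 0.7]]))  # ['rethink from scratch']
```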

That is the idea behind a brash (and annoyingly self-conscious) article from the American Institute of Graphic Arts (AIGA) entitled Automation Threatens To Make Graphic Designers Obsolete. Titles like this are a hook. Of course. Designers, deep down, assume that they can never be replaced. They believe this because, so far, artificial intelligence lacks understanding, empathy, and emotional verve at its core. We saw this earlier in 2016 when an AI chatbot went Nazi because a bunch of social media hooligans realized that Tay (the name of the Microsoft chatbot) was in learn mode. If you told “her” Nazis were cool, she believed you. It was proof, again, that junk in is junk out.

The article’s author, Rob Peart, pointed to Autodesk’s Dreamcatcher software, which can rapidly generate surprisingly creative, albeit roughly detailed, prototypes. Peart features a quote from an executive creative director at techno-ad-agency SapientNitro: “A designer’s role will evolve to that of directing, selecting, and fine tuning, rather than making. The craft will be in having vision and skill in selecting initial machine-made concepts and pushing them further, rather than making from scratch. Designers will become conductors, rather than musicians.”

I like the way we always position new technology in the best possible light: “You’re not going to lose your job. Your job is just going to change.” But tell that to the people who used to write commercial music, for example. The Internet has become a vast clearinghouse for every possible genre of music, all available for a pittance of what it would once have cost to have a musician write, arrange, and produce a custom piece. It’s called stock. There are stock photographs, stock logos, stock book templates, stock music, stock house plans, and the list goes on. All of these have caused a significant disruption to old methods of commerce, and some would say that these stock versions of everything lack the kind of polish and ingenuity that used to distinguish artistic endeavors. The artists whose jobs they have obliterated refer to the work with a four-letter word.

Now, I confess I have used stock photography and stock music, but I have also used a lot of custom photography and custom music as well. Still, I can’t imagine crossing the line to a stock logo or stock publication design. Perish the thought! Why? Because they look like four-letter words: homogenized, templated, and the world does not need more blah. It’s likely that we also introduced these new forms of stock commerce in the best possible light, as great democratizing innovations that would enable everyone to afford music, or art, or design, and that would let anyone make, create, or borrow the things that professionals used to do.

As artificial intelligence becomes better at composing music, writing blogs and creating iterative designs (which it already does and will continue to improve), we should perhaps prepare for the day when we are no longer musicians or composers but rather simply listeners and watchers.

But let’s put that in the best possible light: Think of how much time we’ll have to think deep thoughts.

 


Superintelligence. Is it the last invention we will ever need to make?

I believe it is crucial that we move beyond merely preparing to adapt or react to the future and actively engage in shaping it.

An excellent example of this kind of thinking is Nick Bostrom’s TED talk from 2015.

Bostrom is concerned about the day when machine intelligence exceeds human intelligence (the guess is somewhere between twenty and thirty years from now). He points out that, “Once there is super-intelligence, the fate of humanity may depend on what the super-intelligence does. Think about it: Machine intelligence is the last invention that humanity will ever need to make. Machines will then be better at inventing [designing] than we are, and they’ll be doing so on digital timescales.”

His concern is legitimate. How do we control something that is smarter than we are? Anticipating AI will require more strenuous design thinking than that which produces the next viral game, app, or service. But these applications are where the lion’s share of the money is going. When it comes to keeping us from being at best irrelevant or at worst an impediment to AI, Bostrom is guardedly optimistic about how we can approach it. He thinks we could, “[…]create an A.I. that uses its intelligence to learn what we value, and its motivation system is constructed in such a way that it is motivated to pursue our values or to perform actions that it predicts we would approve of.”

At the crux of his argument and mine: “Here is the worry: Making superintelligent A.I. is a really hard challenge. Making superintelligent A.I. that is safe involves some additional challenge on top of that. The risk is that if somebody figures out how to crack the first challenge without also having cracked the additional challenge of ensuring perfect safety.”

Beyond machine learning (which has many facets), there is a wide-ranging set of technologies, from genetic engineering to drone surveillance, to next-generation robotics, and even VR, that could be racing forward without someone thinking about this “additional challenge.”

This could be an excellent opportunity for designers. But, to do that, we will have to broaden our scope to engage with science, engineering, and politics. More on that in future blogs.


Privacy or paranoia?

 

If you’ve been a follower of this blog for a while, then you know that I am something of a privacy wonk. I’ve written about it before (about a dozen times), and I’ve even built a research project (that you can enact yourself) around it. A couple of things transpired this week to remind me that privacy is tenuous. (It could also be all the back episodes of Person of Interest that I’ve been watching lately, or David Staley’s post last April about the Future of Privacy.) First, I received an email from a friend this week alerting me to a little presumption that my software is spying on me. I’m old enough to remember when you purchased software as a set of CDs (or even disks). You loaded it on your computer, and it seemed to last for years before you needed to upgrade. Let’s face it, most of us use only a small subset of the features in our favorite applications. I remember using Photoshop 5 for quite a while before upgrading, and the same with the rest of what is now called the Adobe Creative Suite. I still use the primary features of Photoshop 5, Illustrator 10, and InDesign (ver. whatever), 90% of the time. In my opinion, the add-ons to those apps have just slowed things down, and of course, the expense has skyrocketed. Gone are the days when you could upgrade your software every couple of years. Now you have to subscribe at a clip of about $300 a year for the Adobe Creative Suite. Apparently, the old business model was not profitable enough. But then came the Adobe Creative Cloud. (Sound of an angelic chorus.) Now it takes my laptop about 8 minutes to load into the cloud and boot up my software. Plus, it stores stuff. I don’t need it to store stuff for me. I have backup drives and archive software to do that.

Back to the privacy discussion. My friend’s email alerted me to this little tidbit hidden inside the Creative Cloud Account Manager.

Learn elsewhere, please.

Under the Security and Privacy tab, there are a couple of options. The first is Desktop App Usage. Here, you can turn this on or off. If it’s on, one of the things it tracks is,

“Adobe feature usage information, such as menu options or buttons selected.”

That means it tracks what you click and select. Possibly this only occurs when you are using that particular app, but uh-uh, no thanks. Switch that off. Next up is a more pernicious option; it’s called Machine Learning. Hmm. We all know what that is, and I’ve written about that before, too. Just do a search. Here, Adobe says,

“Adobe uses machine learning technologies, such as content analysis and pattern recognition, to improve our products and services. If you prefer that Adobe not analyze your files to improve our products and services, you can opt-out of machine learning at any time.”

Hey, Adobe, if you want to know how to improve your products and services, how about you ask me, or better yet, pay me to consult. A deeper dive into ‘machine learning’ tells me more. Here are a couple of quotes:

“Adobe uses machine learning technologies… For example, features such as Content-Aware Fill in Photoshop and facial recognition in Lightroom could be refined using machine learning.”

“For example, we may use pattern recognition on your photographs to identify all images of dogs and auto-tag them for you. If you select one of those photographs and indicate that it does not include a dog, we use that information to get better at identifying images of dogs.”

Facial recognition? Nope. Help me find dog pictures? Thanks, but I think I can find them myself.

I know how this works: the more data the machine can feed on, the better it becomes at learning. I would just rather Adobe get their data by searching it out themselves. I’m sure they’ll be okay. (After all, there are a few million people who never look at their account settings.) Also, keep in mind, it’s their machine, not mine.
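For what it’s worth, the loop being described is simple enough to sketch. None of this is Adobe’s actual code; the function names, features, and structure are invented to illustrate why every opted-in correction is valuable: each one becomes a free, labeled training example.

```python
# Hypothetical sketch of an opt-in feedback loop for an auto-tagger.
# The "features" here are invented stand-ins for whatever a real
# system extracts from a photograph.
from sklearn.neighbors import KNeighborsClassifier

labeled_examples = []  # (photo_features, "dog" or "not dog")

def record_correction(photo_features, user_says_dog):
    """Every opt-in correction becomes one more labeled training example."""
    labeled_examples.append((photo_features, "dog" if user_says_dog else "not dog"))

def retrain():
    """Refit the auto-tagger on everything users have corrected so far."""
    features = [f for f, _ in labeled_examples]
    labels = [label for _, label in labeled_examples]
    tagger = KNeighborsClassifier(n_neighbors=1)
    tagger.fit(features, labels)  # more corrections, better dog detection
    return tagger

# Two users correct two auto-tags; the vendor's model quietly improves.
record_correction([0.9, 0.1], user_says_dog=True)
record_correction([0.2, 0.8], user_says_dog=False)
print(retrain().predict([[0.85, 0.15]]))  # ['dog']
```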

The last item on my privacy rant just validated my paranoia. I ran across this picture of Facebook billionaire Mark Zuckerberg hamming it up for his FB page.


 

In the background is his personal laptop. Upon closer inspection, we see that Zuck has a piece of duct tape covering his laptop cam and the dual microphones on the side. He knows.


 

Go get a piece of tape.


Nine years from now.

 

Today I’m on my soapbox, again, as an advocate of design thinking, whose toolbox includes design fiction.

In 2014, the Pew Research Center published a report on Digital Life in 2025. Therein, “The report covers experts’ views about advances in artificial intelligence (AI) and robotics, and their impact on jobs and employment.” Their nutshell conclusion was that:

“Experts envision automation and intelligent digital agents permeating vast areas of our work and personal lives by 2025 [nine years from now], but they are divided on whether these advances will displace more jobs than they create.”

On the upside, some of the “experts” believe that we will, as the brilliant humans that we are, invent new kinds of uniquely human work that we can’t replace with AI—a return to an artisanal society with more time for leisure and our loved ones. Some think we will be freed from the tedium of work and find ways to grow in some other “socially beneficial” pastime. Perhaps we will just be chillin’ with our robo-buddy.

On the downside, there are those who believe that not only will blue-collar, robotic jobs vanish, but so will white-collar, thinking jobs, and that will leave a lot of people out of work, since there are only so many jobs as clerks at McDonald’s or greeters at Wal-Mart. They think that some of this is the fault of education for not better preparing us for the future.

A few weeks ago I blogged about people who are thinking about addressing these concerns with something called Universal Basic Income (UBI), a $12,000-a-year gift to everyone in the world, since everyone will be out of work. I’m guessing (though it wasn’t explicitly stated) that this money would come from all the corporations that are raking in the bucks by employing the AIs, the robots, and the digital agents, but who don’t have anyone on the payroll anymore. The advocates of this idea did not address whether the executives at these companies, presumably still employed, will make more than $12,000, nor whether the advocates themselves were on the 12K list. I guess not. They also did not specify who would buy the services these corporations were offering if we are all out of work. But I don’t want to repeat that rant here.

I’m not as optimistic about the unique capabilities of humankind to find new, uniquely human jobs in some new, utopian artisanal society. Music, art, and blogs are already being written by AI, by the way. I do agree, however, that we are not educating our future decision-makers to adjust adequately to whatever comes along. The process of innovative design thinking is a huge hedge against technology surprise, but few schools have ever entertained the notion and some have never even heard of it. In some cases, it has been adopted, but as a bastardized hybrid to serve business-as-usual competitive one-upmanship.

I do believe that design, in its newest and most innovative realizations, is the place for these anticipatory discussions about the future. What we need is thinking that encompasses a vast array of cross-disciplinary input, including philosophy and religion, because these issues are more than black and white; they are ethically and morally charged, and they are inseparable from our culture—the scaffolding that we as a society use to answer our most existential questions. There is a lot of work to do if we are to survive ourselves.

 

 


Sitting around with my robo-muse and collecting a check.

 

Writing a weekly blog can be a daunting task, especially amid teaching, research, and, of course, the ongoing graphic novel. I can only imagine the challenge for those who do it daily. Thank goodness for friends who send me articles. This week the piece comes from The New York Times tech writer Farhad Manjoo. The article is entitled “A Plan in Case Robots Take the Jobs: Give Everyone a Paycheck.” The topic follows nicely on the heels of last week’s blog about the inevitability of robot companions. Unfortunately, both the author and the people behind this idea appear to be woefully out of touch with reality.

Here is the premise: After robots and AI have become ubiquitous and mundane, what will we do with ourselves? “How will society function after humanity has been made redundant? Technologists and economists have been grappling with this fear for decades, but in the last few years, one idea has gained widespread interest — including from some of the very technologists who are now building the bot-ruled future,” asks Manjoo.

The answer, strangely enough, seems to be coming from venture capitalists and millionaires like Albert Wenger, who is writing a book on the idea of U.B.I.—universal basic income—and Sam Altman, president of the tech incubator Y Combinator. Apparently, they think that $1,000 a month would be about right, “…about enough to cover housing, food, health care and other basic needs for many Americans.”

This equation, $12,000 per year, possibly works for the desperately poor in rural Mississippi. Perhaps it is intended for some 28-year-old citizen with no family or social life. Of course, there would be no money for that iPhone or cable service. Such a mythical person has a $300 rent-controlled apartment (utilities included), benefits covered by the government, doesn’t own a car or buy gas or insurance, and then maybe doesn’t eat either. Though these millionaires clearly have no clue about what it costs the average American to eke out a living, they have identified some other fundamental questions:

“When you give everyone free money, what do people do with their time? Do they goof off, or do they try to pursue more meaningful pursuits? Do they become more entrepreneurial? How would U.B.I. affect economic inequality? How would it alter people’s psychology and mood? Do we, as a species, need to be employed to feel fulfilled, or is that merely a legacy of postindustrial capitalism?”

The Times article continues with, “Proponents say these questions will be answered by research, which in turn will prompt political change. For now, they argue the proposal is affordable if we alter tax and welfare policies to pay for it, and if we account for the ways technological progress in health care and energy will reduce the amount necessary to provide a basic cost of living.”

Often, the people who float ideas like this paint them as utopian, but I have a couple of additional questions. Why are venture capitalists interested in this notion? Will they also reduce their income to $1,000 per month? Seriously, that never happens. Instead, we see progressives in government and finance using an equation like this: “One thousand for you. One hundred thousand for me. One thousand for you. One hundred thousand for me…”

Fortunately, it is an unlikely scenario, because it would not move us toward equality but toward a permanent under-class forever dependent on those who have. Scary.


Your robot buddy is almost ready.

 

There is an impressive video out this week showing a rather adept robot going through the paces, so to speak, getting smacked down and then getting back up again. The year is 2016, and the technology is striking. The company is Boston Dynamics. One thing you know from reading this blog is that I subscribe to The Law of Accelerating Returns. So, as we watch this video, if we want to be hyper-critical, we can see that the bot still needs some shepherding, tentatively handles the bumpy terrain, and is slow to get up after a violent shove to the floor. On the other hand, if you are at all impressed with the achievement, then you know that the people working on this will only make it better: more agile, more svelte, and probably faster on the comeback. Let’s call this ingredient one.

 

Last year I devoted several blogs to the advancement of AI, about the corporations rushing to be the go-to source for the advanced version of predictive behavior and predictive decision-making. I have also discussed systems like Amazon Echo that use the advanced Alexa Voice Service. It’s something like Siri on steroids. Here’s part of Amazon’s pitch:

“• Hears you from across the room with far-field voice recognition, even while music is playing
• Answers questions, reads audiobooks and the news, reports traffic and weather, gives info on local businesses, provides sports scores and schedules, and more with Alexa, a cloud-based voice service
• Controls lights, switches, and thermostats with compatible WeMo, Philips Hue, Samsung SmartThings, Wink, Insteon, and ecobee smart home devices
• Always getting smarter and adding new features and skills…”

You’re supposed to place it in a central position in the home, where it is proficient at picking up your voice from the other room. Just don’t go too far. The problem with Echo is that it’s stationary. Call Echo ingredient two. What Echo needs is ingredient one.

Some of the biggest players in the AI race right now are Google, Amazon, Apple, Facebook, IBM, Elon Musk, and the military, but the government probably won’t give you a lot of details on that. All of these billion-dollar companies have a vested interest in machine learning and predictive algorithms. Not the least of which is Google. Google already uses Voice Search (“OK, Google”) to enable type-free searching. And their machine learning software library, TensorFlow, has been open-sourced. The better a machine can learn, the more reliable the Google autonomous vehicle will be. Any one of these folks could be ingredient three.
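For a sense of what “open-sourced” puts in anyone’s hands, here is a minimal sketch using the Keras API that ships with TensorFlow (a toy fit of y = 2x − 1, not anything Google runs internally): a few lines define a model that learns a pattern purely from examples.

```python
import numpy as np
import tensorflow as tf

# A handful of examples of the pattern y = 2x - 1
xs = np.array([-1.0, 0.0, 1.0, 2.0, 3.0, 4.0]).reshape(-1, 1)
ys = np.array([-3.0, -1.0, 1.0, 3.0, 5.0, 7.0])

# One neuron, learning the relationship from the examples alone
model = tf.keras.Sequential([tf.keras.layers.Dense(units=1, input_shape=[1])])
model.compile(optimizer="sgd", loss="mean_squared_error")
model.fit(xs, ys, epochs=500, verbose=0)

print(model.predict(np.array([[10.0]])))  # close to 19
```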

I’m going to try and close the loop on this, at least for today. Guess who bought Boston Dynamics back in 2013? That would be X, the company formerly known as Google X.

 


Anticipation. Start by working with the future.

 

 

A couple of blogs ago I wrote about my experiment with the notion of ubiquitous surveillance. I chose this topic because in many ways surveillance is becoming ubiquitous. It is also the kind of technology that I see as potentially the most dangerous because it is slow and incremental and it grows through convergence.

Technological convergence is the idea that disparate technologies sometimes merge with, amplify, and/or enfold other technologies. An example often cited is the smartphone. At one time, its sole purpose was to make phone calls, while other technologies such as calculators, cameras, GPS devices, and video players were each separate devices. Gradually, over time, these separate technologies (and many more) converged into a single hand-held device, the smartphone. Today we have a smartphone that would blow the doors off a laptop from 15 years ago. The downside to technological convergence (TC) is that these changes can be very disruptive to markets. If you were in the business of GPS devices a few years ago, you know what this means.

TC makes change much more rapid and more disorderly. Change becomes unpredictable.

The same concept can be applied to other technological advancements. Biotech could merge capabilities with nanotechnology. Robotics could incorporate artificial intelligence. Nanotech, for example, could enable many of the technologies formerly in our devices to be implanted into our bodies.

Google Director of Engineering and noted futurist Ray Kurzweil is someone I follow. Not just because he’s brilliant, nor because I agree with his aspirations for future tech, but because he’s often right with his predictions; like 80% of the time. According to Peter Diamandis, writing for singularityhub.com,

“‘In the 2030s,’ said Ray, ‘we are going to send nano-robots into the brain (via capillaries) that will provide full-immersion virtual reality from within the nervous system and will connect our neocortex to the cloud. Just like how we can wirelessly expand the power of our smartphones 10,000-fold in the cloud today, we’ll be able to expand our neocortex in the cloud.’”

I’ll let you chew on that for a few sentences while I throw out another concept. Along with all of these “technologies” that seem to be striving for the betterment of humankind, there are more than a few disruptive technologies advancing just as fast. We could toss surveillance, hacking, and terrorism into that pot. There is no reason why these efforts cannot be advanced and converged at an equally alarming and potentially unpredictable rate. You can do the math.

Should that keep us from moving forward? Probably not. But at the same time, maybe we should start thinking about the future as something that could happen rather than as something impossible.

More to think about on a Friday afternoon.


Design fiction. Think now.

This week I gave my annual lecture to Foundations students on design fiction. The Foundations Program at The Ohio State University Department of Design consists primarily (though not entirely) of incoming freshmen aspiring to get into the program at the end of their first year. Out of roughly 90 hopefuls, as many as 60 could be selected.

 
Design fiction is something of an advanced topic for first-year students. It is a form of design research that goes beyond conventional forms of research and stretches into the theoretical. The stuff it yields (like all research) is knowledge, which should not be confused with the answer or the solution to a problem; rather, it becomes one of the tools that designers can use in crafting better futures.

 
Knowledge is critical.
One of the things that I try to stress to students is the enormity of what we don’t know. At the end of their education, students will know much more than they do now, but there is an iceberg of information out of sight that we can’t even begin to comprehend. This is why research is so critical to design. The theoretical comes in when we try to think about the future, perhaps the thing we know the least about. We can examine the tangible present and the recorded past, but the future is a trajectory that is affected by an enormous number of variables outside our control. We like to think that we can predict it, but rarely are we on the mark. So design fiction is a way of visualizing the future along with its resident artifacts, and bringing it into the present, where we can examine it and ask ourselves whether this is a future we want.

 
It is a different track. I recently attended the First International Conference on Anticipation. Anticipation is a completely new field of study. According to its founder Roberto Poli,

“An anticipatory behavior is a behavior that ‘uses’ the future in its actual decisional process. It is the process of using the future in the present, which includes a forward-looking stance and the use of that forward-looking stance to effect a change in the present. Anticipation therefore includes two mandatory components: a forward-looking attitude and the use of the former’s result for action.”

For me, this highlights some key similarities between design fiction and anticipation. At one level, all futures are fictions. Using a future design—design that does not yet exist—to help us make decisions today is an appropriate methodology for this new field. Concomitantly, designers need a sense of anticipation as they create new products, communications, places, experiences, organizations, and systems.

 
The reality of technological convergence makes the future an unstable concept. The merging of cognitive science, genetics, nanotech, biotech, infotech, robotics, and artificial intelligence is like shuffling a dozen decks of cards. The combinations become mind-boggling. So while it may seem a bit advanced for first-year design students, from my perspective we cannot start soon enough to think about our profession as a crucial player in crafting what the future will look like. Design fiction—drawing from the future—will be an increasingly important tool.
