Defining [my] design fiction.

 

It’s tough to define something still so new that, in practice, there is no prescribed method and there are dozens of interpretations. I met some designers at a recent conference in Trento, Italy, who insist they invented the term in 1995, but most authorities attribute the origin to Bruce Sterling in his 2005 book, Shaping Things. The book was not about design fiction per se. Sterling is fond of coining neologisms, and this was one of those (like the term ‘spime’) that appeared in that book. It caught on. Sometime later, Sterling sought to clarify it, and his most quoted definition is, “The deliberate use of diegetic prototypes to suspend disbelief about change.” If you rattle that off to most people, they look at you glassy-eyed. Fortunately, in 2013, Sterling went into more detail:

“‘Deliberate use’ means that design fiction is something that people do with a purpose. ‘Diegetic’ is from film and theatre studies. A movie has a story, but it also has all the commentary, scene-setting, props, sets and gizmos to support that story. Design fiction doesn’t tell stories — instead, it designs prototypes that imply a changed world. ‘Suspending disbelief’ means that design fiction has an ethics. Design fictions are fakes of a theatrical sort, but they’re not wicked frauds or hoaxes intended to rob or fool people. A design fiction is a creative act that puts the viewer into a different conceptual space — for a while. Then it lets him go. Design fiction has an audience, not victims. Finally, there’s the part about ‘change’. Awareness of change is what distinguishes design fictions from jokes about technology, such as over-complex Heath Robinson machines or Japanese chindogu (‘weird tool’) objects. Design fiction attacks the status quo and suggests clear ways in which life might become different.” (Sterling, 2013)

The above definition is the one on which I base most of my research. I’ve written before about what distinguishes design fiction from science fiction, but I bring this up today because I frequently run into things that are labeled design fiction but are not. There are three non-negotiables for me: change, a critical eye on that change, and suspending disbelief.

Change
Part of the intent of design fiction is to get you to think about change. Things are going to change. It implies a future. I suppose the fiction itself doesn’t have to take place in the future; however, since we can’t go back in time, the only kind of change we’re going to encounter is the future variety. So, if the intent is to make us think, that thinking should have some redeeming benefit on the present, making us better prepared for the future. Such as, “Wow. That future sucks. Let’s not let that happen.” Or, “Careful with that future scenario, it could easily go awry.” Like that.

A critical eye on change
There are probably a lot of practitioners who would disagree with me on this point. The human race has a proclivity for messing things up. We often develop things in advance of actually thinking about what they might mean for society, or an economy, or our health, our environment, or our behavior. We design way too much stuff just because we can and because it might make us rich if we do. We need to think more before we act, and that means taking some responsibility for what we design. Looking into the future with a critical eye on how things could go wrong, or on how wrong they might be without us noticing, is a crucial element in my interpretation of intent.

Suspending disbelief
As Sterling says, the objective here is not to fool you but to get close enough to a realistic scenario that you accept it could happen. If it’s off-the-wall, WTF, conceptual art, absent any plausible existence, or sheer fantasy, it misses the point. I’m sure there’s a place for those, and no doubt a purpose, but call it something else, not design fiction. It’s the same reason that Star Wars is not design fiction. There’s design, and there’s fiction, but the intent is different.

I didn’t intend for this to turn into a rant, and it may all seem to you like splitting hairs, but these subtle differences are often important so that we know what we’re studying and why.

The nice thing about blogs is that if you have a different opinion, you can share.

 

Sterling, B., 2013. Design Fiction: “Patently Untrue” by Bruce Sterling [WWW Document]. WIRED. URL http://www.wired.co.uk/magazine/archive/2013/10/play/patently-untrue (accessed 12.12.14).

Privacy or paranoia?

 

If you’ve been a follower of this blog for a while, then you know that I am something of a privacy wonk. I’ve written about it before (about a dozen times), and I’ve even built a research project (that you can enact yourself) around it. A couple of things transpired this week to remind me that privacy is tenuous. (It could also be all the back episodes of Person of Interest that I’ve been watching lately, or David Staley’s post last April about the Future of Privacy.) First, I received an email from a friend alerting me to the possibility that my software is spying on me.

I’m old enough to remember when you purchased software as a set of CDs (or even diskettes). You loaded it on your computer, and it seemed to last for years before you needed to upgrade. Let’s face it, most of us use only a small subset of the features in our favorite applications. I remember using Photoshop 5 for quite a while before upgrading, and the same with the rest of what is now called the Adobe Creative Suite. I still use the primary features of Photoshop 5, Illustrator 10, and InDesign (ver. whatever) 90% of the time. In my opinion, the add-ons to those apps have just slowed things down, and of course, the expense has skyrocketed. Gone are the days when you could upgrade your software every couple of years. Now you have to subscribe at a clip of about $300 a year for the Adobe Creative Suite. Apparently, the old business model was not profitable enough. But then came the Adobe Creative Cloud. (Sound of an angelic chorus.) Now it takes my laptop about 8 minutes to load into the cloud and boot up my software. Plus, it stores stuff. I don’t need it to store stuff for me. I have backup drives and archive software to do that.

Back to the privacy discussion. My friend’s email alerted me to this little tidbit hidden inside the Creative Cloud Account Manager.

Learn elsewhere, please.

Under the Security and Privacy tab, there are a couple of options. The first is Desktop App Usage, which you can turn on or off. If it’s on, one of the things it tracks is:

“Adobe feature usage information, such as menu options or buttons selected.”

That means it tracks your keystrokes. Possibly this only occurs when you are using that particular app, but uh-uh, no thanks. Switch that off. Next up is a more pernicious option called Machine Learning. Hmm. We all know what that is, and I’ve written about it before, too. Just do a search. Here, Adobe says:

“Adobe uses machine learning technologies, such as content analysis and pattern recognition, to improve our products and services. If you prefer that Adobe not analyze your files to improve our products and services, you can opt-out of machine learning at any time.”

Hey, Adobe, if you want to know how to improve your products and services, how about you ask me, or better yet, pay me to consult. A deeper dive into ‘machine learning’ tells me more. Here are a couple of quotes:

“Adobe uses machine learning technologies… For example, features such as Content-Aware Fill in Photoshop and facial recognition in Lightroom could be refined using machine learning.”

“For example, we may use pattern recognition on your photographs to identify all images of dogs and auto-tag them for you. If you select one of those photographs and indicate that it does not include a dog, we use that information to get better at identifying images of dogs.”

Facial recognition? Nope. Help me find dog pictures? Thanks, but I think I can find them myself.

I know how this works. The more data the machine can feed on, the better it becomes at learning. I would just rather Adobe get their data by searching it out themselves. I’m sure they’ll be okay. (After all, there are a few million people who never look at their account settings.) Also, keep in mind, it’s their machine, not mine.
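
To make that concrete, here is a minimal sketch of the kind of feedback loop Adobe’s dog-tagging example describes: a toy model whose weights get nudged every time a user confirms or corrects a tag. All the names and numbers are my own illustration, not Adobe’s actual pipeline.

```typescript
// Toy online learner: a single logistic unit scores image features as
// dog / not-dog. Each user correction nudges the weights — this is the
// sense in which your photos and your clicks become training data.
type Features = number[];

class ToyDogTagger {
  private weights: number[];
  private bias = 0;

  constructor(numFeatures: number, private learningRate = 0.1) {
    this.weights = new Array(numFeatures).fill(0);
  }

  // Probability that the image contains a dog, per the current model.
  score(x: Features): number {
    const z = x.reduce((sum, xi, i) => sum + xi * this.weights[i], this.bias);
    return 1 / (1 + Math.exp(-z));
  }

  // User says "dog" (1) or "not a dog" (0): take one gradient step.
  correct(x: Features, label: 0 | 1): void {
    const error = label - this.score(x);
    this.weights = this.weights.map((w, i) => w + this.learningRate * error * x[i]);
    this.bias += this.learningRate * error;
  }
}

// Every opted-in user supplies both the photos (features) and the labels.
const tagger = new ToyDogTagger(3);
tagger.correct([0.9, 0.2, 0.7], 1); // auto-tag confirmed: dog
tagger.correct([0.8, 0.3, 0.6], 0); // user overrides: not a dog
console.log(tagger.score([0.9, 0.2, 0.7]).toFixed(2));
```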

The last item on my privacy rant just validated my paranoia. I ran across this picture of Facebook billionaire Mark Zuckerberg hamming it up for his FB page.


In the background is his personal laptop. Upon closer inspection, we see that Zuck has a piece of duct tape covering his laptop cam and his dual microphones on the side. He knows.


Go get a piece of tape.


News from a speculative future.

I thought I’d take a creative leap today with a bit of future fiction. Here is a short, speculative news article based on a plausible trajectory of science, tech, and social policy.

 
18 November 2018
The Senate passed a bill today that gives the green light to Omzin Corp. and the nation’s largest insurer, the federal government, to require digital monitoring of health care recipients. Omzin Corp. is the manufacturer of a microscopic additive that is slated to become part of all prescription drugs. When ingested, the additive reacts with stomach acids to transmit a signal to the insurer verifying that patients have taken their medication; the patient then naturally eliminates the tiny additive. A spokesperson for the federal government’s Universal Enrollment Plan (UEP) said that the Omzin additive was approved by the FDA as completely safe and would be a significant benefit to patients who have difficulty keeping track of complicated lists of medications and prescription requirements. A new ACAapp allows health care recipients to track on their mobile phones which drugs they have taken and when. The information is also transmitted to the health care provider. Opponents of the bill state that the legislation is a violation of privacy and tantamount to forced medication of the population. “Some people can avoid drugs through improved diet and exercise, but this bill pre-empts patients who would like to pursue that option,” according to Hub Garner, a representative of the citizens’ group MedChoice. “Patients who opt in to the program will pay less for their insurance than those who defy the order,” said the Senate minority leader.

Some definite possibilities here for future artifacts, i.e., design fictions. Remember, design fictions are provocations of possible futures for the purposes of discussion and debate. Plausible or outrageous? What do you think?


Step inside The Lightstream Chronicles

Some time ago I promised to step inside one of the scenes from The Lightstream Chronicles. Today, to commemorate the debut of Season 5—which goes live today—I’m going to deliver on that promise, partially.

 

Background

The notion started after giving my students a tour of the motion-capture lab at the Advanced Computing Center for Arts and Design (ACCAD). We were discussing VR, and sadly, despite all the recent hype, very few of us—including me—had ever experienced state-of-the-art virtual reality. On that tour, it occurred to me that through the past five years of continuous work on my graphic novel, a story built entirely in CG, I have a trove of scenes and scenarios that I could, in effect, step into. Of course, it is not that simple, as I have discovered this summer working with ACCAD’s animation specialist Vita Berezina-Blackburn. It turns out that my extremely high-resolution scenes are not ideally compatible with the Oculus pipeline.

The idea was, at first, a curiosity for me, but it quickly became apparent that there was another level of synergy with my work in guerrilla futures, a flavor of design fiction.

Design fiction, my focus of study, centers on the idea that, through prototypes and future narratives, we can engage people in thinking about possible futures, discussing and debating them, and instilling the idea of individual agency in shaping them. Unfortunately, too much design fiction ends up in the theoretical realm, within the confines of the art gallery, academic conferences, or workshops. The instances are few where the general public receives a future experience to contemplate and consider. Indeed, it has been something of a lament for me that my work in future fiction through the graphic novel can be experienced as pure entertainment without acknowledging the deeper issues of its socio-techno themes. At the core of experiential design fiction as introduced by Stuart Candy (2010) is the notion that future fictions can be inserted into everyday life whether the recipient has asked for them or not. The technique is one method of making the future real enough for us to ask whether this is the future we want and, if not, what we might do about it now.

Through my recent meanderings with VR, I see that this idea of immersive futures could be an incredibly powerful way to deliver these experiences.

The scene from Season 1 that I selected for this test.

 

About the video
This video is a test. We had no idea what we would get after I stripped down a scene from Season 1. Then we had a couple of weeks of trial and error re-making my files to be compatible with the system. Since one of the things that separates The Lightstream Chronicles from your average graphic novel/webcomic is the fact that you can zoom in 5x to inspect every detail, it is not uncommon, for example, for me to have more than two hundred 4K textures in any given scene. It also allows me, as the “director,” to change it up and dolly in or out to focus on a character or object within a scene without a resulting loss in resolution. To me, one of the drawbacks of many video games is getting in close to inspect a resident artifact: things usually start to “break up” into pixels the closer you get. However, in a real-time environment, you have to make concessions, at least for now, to make your textures render faster.
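
Some back-of-the-envelope arithmetic shows why the concessions are necessary. The numbers below are my own rough estimates, not measurements from ACCAD, but they illustrate the order of magnitude involved:

```typescript
// Rough estimate: what two hundred uncompressed 4K textures would cost in
// GPU memory — far more than the 4–8 GB cards of the Oculus era carried.
const side = 4096;          // pixels per edge of a "4K" texture
const bytesPerPixel = 4;    // uncompressed RGBA, 8 bits per channel
const mipOverhead = 4 / 3;  // a full mipmap chain adds roughly a third
const texturesPerScene = 200;

const perTextureMiB = (side * side * bytesPerPixel * mipOverhead) / 2 ** 20;
const perSceneGiB = (perTextureMiB * texturesPerScene) / 1024;

console.log(`${perTextureMiB.toFixed(0)} MiB per texture`); // ~85 MiB
console.log(`${perSceneGiB.toFixed(1)} GiB per scene`);     // ~16.7 GiB
```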

For this test, we didn’t apply all two hundred textures, just some essentials: for example, the cordial glasses, the liquid in the bottle, and the array of floating transparent files that hover over Techman’s desk. We did apply the key texture that defines the environment: the rusty, perforated metal wall that encloses Techman’s “safe-room” and protects it from eavesdropping. There are lots of other little glitches beyond unassigned textures, such as intersecting polygons and dozens of needed lighting tweaks, that make this far from prime time.

In the average VR game, you move your controller forward through space while you are either seated or standing. Either way, in most cases you are stationary. What distinguishes this from most VR experiences is that I can physically walk through the scene. In this test, we were in the ACCAD motion capture lab.

Wearing the Oculus in the MoCap lab while Lakshika manages the tether.

I’m sure you have seen pictures of this sort of thing before, where actors strap on sensors to “capture their motions” and translate them to virtual CG characters. This was the space in which I was working. It has boundaries, however, so I had to obtain those boundaries, in scale with my scene, to be sure that the room and the characters fit within the area of the lab. Dozens of tracking devices around the lab read sensors on the Oculus headset and ensure that once I strap it on, I can move freely within the limits of the virtual space while the system relates my movements to the context of the virtual scene.
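
That fitting step is simple to sketch. Here is a minimal version of the check, with made-up lab dimensions and scene scale, since the exact numbers aren’t the point:

```typescript
// Sketch: does a virtual room fit inside the physical capture volume?
// Lab dimensions and scene units are illustrative, not ACCAD's real specs.
const LAB = { width: 10, depth: 8 }; // walkable mocap area, in meters
const SCENE_UNITS_PER_METER = 100;   // e.g., a scene modeled in centimeters

function fitsInLab(sceneWidth: number, sceneDepth: number): boolean {
  return (
    sceneWidth / SCENE_UNITS_PER_METER <= LAB.width &&
    sceneDepth / SCENE_UNITS_PER_METER <= LAB.depth
  );
}

console.log(fitsInLab(900, 700));  // true: the set is physically walkable
console.log(fitsInLab(1200, 700)); // false: you'd walk out of tracking range
```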

Next week I’ll go back into the lab with a new scene and take a look at Kristin Broulliard and Keiji in their exchange from episode 97 of Season 3.

Next time.

Respond, reply, comment. Enjoy.

 


Who are you?

 

A few articles in the datasphere recently have centered on the pervasive tracking of our online activity, from the benign to the borderline unethical. One, from FastCompany, highlighted some practices that web marketers use to track the folks who visit their sites. The article by Steve Melendez lists a handful of these, ranging from basics like first-party cookies and A/B testing to more invasive methods such as psychological testing (thanks, Facebook), third-party tracking cookies, and differential pricing.

The cookie is, of course, the most basic. I use them on this site and on The Lightstream Chronicles to see if anyone is visiting, where they’re coming from, and a bunch of other minutiae. Using Google Analytics, I can, for example, see what city or country my readers are coming from, age and sex, whether they are regulars or new visitors, whether they visit via mobile or desktop, Apple or Windows, and, if they came to my site by way of referral, where they originated. Then I know if my ads for the graphic novel are working. I find this harmless. I have no interest in knowing your sexual preference or where you shop, and above all, I’m not selling anything (at least not yet). I’m just looking for more eyeballs. More viewers mean that I’m not wasting my time and that somebody is paying attention. Interestingly, a couple of months ago the EU internet authorities sent me a snippet of code that I was “required” to post on the LSC site alerting my visitors that I use cookies. Aside from the U.S., my highest viewership is from the UK. It’s interesting that they are aware that their citizens are visiting. Hmm.
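
For readers who haven’t peeked under the hood, the analytics side amounts to very little code. This is a sketch of the standard analytics.js calls of the era, with a placeholder tracking ID rather than my real one:

```typescript
// The analytics.js loader from Google's docs defines a global `ga` queue;
// declared here so the sketch stands alone. "UA-XXXXX-Y" is a placeholder.
declare function ga(...args: unknown[]): void;

ga("create", "UA-XXXXX-Y", "auto"); // sets the first-party _ga cookie
ga("send", "pageview");             // reports the visit, referrer, device, etc.
```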

I have software that allows me to A/B test, which means I could change up something on the graphic novel homepage and see if it gets more reaction than a previous version. But I barely have the time to publish a new blog or episode, much less create different versions and test them. A one-man show has its limitations.
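
The mechanics behind an A/B test are simple enough to sketch, though. The hashing scheme and tallies below are my own illustration, not any particular vendor’s tool:

```typescript
// Minimal A/B assignment: hash a visitor id to a stable bucket so the same
// visitor always sees the same homepage variant, then tally views and clicks.
function assignVariant(visitorId: string): "A" | "B" {
  let hash = 0;
  for (const ch of visitorId) {
    hash = (hash * 31 + ch.charCodeAt(0)) | 0; // simple string hash
  }
  return (hash & 1) === 0 ? "A" : "B";
}

// In-memory tally; a real tool would persist this and test for significance.
const stats = { A: { views: 0, clicks: 0 }, B: { views: 0, clicks: 0 } };

function recordView(visitorId: string): void {
  stats[assignVariant(visitorId)].views += 1;
}

function recordClick(visitorId: string): void {
  stats[assignVariant(visitorId)].clicks += 1;
}

// Compare conversion rates per variant once enough traffic accumulates.
recordView("reader-42");
recordClick("reader-42");
console.log(stats);
```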

The rest of the tracking methods highlighted in the above article require a lot of devious programming. Since I have my hands full with the basics, this stuff is way above my pay grade. Even if it wasn’t, I think it all goes a bit too far.

Personally, I deplore most internet advertising. I know that makes me a hypocrite since I use it from time to time to drive traffic to my site. I also realize that it is probably a necessary evil. Sites need revenue, or they can’t pump out the content on which we have come to rely. Unfortunately, the landscape often turns into a melee. Tumblr is a good example. Initially, they integrated their ads into the format of their posts. So as you are scrolling through the content, you see an ad within their signature brand presentation. Cool. Then they started doing separate in-line ads. These looked entirely different from their brand content, and the ads were those annoying things like “Grandma discovers the fountain of youth.” Not cool. Then they introduced this floating ad box that tracks you all the way down the page as you scroll through content. You get no break from it. It’s distracting, and based on the content, it can be horrifying, like Hillary Clinton staring at you for seven minutes. How much can a person take?

And it won’t go away.

Since my blog is future-oriented, the question arises: what does this have to do with the future? Plenty. These marketing techniques will only become more sophisticated. Many of them already incorporate artificial intelligence to map your activity and predict your every want and need—maybe even the ones you didn’t think anyone knew you had. Is this an invasion of privacy? If it is, it’s going to get more invasive. And as I’m fond of saying, we need to pay attention to these technologies and practices now, or we won’t have a say in where they end up. As a society, we have to do better than just adapt to whatever comes along. We need to help point these things in the right direction from the beginning.

 


Thought leaders and followers.

 

Next week, the World Future Society is having its annual conference. As a member, I really should be going, but I can’t make it this year. The future is a dicey place. There are people convinced that we can create a utopia, others warning of dystopia, and the rest settled somewhere in between. Based on promotional emails that I have received, one of the topics is “The Future of Evolution and Human Nature.” According to the promo:

“The mixed emotions and cognitive dissonance that occur inside each of us also scale upward into our social fabric: implicit bias against new perspectives, disdain for people who represent ‘other’, the fear of a new world that is not the same as it has always been, and the hopelessness that we cannot solve our problems. We know from experience that this negativity, hatred, fear, and hopelessness is not what it seems like on the surface: it is a reaction to change. And indeed we are experiencing a period of profound change. There is a larger story of our evolution that extends well beyond the negativity and despair that feels so real to us today. It’s a story of redefining and building infrastructure around trust, hope and empathy. It’s a story of accelerating human imagination and leveraging it to create new and wondrous things.

It is a story of technological magic that will free us from scarcity and ensure a prosperous lifestyle for everyone, regardless of where they come from.”

Whoa. I have to admit, this kind of talk makes me uncomfortable. Are negativity, hatred, fear, and hopelessness merely reactions to change? Will technosocial magic solve all our problems? This type of rhetoric sounds more like a movement than a conference that examines differing views on an important topic. It seems to frame caution as fear and negativity, and then throws in that hyperbole, hatred. Does it sound like the beginning of an agenda, with a framework that characterizes those who disagree as haters? I think it does. It’s a popular tactic.

These views do not by any means reflect the opinions of the entire WFS membership, but there is a significant contingent, such as the folks from Humanity+, who hold the belief that we can fix human evolution—even human nature—with technology. For me, this is treading into thorny territory.

What is human nature? Merriam-Webster online provides this definition:

“[…]the nature of humans; especially: the fundamental dispositions and traits of humans.” Presumably, that includes good traits and bad. Will our discussions center on which features to fix and which to keep or enhance? Who will decide?

What about the human condition? Can we change this? Should we? According to Wikipedia,

“The human condition is ‘the characteristics, key events, and situations which compose the essentials of human existence, such as birth, growth, emotionality, aspiration, conflict, and mortality.’ This is a very broad topic which has been and continues to be pondered and analyzed from many perspectives, including those of religion, philosophy, history, art, literature, anthropology, sociology, psychology, and biology.”

Clearly, there are a lot of different perspectives to be represented here. Do we honestly believe that technology will answer them all sufficiently? The theme of the upcoming WFS conference is “A Brighter Future IS Possible.” No doubt there will be a flurry of technosocial proposals presented there, and we should not dismiss them as the work of a bunch of fringe futurists. These voices are thought leaders. They lead thinking. Are we thinking? Are we paying attention? If so, then it’s time to discuss and debate these issues, or others will decide without us.


Future Shock

 

As you no doubt have heard, Alvin Toffler died on June 27, 2016, at the age of 87. Mr. Toffler was a futurist. The book for which he is best known, Future Shock, was a best seller in 1970 and considered required college reading at the time. In essence, Mr. Toffler said that the future would be a disorienting place if we just let it happen. He said we need to pay attention.

Credit: Susan Wood/Getty Images from The New York Times 2016

This week, The New York Times published an article entitled Why We Need to Pick Up Alvin Toffler’s Torch by Farhad Manjoo. As Manjoo observes, at one time (the 1950s, 1960s, and 1970s), the study of foresight and forecasting was important stuff that governments and corporations took seriously. Though I’m not sure I agree with Manjoo’s assessment of why that is no longer the case, I do agree that it is no longer the case.

“In many large ways, it’s almost as if we have collectively stopped planning for the future. Instead, we all just sort of bounce along in the present, caught in the headlights of a tomorrow pushed by a few large corporations and shaped by the inescapable logic of hyper-efficiency — a future heading straight for us. It’s not just future shock; we now have future blindness.”

At one time, this was required reading.

When I attended the First International Conference on Anticipation in 2015, I was pleased to discover that the blindness is not everywhere. In fact, many of the people deeply rooted in the latest innovations in science and technology, architecture, social science, medicine, and a hundred other fields are very interested in the future. They see an urgency. But most governments don’t, and I fear that most corporations, even the tech giants, are more interested in being first with the next zillion-dollar technology than in asking whether that technology is the right thing to do. Even less are they asking what repercussions might flow from these advancements and what the ramifications of today’s decision-making might be. We just don’t think that way.

I don’t believe that has to be the case. The World Future Society, for example, at its upcoming conference in Washington, DC, will be addressing the idea of futures studies as a requirement for high school education. They ask:

“Isn’t it surprising that mainstream education offers so little teaching on foresight? Were you exposed to futures thinking when you were in high school or college? Are your children or grandchildren taught how decisions can be made using scenario planning, for example? Or take part in discussions about what alternative futures might look like? In a complex, uncertain world, what more might higher education do to promote a Futurist Mindset?”

It certainly needs to be part of design education, and it is one of the things I vigorously promote at my university.

As Manjoo sums up in his NYT article,

“Of course, the future doesn’t stop coming just because you stop planning for it. Technological change has only sped up since the 1990s. Notwithstanding questions about its impact on the economy, there seems no debate that advances in hardware, software and biomedicine have led to seismic changes in how most of the world lives and works — and will continue to do so.

Yet, without soliciting advice from a class of professionals charged with thinking systematically about the future, we risk rushing into tomorrow headlong, without a plan.”

And if that isn’t just crazy, at the very least it’s dangerous.

 

 


Vision comes from looking to the future.

 

I was away last week, but I left off with a post about proving that some of the things we currently think of as sci-fi or fantasy are not only plausible, but may even be on their way to reality. In the last post, I was tracing the logical succession toward implantable technology, or biohacking.

The latest is a robot toy from a company called Anki. Once again, WIRED provided the background on this product, and it is an excellent example of technological convergence, which I have discussed many times before. Essentially, “technovergence” is when multiple cutting-edge technologies come together in unexpected and sometimes unpredictable ways. In this case, the toy brings together AI, machine learning, computer vision, robotics, deep character development, facial recognition, and a few more. According to the video below,

“There have been very few applications where a robot has felt like a character that connects with humans around it. For that, you really need artificial intelligence and robotics. That’s been the missing key.”

According to David Pierce, with WIRED,

“Cozmo is a cheeky gamer; the little scamp tried to fake me into tapping my block when they didn’t match, and stormed off when I won. And it’s those little tics, the banging of its lift-like arm and spinning in circles and squawking in its Wall-E voice, that really makes you want to refer to the little guy as ‘he’ rather than ‘it.’”

What strikes me as especially interesting is that my students designed their own version of this last semester. (I’m pretty sure that they knew nothing about this particular toy.) The semester was a rigorous design fiction class that took a hard look at what was possible in the next five to ten years. For some, the class was something like hell, but the robot my students put together is amazingly similar to Cozmo, in both features and possibilities.

I think this is proof of more than what is possible; it’s evidence that vision comes from looking to the future.


Future proof.

 

There is no such thing as future proof anything, of course, so I use the term to refer to evidence that a current idea is becoming more and more probable as something we will see in the future. The evidence I am talking about surfaced in a FastCo article this week about biohacking and the new frontier of digital implants. Biohacking has a loose definition and can refer to anything from using genetic material without regard to ethical procedures, to DIY biology, to pseudo-bioluminescent tattoos, to body modification for functional enhancement—see transhumanism. Last year, my students investigated this and determined that a society willing to accept internal implants was not a near-future scenario. Nevertheless, according to FastCo author Steven Melendez,

“a survey released by Visa last year that found that 25% of Australians are ‘at least slightly interested’ in paying for purchases through a chip implanted in their bodies.”

Melendez goes on to describe a wide variety of implants already in use for medical, artistic, and personal-efficiency purposes, and interviews Tim Shank, president of a futurist group called TwinCities+. Shank says,

“[For] people with Android phones, I can just tap their phone with my hand, right over the chip, and it will send that information to their phone.”


Amal Graafstra’s Hands [Photo: courtesy of Amal Graafstra] c/o WIRED

The popularity of body piercings and tattoos—also once considered invasive procedures—has skyrocketed. Implantable technology, especially as it becomes more functionally relevant, could follow a similar curve.

I saw this coming some years ago when writing The Lightstream Chronicles. The story, as many of you know, takes place in the far future, where implantable technology is mundane and part of everyday life. People regulate their body chemistry, access the Lightstream (the evolved Internet), and make “calls” using fingertips embedded with Luminous Implants. These future implants talk directly to implants in the brain and other systemic body centers to make adjustments or provide information.

An ad for Luminous Implants, and the “tap” numbers for local attractions.


When the stakes are low, mistakes are beneficial. In more weighty pursuits, not so much.

 

I’m from the old school. I suppose that sentence alone makes me seem like a codger. Let’s call it the eighties. Part of the art of problem solving was to work toward a solution and get it as tight as we possibly could before we committed to implementation. It was called the design process, and today it’s called “design thinking.” So it was heresy to me when I found myself, some years ago now, in a high-tech corporation where this doctrine was ignored. I recall a top-secret, new-product meeting in which the owner and chief technology officer said, “We’re going to make some mistakes on this, so let’s hurry up and make them.” He was not speaking about iterative design, which is part and parcel of the design process; he was talking about going to market with the product and letting the users illuminate what we should fix. Of course, the product was safe and met all the legal standards, but it was far from polished. The idea was that mass consumer trial-by-fire would provide us with an exponentially higher data return than if we tested all the possible permutations in a lab at headquarters. He was, apparently, ahead of his time.

In a recent FastCo article on Facebook’s race to be the leader in AI, author Daniel Terdiman cites some of Mark Zuckerberg’s mantras: “‘Move fast and break things,’ or ‘Done is better than perfect.’” We can debate this philosophically or maybe even ethically, but it is clearly today’s standard procedure for new technologies, new science and the incessant race to be first. Here is a quote from that article:

“Artificial intelligence has become a vital part of scaling Facebook. It’s already being used to recognize the faces of your friends in photographs, and curate your newsfeed. DeepText, an engine for reading text that was unveiled last week, can understand “with near-human accuracy” the content in thousands of posts per second, in more than 20 different languages. Soon, the text will be translated into a dozen different languages, automatically. Facebook is working on recognizing your voice and identifying people inside of videos so that you can fast forward to the moment when your friend walks into view.”

The story goes on to say that Facebook, though it is pouring tons of money into AI, is behind the curve, having begun only three or so years ago. Aside from the fact that FB’s accomplishments seem fairly impressive (at least to me), companies like Google and Microsoft are apparently way ahead. In the case of Microsoft, the effort began more than twenty years ago.

Today, the hurry-up is accelerated by open sourcing. Wikipedia explains the benefits of open sourcing as:

“The open-source model, or collaborative development from multiple independent sources, generates an increasingly more diverse scope of design perspective than any one company is capable of developing and sustaining long term.”

The idea behind open sourcing is that the mistakes will happen even faster, along with the advancements. It is becoming the de facto approach to breakthrough technologies. If fast is the primary, maybe even the only, goal, it is a smart strategy. Or is it a touch short-sighted? As we know, not everyone who can play with the code that a company has given them has that company’s best interests in mind. As for the best interests of society, I’m not sure those are even on the list.

To examine our motivations and the ripples that emanate from them is, of course, my mission with design fiction and speculative futures. Whether we like it or not, a by-product of technological development—aside from utopia—is human behavior. There are repercussions from the things we make and the systems that evolve from them. When your mantra is “Move fast and break things,” that’s what you’ll get. But there is certainly no time in the move-fast loop to consider the repercussions of your actions, or the unexpected consequences. Consequences will appear all by themselves.

The technologists tell us that when we reach the holy grail of AI (whatever that is), we will be better people and solve the world’s most challenging problems. But in reality, it’s not that simple. With the nuances of AI, there are potential problems, or mistakes, that could be difficult to fix; new predicaments that humans might not be able to solve and AI may not be inclined to resolve on our behalf.

In the rush to make mistakes, how grave will they be? And, who is responsible?


About the Envisionist

Scott Denison is an accomplished visual, brand, interior, and set designer. He is currently Assistant Professor and Foundations Coordinator for the Department of Design at The Ohio State University. He continues his research in design fiction that examines the design-culture relationship within future narratives and interventions. You can read his online graphic novel in weekly updates at http://thelightstreamchronicles.com. This blog contains commentary on all things future and often includes artist commentary on comic pages. You can find the author's professional portfolio at http://scottdenison(dot)com