It’s all happening too fast.

 

Design fiction is my area of research and focus, and I have covered the difference between it and science fiction in previous blogs. Still, the two are quite closely related. Let me start with science fiction. There is a plethora of definitions for SF; here are two of my favorites.

The first is from Isaac Asimov:

“[Social] science fiction is that branch of literature which is concerned with the impact of scientific advance on human beings.” — Isaac Asimov, Science Fiction Writers of America Bulletin, 1951 1

The second is from Robert Heinlein:

“…realistic speculation about possible future events, based solidly on adequate knowledge of the real world, past and present, and on a thorough understanding of the scientific method.” 2

I especially like the first because it places people at the heart of the storytelling. The second speaks to real-world knowledge and an understanding of the scientific method. Here there is a clear distinction between science fiction and fantasy. Star Wars is not science fiction. Even George Lucas admits this. In a conversation at the Sundance Film Festival last year, he is quoted as saying, “Star Wars really isn’t a science-fiction film, it’s a fantasy film and a space opera.”3 While Star Wars involves space travel (which is technically science-based), the story has no connection to the real world; it may as well be Lord of the Rings.

I bring up these distinctions because design fiction is a hybrid of science fiction, but there is a difference. Sterling defines design fiction as “the deliberate use of diegetic prototypes to suspend disbelief about change.” Though even Sterling agrees that his definition is “heavy-laden,” the operative word in it is “deliberate.” In other words, a primary operand of design fiction is the designer’s intent. Design fiction has a purpose, and that purpose is to provoke discussion about the future. While it may entertain, entertainment is not its point. It needs to be a provocation, and for me, the more provocative, the better. The idea that we would go quietly into whatever future unfolds based upon whatever corporate or scientific manifesto is most profitable or most manageable makes me crazy.

The urgency arises from the fact that the future is moving way too fast. In The Lightstream Chronicles, some of the developments that I reserved for 25, 50, or even more years into the future are showing signs of life in the next two or three years. Next week I will introduce you to a couple of these technologies.

 

1. http://io9.com/5622186/how-many-defintions-of-science-fiction-are-there
2. Heinlein, R., 1983. The SF book of lists. In: Jakubowski, M., Edwards, M. (Eds.), The SF Book of Lists. Berkley Books, New York, p. 257.
3. http://www.esquire.com/entertainment/movies/a32507/george-lucas-sundance-quotes/

The end of code.

 

This week WIRED Magazine released its June issue announcing the end of code. That would mean that the ability to write code, so cherished in the job world right now, is on the way out. They attribute this tectonic shift to artificial intelligence, machine learning, neural networks and the like. In the future (which is taking place now) we won’t have to write code to tell computers what to do; we will just have to teach them. I have been over this before in a number of previous writings. An example: Facebook uses a form of machine learning by collecting data from the millions of pictures that are posted on the social network. When someone loads a group photo and identifies the people in the shot, Facebook’s AI remembers it by logging the key coordinates of a human face and attributing them to that name (aka facial recognition). If the same coordinates show up again in another post, Facebook identifies it as you. People load the data (on a massive scale), and the machine learns. By naming the person or persons in the photo, you have taught the machine.
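To make that concrete, here is a toy sketch of the idea in Python. This is not Facebook’s actual system; the tag_photo and identify functions, the three-number “landmark” vectors, and the matching threshold are all illustrative assumptions. It simply remembers every tagged example and matches a new face to the nearest one, which is the learning-by-labeling idea in miniature.

```python
# Toy sketch of learning-by-labeling (not Facebook's system).
# Assume each face has already been reduced to a few landmark
# coordinates (eye spacing, nose-to-mouth distance, etc.) as numbers.

import math

labeled_faces = []  # the "training data" that users create by tagging photos

def tag_photo(landmarks, name):
    """A user tags a face; the system simply remembers the example."""
    labeled_faces.append((landmarks, name))

def identify(landmarks, threshold=0.5):
    """Match a new face against every remembered example (nearest neighbor)."""
    best_name, best_dist = None, float("inf")
    for known, name in labeled_faces:
        dist = math.dist(known, landmarks)
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist <= threshold else "unknown"

# People load the data; the machine "learns."
tag_photo([0.42, 0.31, 0.77], "Alice")
tag_photo([0.40, 0.30, 0.80], "Alice")
tag_photo([0.65, 0.55, 0.20], "Bob")

print(identify([0.41, 0.32, 0.78]))  # close to Alice's examples -> "Alice"
print(identify([0.90, 0.10, 0.10]))  # too far from anything seen -> "unknown"
```

Nobody wrote a rule that says what Alice looks like; the tags are the teaching.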

The WIRED article makes some interesting connections about the evolution of our thinking concerning the mind, about learning, and how we have taken a circular route in our reasoning. In essence, the mind was once considered a black box; there was no way to figure it out, but you could condition responses, a la Pavlov’s dog. That logic changed with cognitive science and the idea that the brain is more like a computer. The computing analogy caught on, and researchers began to see the whole business of thought, memory, and thinking as stuff you could code, or hack, just like a computer. Indeed, it is this reasoning that has led to the notion that DNA is, in fact, codable, hence gene splicing through CRISPR. If it’s all just code, we can make anything; that was the thinking. Now there are machine learning and neural networks. You still code, but only to set up the structure by which the “thing” learns; after that, it’s on its own. The result is fractal and not always predictable. You can’t go back in and hack the way it is learning, because it has started to generate a private math that we can’t make sense of. In other words, it is a black box. We have, in effect, stymied ourselves.
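Here is a minimal sketch of what “coding only the structure” means, using a single artificial neuron in Python. Everything in it is an illustrative assumption on my part: the learning rate, the number of passes, and the use of logical AND as the thing being taught. The structure is written by hand; the behavior is not.

```python
# We write the structure: one neuron with two inputs and a bias.
weights = [0.0, 0.0]
bias = 0.0

def predict(x):
    # The hand-coded part: a weighted sum pushed through a threshold.
    return 1 if weights[0] * x[0] + weights[1] * x[1] + bias > 0 else 0

# The behavior is never written by hand; it is taught with labeled examples.
# Here the examples happen to describe logical AND.
examples = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]

for _ in range(20):                       # show the examples repeatedly
    for x, target in examples:
        error = target - predict(x)
        weights[0] += 0.1 * error * x[0]  # nudge numbers; no rules are rewritten
        weights[1] += 0.1 * error * x[1]
        bias += 0.1 * error

print(weights, bias)                       # the learned values are just numbers
print([predict(x) for x, _ in examples])   # [0, 0, 0, 1]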

There is an upside. To train a computer, you used to have to learn how to code. Now you just teach it by showing it examples or feeding it repetitive information, something anyone can do, though, at this point, some do it better than others.

Always the troubleshooter, I wonder what happens when we—mystified at a “conclusion” or decision arrived at by the machine—can’t figure out how to make it stop arriving at that conclusion. You can do the math.

Do we just turn it off?


Adapt or plan? Where do we go from here?

I just returned from Nottingham, UK, where I presented a paper at Cumulus 16, In This Place. The paper was entitled Design Fiction: A Countermeasure For Technology Surprise. An Undergraduate Proposal. My argument hinged on the idea that students need to start thinking about our technosocial future. Design fiction is my area of research, but if you were so inclined, you could probably choose a variant methodology to provoke discussion and debate about the future of design, what designers do, and their responsibility as creators of culture. In January, I had the opportunity to take an initial pass at such a class. The experiment was a different twist on a collaborative studio in which students from the three traditional design specialties worked together on a defined problem. The emphasis was on collaboration rather than the outcome. Some students embraced this while others pushed back. The pushback came from students fixated on building a portfolio of “things” or “spaces” or “visual communications” so that they could impress prospective employers. I can’t blame them for that. As educators, we have hammered home the old paradigm that getting a job at Apple or Google or (fill in the blank) is the ultimate goal of undergraduate education. But the paradigm is changing, and the model of the designer as a maker of “stuff” is wearing thin.

A great little polemic from Cameron Tonkinwise appeared recently that helped to articulate this issue. He points the finger at interaction design scholars and asks why they are not writing about or critiquing “the current developments in the world of tech.” He wonders whether anyone is paying attention. As designers and computer scientists, we are feeding a pipeline of more apps with minimal viability, with seemingly no regard for the consequences for social systems, and (one of my personal favorites) the behaviors we engender through our designs.

I tell my students that it is important to think about the future. The usual response is, “We do!” When I drill deeper, I find that their thoughts revolve around getting a job, making a living, finding a home, and finding a partner. They rarely include global warming, economic upheaval, feeding the world, or natural disasters. Why? They view these issues as beyond their control. We do not choose these things; they happen to us. Nevertheless, these are precisely the predicaments that need designers. I would argue these concerns are far more important than another app to count my calories or select the location for my next sandwich.

There is a host of others like Tonkinwise who see that design needs to refocus, but it often seems that a greater number blindly plod forward, unaware of the futures they are creating. I’m not talking about refocusing designers to be better at business or programming languages; I’m talking about making designers more responsible for what they design. And like Tonkinwise, I agree that it needs to start with design educators.
