Tag Archives: DNA

Corporate Sci-Fi.

Note: Also published on LinkedIn


Why your company needs to play in the future.

As a professor of design and a design fiction researcher, I write academic papers and blog weekly about the future. I teach about the future of design, and I create future scenarios, sometimes with my students, that provoke us to look at what we are doing, what we are making, why we are making it, and the ramifications that are inevitable. Primarily, I try to focus both designers and decision makers on the steps they can take today to keep from being blindsided tomorrow. Futurists seem to be all the rage these days, telling us to prepare for the Singularity, autonomous everything, or robots taking our jobs. Recently, Jennifer Doudna, co-inventor of the gene-editing technique CRISPR-Cas9, has been making the rounds and sounding the alarm that technology is moving so fast that we aren’t going to be able to contain a host of unforeseen (and foreseen) circumstances inside Pandora’s box. This concern should be prevalent beyond just the bioengineering fields, however, and extend into virtually anywhere that technology is racing forward, fueled by venture capital and the desperate need to stay on top of whatever space we are playing in. There is a lot at stake. Technology has already redefined privacy, behavioral wellness, personal autonomy, healthcare, labor, and maybe even our humanness, just to name a few.

Several recent articles have highlighted the changing world of design and how the pressure is on designers to make user adoption more like user addiction to ensure the success of a product or app. Behavioral economics is becoming a new arena in which we use algorithms to manipulate users. Some designers pass the buck to the clients or corporations that employ them for the questionable ethics of addictive products; others feel compelled to step aside and work on less lucrative projects or apply their skills to social causes. Most really care and want to help. But designers are uniquely positioned and trained to tackle these wicked problems, if only we would collaborate with them.

Beyond the companies that might be deliberately trying to manipulate us are those that unknowingly, or at least unintentionally, transform our behaviors in ways that are potentially harmful. Traditionally, we seek to hold someone responsible when a product or service is faulty: the physician for malpractice, the designer or manufacturer when a toy causes injury, a garment falls apart, or an appliance self-destructs. But as we move toward systemic designs that are less physical and more emotional, behavioral, or biological, design faults may not be so easy to identify, and their repercussions noticeable only after serious issues have arisen. In fact, we launch many of the apps and operating systems used today with admitted errors and bugs. Designers rely on real-life testing to identify problems, then issue patches, revisions, and versions.

In the realm of nanotechnology, while scientists and thought leaders have proposed guidelines and best practices, research and development teams in labs around the world race forward without regulation, creating molecule-sized structures, machines, and substances with no idea whether they are safe or what the long-term effects of exposure might be. In biotechnology, while folks like Jennifer Doudna appeal to a morally ethical cadre of researchers to tread carefully in the realm of genetic engineering (especially when it comes to inheritable gene manipulation), those morals and ethics are not universally shared. Recent headlines attest to the fact that some scientists are bent on moving forward regardless of the implications.

Some technologies, such as our smartphones, are equally invasive, yet we now consider them mundane. In just ten years since the introduction of the iPhone, we have transformed behaviors, upended our modes of communication, redefined privacy, distracted our attention, distorted reality, and manipulated a predicted 2.3 billion users as of 2017. [1] It is worth contemplating that this disruption comes not from a faulty product, but from one that can only be considered wildly successful.

There is a plethora of additional technologies poised to redefine our worlds yet again, including artificial intelligence, ubiquitous surveillance, human augmentation, robotics, virtual, augmented, and mixed reality, and the pervasive Internet of Things. Many of these technologies make their way into our experiences through the promise of better living, medical breakthroughs, or a safer and more secure life. But too often we ignore the potential downsides, the unintended consequences, or the systemic ripple effects that these technologies spawn. Why?

In many cases, we do not want to stand in the way of progress. In others, we believe that the benefits outweigh the disadvantages, yet this is the same thinking that has spawned some of our most complex and daunting systems, from nuclear weapons to air travel and the internal combustion engine. Each of these began with the best of intentions and, in many ways, was as successful and initially beneficial as it could be. At the same time, they advanced and proliferated far more rapidly than we were prepared to accommodate. Dirty bombs are a reality we did not expect. The alluring efficiency with which we can fly from one city to another has nevertheless spawned a gnarly network of air traffic, baggage logistics, and anti-terrorism measures that are arguably more elaborate than getting an aircraft off the ground. Traffic, freeways, infrastructure, safety, and the drain on natural resources are complexities never imagined with the revolution of personal transportation. We didn’t see the entailments of success.

This is not always true. There have often been scientists and thought leaders waving the yellow flag of caution. I have written about how, “back in 1975, scientists and researchers got together at Asilomar because they saw the handwriting on the wall. They drew up a set of resolutions to make sure that one day the promise of Bioengineering (still a glimmer in their eyes) would not get out of hand.”[2] Indeed, researchers like Jennifer Doudna continue to carry the banner. A similar conference took place earlier this year to alert us to the potential dangers of technology, and another to put forth recommendations and guidelines to ensure that when machines are smarter than we are, they carry on in a beneficent role. Too often, however, it is only the scientists and visionaries who attend these conferences. [3] Noticeably absent, though not always, is corporate leadership.

Nevertheless, in this country there remains no safeguarding regulation for nanotech, bioengineering, or AI research. It is a free-for-all, and any of these could massively disrupt not only our lifestyles but also our culture, our behavior, and our humanness. Who is responsible?

For nearly 40 years, an environmental movement has been spreading globally. Good stewardship is a good idea. But it wasn’t until most corporations saw a way for it to make economic sense that they began to focus on it, and then to promote it as their contribution to society, their responsibility, and their civic duty. As well-intentioned as some may be (and many are), many more are not paying attention to the effect of their technological achievements on the human condition.

We design most technologies with a combination of perceived user need and commercial potential. In many cases, these are coupled with more altruistic motivations such as a “do no harm” commitment to the environment and fair labor practices. As we move toward the capability to change ourselves in fundamental ways, are we also giving significant thought to the behaviors that we will engender by such innovations, or the resulting implications for society, culture, and the interconnectedness of everything?

Enter Humane Technology

Ultimately we will have to demand this level of thought, beginning with ourselves. But we should not fight this alone. Corporations concerned with appearing sensitive and proactive toward the environment and social justice need to add a new pillar to their edifice as responsible global citizens: humane technology.

Humane technology considers the socio-behavioral ramifications of products and services: digital dependency and addiction, job loss, genetic repercussions, and the human impact of nanotechnology, AI, and the Internet of Things.

To whom do we turn when a 14-year-old becomes addicted to her smartphone or obsessed with her social media popularity? We could condemn the parents for lack of supervision, but many of them are equally distracted. Who is responsible for the misuse of a drone to vandalize property or fire a gun or the anticipated 1 billion drones flying around by 2030? [4] Who will answer for the repercussions of artificial intelligence that spouts hate speech? Where will the buck stop when genetic profiling becomes a requirement for getting insured or getting a job?

While the backlash against these types of unintended consequences or unforeseen circumstances is not yet widespread, and citizens have not taken to the streets in mass protest, behavioral and social changes like these may be imminent as a result of dozens of transformational technologies currently under development in labs and R&D departments across the globe. Who is looking at the unforeseen or the unintended? Who is paying attention, and who is turning a blind eye?

It was possible to have anticipated texting and driving. It is possible to anticipate a host of horrific side effects of nanotechnology on both humans and the environment. It’s possible to tag the ever-present bad actor to any number of new technologies. It is possible to identify when the race to master artificial intelligence is coming at the expense of making it safe, and to draw the line. In fact, it is a marketing opportunity for corporate interests to take the lead and leverage their efforts to preempt adverse side effects as a distinctive advantage.

Emphasizing humane technology is an automatic benefit for an ethical company, and for those more concerned with profit than ethics (just between you and me), it offers the opportunity for a better brand image and (at least) the appearance of social concern. Whatever the motivation, we are looking at a future where we are either prepared for what happens next, or we are caught napping.

This responsibility should start with anticipatory methodologies that examine the social, cultural, and behavioral ramifications, and unintended consequences, of what we create. Designers and those trained in design research are excellent collaborators. My brand of design fiction is intended to take us into the future in an immersive and visceral way, to provoke the necessary discussion and debate that anticipate the storm, should there be one; promising utopia is rarely the tinder to fuel a provocation. Design fiction embraces the art of critical thinking and thought problems as a means of anticipating conflict and complexity before these become problems to be solved.

Ultimately, we have to depart from the idea that technology will be the magic pill that solves the ills of humanity. Design fiction and other anticipatory methodologies can help us acknowledge our humanness and our propensity to foul things up. If we do not self-regulate, regulation will inevitably follow, probably spurred on by some unspeakable tragedy. There is an opportunity now for the corporation to step up to the future with a responsible, thoughtful compassion for our humanity.



1. https://www.statista.com/statistics/330695/number-of-smartphone-users-worldwide/

2. http://theenvisionist.com/2017/08/04/now-2/

3. http://theenvisionist.com/2017/03/24/genius-panel-concerned/

4. http://www.abc.net.au/news/2017-08-31/world-of-drones-congress-brisbane-futurist-thomas-frey/8859008


Because we can.


It has happened to me more than once. I come up with what I think is a brilliant and seemingly original idea, do some preliminary research to make sure there aren’t already a hundred other ideas (at least published ones) just like it, and then I set to work sketching it out. Then (and it could be a matter of days to weeks), BAM, there is my idea fully fleshed out, rendered, and published—by someone else. I usually end up kicking myself for not having thought of it sooner, or at least bringing it to fruition somehow instantaneously. The reality, however, is that for that fully rendered version to get published, the creator(s) would have had to come up with the idea before me. Perhaps this amplifies the notion that there are no original ideas left in the world. Or, as an old friend used to argue, these concepts are floating around in a kind of ever-changing, cosmic psychosphere from which creative minds serendipitously siphon their ideas. So, of course, we’re going to have the same thoughts; we drink the same water. I think it is perhaps the former.

Using this as a backdrop, however, I examine the idea of the so-called white-hat hacker. There are hackers out there (good guys, reportedly) who are always looking for possible new threats and vulnerabilities in the world of code, systems, software, and platforms. Sometimes their pursuits are purely imaginary, taking the form of “What if?” scenarios, and then they roll up their sleeves to see if they can infect or penetrate the system or software in question. Then, in their benevolence, they share it with the world to make code and systems safer for all of us. Hmm. Okay, I’ll play along.

Recently, a team like this encoded some malware into physical strands of DNA. Huh? The story was reported by WIRED’s (man, I wish they’d stick to technology reporting) Andy Greenberg last week. Because DNA can maintain its structure for hundreds of years or more, you could theoretically store data within its indelible strands. (Remember the mosquitoes frozen in amber from Jurassic Park?) And even though DNA is electron-microscope-small, it is still a physical thing, full of code all its own. So it would seem that a University of Washington computer science professor decided to slip some malware code into a strand of physical DNA, and then, when the code is deciphered or uploaded, so to speak, the malware is in the system.

“We know that if an adversary has control over the data a computer is processing, it can potentially take over that computer,” says Tadayoshi Kohno, the University of Washington computer science professor who led the project, comparing the technique to traditional hacker attacks that package malicious code in web pages or an email attachment…

In this case, it is

“…the information stored in the DNA they’re sequencing.”
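The data-storage idea underneath all this is simple enough to sketch. Below is a toy Python example that packs two bits into each of DNA’s four bases (A, C, G, T) and unpacks them again. To be clear, this is my illustration of the general principle, not the encoding scheme the UW researchers actually used.

```python
# Toy sketch of storing bytes as DNA bases, using a simple
# 2-bits-per-base mapping (A=00, C=01, G=10, T=11).
# Illustrative only -- not the UW team's actual encoding.

BASE_FOR_BITS = {0b00: "A", 0b01: "C", 0b10: "G", 0b11: "T"}
BITS_FOR_BASE = {b: n for n, b in BASE_FOR_BITS.items()}

def encode(data: bytes) -> str:
    """Pack each byte into four bases, most-significant bits first."""
    bases = []
    for byte in data:
        for shift in (6, 4, 2, 0):
            bases.append(BASE_FOR_BITS[(byte >> shift) & 0b11])
    return "".join(bases)

def decode(strand: str) -> bytes:
    """Reverse of encode: every four bases become one byte."""
    out = bytearray()
    for i in range(0, len(strand), 4):
        byte = 0
        for base in strand[i:i + 4]:
            byte = (byte << 2) | BITS_FOR_BASE[base]
        out.append(byte)
    return bytes(out)

payload = b"EXPLOIT"
strand = encode(payload)
print(strand)  # prints "CACCCCGA..." -- four bases per byte
assert decode(strand) == payload
```

The point is only that any sequence of bytes, malicious code included, maps cleanly onto a sequence of bases; the hard part the researchers faced was getting real sequencing software to execute it.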

I don’t know. I’m not sure hackers should be messing with this stuff.


So hacking into some DNA sequencing software gets you what? Apparently, a rival maker of DNA sequencing software could steal some intellectual property, a malcontent could screw with somebody’s DNA analysis, or you could plant some malware into your GMO tomatoes to keep prying eyes from your secret formula. But these sound like remote scenarios at best.

“Regardless of any practical reason for the research, however, the notion of building a computer attack—known as an “exploit”—with nothing but the information stored in a strand of DNA represented an epic hacker challenge for the University of Washington team.” [emphasis mine]

Here’s an ethical conundrum for me: no practical reason for the research. Do these guys have too much time on their hands (and too much funding)? Are they genuinely hoping to do some good? Or are they doing stuff like this because they can and if it happens to open a can of worms in the process, well, at least we can publish a paper on it? Or maybe it’s just an epic hacker challenge.

So, as radically out there as all this tinkering is, it is safe to say (back to my original point) that someone else is thinking of it too, or already has. Could someone CRISPR a slice of DNA malware into the human genome to screw with someone’s pacemaker? Or could it just linger and wreak havoc at some later date? Maybe I’m not smart enough to think of all the horrific or diabolical downsides, but after all, it is DNA. I can only imagine that, in light of this new research, someone will come up with a diabolical downside. Therein lies the dilemma.

For me, if you’re tinkering with DNA and you haven’t thought about the diabolical downsides, you’re as reckless as a couple of kids skateboarding through speeding traffic. Someone’s going to get hurt. And there’s that word again. Reckless. Why is research money going toward things with no practical purpose? Maybe so that someone not so kind can come up with one.


Harmless? What do you think?


How should we talk about the future?


Imagine that there are two camps. One camp holds high confidence that the future will be manifestly bright and promising in all aspects of human endeavor. Our health will dramatically improve as we eradicate disease and possibly even death. Artificial Intelligence will be at our beck and call to make our tough decisions, order our lives, fight our wars, watch over us, and keep us safe. Hence, it is full speed ahead. The positives outweigh the negatives. Any missteps will be but a minor hiccup, and we’ll cross those bridges when we come to them.

The second camp believes that many of these promises are achievable. But they also believe that we are beginning to see strong evidence that technology is indeed moving exponentially, and that we are at a point on the curve where what many experts had categorized as impossible or a “long way off” is now knocking at our door.

Kurzweil’s Law of Accelerating Returns is proving remarkably accurate. Sure, we adapted from the horse and buggy to the automobile, and from there to air travel, to an irritatingly resilient nuclear threat, to computers, smartphones, and DNA sequencing. But each of these changes has arrived more rapidly than its predecessors.

“As exponential growth continues to accelerate into the first half of the twenty-first century,” Kurzweil writes, “it will appear to explode into infinity, at least from the limited and linear perspective of contemporary humans.”1

The second camp sees this rapid-fire proliferation as alarming. Not because we will get to utopia faster, but because we will be standing in the midst of a host of disruptive technologies all coming to fruition at the same time without the benefit of meaningful oversight or the engagement of our societies.

I am in the second camp.

Last week, I talked about genetic engineering. The designer-baby question was always pushed aside as a long way off. Not anymore. That’s just one change. Our privacy, in the form of “big data,” from seemingly innocent pastimes such as Facebook, is being severely compromised. According to security technologist Bruce Schneier,

“Facebook can predict race, personality, sexual orientation, political ideology, relationship status, and drug use on the basis of Like clicks alone. The company knows you’re engaged before you announce it, and gay before you come out—and its postings may reveal that to other people without your knowledge or permission. Depending on the country you live in, that could merely be a major personal embarrassment—or it could get you killed.”2

Facebook is just one of the seemingly benign things we use every day. By now, most of us consider using our smartphones for 75 percent of our day harmless as well, though we would also have to agree that it has changed us personally, behaviorally, and societally. And while the societal outcry against designer babies has been noticeable since last week’s stories about CRISPR-Cas9 gene splicing in human embryos, how long will it be before we accept it as the norm, and feel pressure in our own families to participate to stay competitive, or maybe even just to be insured?

The fact is that we like to think we can adapt to anything. To some extent, we pride ourselves on this resilience. Unfortunately, that seems to suggest that we are also powerless to affect these technologies, and that we have no say in when, if, or whether we should make them in the first place. Should we be proud of the fact that we are adapting to a complete lack of privacy, to the likelihood of terrorism, or to being replaced by an AI? These are my questions.

So I am encouraged when others raise these questions too. Recently, the tech media, which seems perpetually enamored of folks like Mark Zuckerberg and Elon Musk, called Zuckerberg a “bad futurist” because of his overoptimistic view of the future.

The article came from the Huffington Post’s Rebecca Searles. According to Searles,

“Elon Musk’s doomsday AI predictions aren’t ‘irresponsible,’ but Mark Zuckerberg’s techno-optimism is.”3

According to a Zuckerberg podcast,

“…people who are arguing for slowing down the process of building AI, I just find that really questionable… If you’re arguing against AI, then you’re arguing against safer cars that aren’t going to have accidents and you’re arguing against being able to better diagnose people when they’re sick.”3

Technology hawks are always promising safer and healthier as their rationale for unimpeded acceleration. I’m sure that’s the rah-rah rationale for designer babies, too. Think of all the illnesses we will be able to breed out of the human race. Searles and I agree that negative outcomes deserve equally serious consideration, and not after they happen. As she aptly puts it,

“Tackling tech challenges with a build-it-and-see-what-happens approach (a la Zuckerberg’s former “move fast and break things” development mantra) just isn’t suitable for AI.”

The problem is that Zuckerberg is not alone; nor is last week’s Shoukhrat Mitalipov. Ultimately, this reality of two camps is the rationale behind my approach to design fiction. As you know, the objective of design fiction is to provoke. Promising utopia is rarely the tinder to fuel a provocation.

Let’s remember Charles Dickens’ story of Ebenezer Scrooge. The ghost of Christmas past takes him back in time where, for the first time, he sees the truth about his past. But this revelation does not change him. Then the ghost of Christmas present opens his eyes to everything around him that he is blind to in the present. Still, Scrooge is unaffected. And finally, the ghost of Christmas future takes him into the future, and it is here that Scrooge sees the days to come as “the way it will be” unless he changes something now.

Somehow, I think the outcome would have been different if that last ghost had said, “Don’t worry. You’ll adapt.”

Let’s not talk about the future in purely utopian terms nor total doom-and-gloom. The future will not be like one or the other any more than is the present day. But let us not be blind to our infinite capacity to foul things up, to the potential of bad actors or the inevitability of unanticipated consequences. If we have any hope of meeting our future with the altruistic image of a utopian society, let us go forward with eyes open.


1. http://www.businessinsider.com/ray-kurzweil-law-of-accelerating-returns-2015-5

2. Bruce Schneier, “Data and Goliath: The Hidden Battles to Collect Your Data and Control Your World”

3. http://www.huffingtonpost.com/entry/mark-zuckerberg-is-a-bad-futurist_us_5979295ae4b09982b73761f0


The end of code.


This week WIRED Magazine released their June issue announcing the end of code. That would mean that the ability to write code, so cherished in the job market right now, is on the way out. They attribute this tectonic shift to artificial intelligence, machine learning, neural networks, and the like. In the future (which is taking place now) we won’t have to write code to tell computers what to do; we will just have to teach them. I have been over this before in a number of previous writings. An example: Facebook uses a form of machine learning by collecting data from the millions of pictures posted to the social network. When someone uploads a group photo and identifies the people in the shot, Facebook’s AI remembers it by logging the prime coordinates of a human face and attributing them to that name (aka facial recognition). If the same coordinates show up again in another post, Facebook identifies it as you. People load the data (on a massive scale), and the machine learns. By naming the person or persons in the photo, you have taught the machine.
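The teach-by-tagging loop is easy to sketch. In the toy Python below, every tag adds a labeled coordinate vector, and a new face is matched to the nearest one. The function names and landmark numbers are mine, invented for illustration; real systems use high-dimensional embeddings learned by deep networks.

```python
# Toy sketch of "teach by tagging": every tagged photo contributes a
# labeled coordinate vector, and a new face is matched to the nearest
# taught example (1-nearest-neighbor). The names and landmark values
# are made up for illustration.
import math

labeled = []  # (name, landmark vector) pairs accumulated from user tags

def tag(name, landmarks):
    """A user names a face in a photo: one more training example."""
    labeled.append((name, landmarks))

def identify(landmarks):
    """Match a new face to the closest taught example."""
    name, _ = min(labeled, key=lambda pair: math.dist(pair[1], landmarks))
    return name

# Users tag group photos -> the system quietly learns.
tag("alice", [0.10, 0.80, 0.42])
tag("bob",   [0.90, 0.20, 0.77])

# A new photo with nearly the same coordinates is auto-identified.
print(identify([0.12, 0.79, 0.40]))  # -> alice
```

Nobody coded a rule that says what alice looks like; the users supplied it, one tag at a time.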

The WIRED article makes some interesting connections about the evolution of our thinking concerning the mind, about learning, and how we have taken a circular route in our reasoning. In essence, the mind was once considered a black box; there was no way to figure it out, but you could condition responses, a la Pavlov’s dog. That logic changed with cognitive science, the idea that the brain is more like a computer. The computing analogy caught on, and researchers began to see thought, memory, and thinking as stuff you could code, or hack, just like a computer. Indeed, it is this reasoning that has led to the notion that DNA is, in fact, codable, hence splicing through CRISPR. If it’s all just code, we can make anything. That was the thinking. Now there is machine learning and neural networks. You still code, but only to set up the structure by which the “thing” learns; after that, it’s on its own. The result is fractal and not always predictable. You can’t go back in and hack the way it is learning because it has started to generate a private math that we can’t make sense of. In other words, it is a black box. We have, in effect, stymied ourselves.
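To make “code the structure, let it learn” concrete: in the sketch below, the only thing written by hand is a single perceptron and its update rule. The behavior it ends up with (a logical OR) comes entirely from the examples. This is my toy illustration of the principle, not how production neural networks are built.

```python
# Toy illustration of "code the structure, let it learn": we write
# only a single perceptron and its update rule. The behavior it
# acquires (logical OR) comes from the examples, not from any rule
# we coded.
examples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

w = [0.0, 0.0]   # weights: the structure starts knowing nothing
b = 0.0          # bias
rate = 0.1       # learning rate

for _ in range(20):                          # a few passes over the data
    for (x1, x2), target in examples:
        out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        err = target - out                   # teach by correction
        w[0] += rate * err * x1
        w[1] += rate * err * x2
        b += rate * err

# The learned weights now reproduce OR, though we never wrote "or".
for (x1, x2), target in examples:
    out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
    assert out == target
```

Scale the two weights up to millions, stack the layers, and you get the black box the article describes: the behavior lives in numbers the training produced, not in code anyone can read.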

There is an upside. To train a computer, you used to have to learn how to code. Now you just teach it by showing it or giving it repetitive information, something anyone can do, though at this point, some do it better than others.

Always the troubleshooter, I wonder what happens when we—mystified at a “conclusion” or decision arrived at by the machine—can’t figure out how to make it stop arriving at that conclusion. You can do the math.

Do we just turn it off?


Adapt or plan? Where do we go from here?

I just returned from Nottingham, UK, where I presented a paper for Cumulus 16, In This Place. The paper was entitled Design Fiction: A Countermeasure For Technology Surprise. An Undergraduate Proposal. My argument hinged on the idea that students need to start thinking about our technosocial future. Design fiction is my area of research, but if you were inclined to do so, you could probably choose a variant methodology to provoke discussion and debate about the future of design, what designers do, and their responsibility as creators of culture. In January, I had the opportunity to take an initial pass at such a class. The experiment was a different twist on a collaborative studio, where students from the three traditional design specialties worked together on a defined problem. The emphasis was on collaboration rather than the outcome. Some students embraced this while others pushed back. The push-back came from students fixated on building a portfolio of “things” or “spaces” or “visual communications” so that they could impress prospective employers. I can’t blame them for that. As educators, we have hammered home the old paradigm of getting a job at Apple or Google, or (fill in the blank) as the ultimate goal of undergraduate education. But the paradigm is changing, and the model of the designer as a maker of “stuff” is wearing thin.

A great little polemic from Cameron Tonkinwise recently appeared that helped to articulate this issue. He points the finger at interaction design scholars and asks why they are not writing about or critiquing “the current developments in the world of tech.” He wonders whether anyone is paying attention. As designers and computer scientists we are feeding a pipeline of more apps with minimal viability, with seemingly no regard for the consequences on social systems, and (one of my personal favorites) the behaviors we engender through our designs.

I tell my students that it is important to think about the future. The usual response is, “We do!” When I drill deeper, I find that their thoughts revolve around getting a job, making a living, finding a home, and a partner. They rarely include global warming, economic upheavals, feeding the world, or natural disasters. Why? They view these issues as beyond their control. We do not choose these things; they happen to us. Nevertheless, these are precisely the predicaments that need designers. I would argue these concerns are far more important than another app to count my calories or select the location for my next sandwich.

There is a host of others like Tonkinwise who see that design needs to refocus, but often it seems a greater number blindly plod forward, unaware of the futures they are creating. I’m not talking about refocusing designers to be better at business or programming languages; I’m talking about making designers more responsible for what they design. And like Tonkinwise, I agree that it needs to start with design educators.


The nature of the unpredictable.


Following up on last week’s post, I confessed some concern about technologies that progress too quickly and combine unpredictably.

Stewart Brand introduced the 1968 Whole Earth Catalog with, “We are as gods and might as well get good at it.”1 Thirty-two years later, he wrote that new technologies such as computers, biotechnology, and nanotechnology are self-accelerating, and that they differ from older, “stable, predictable and reliable,” technologies such as television and the automobile. Brand states that new technologies “…create conditions that are unstable, unpredictable and unreliable…. We can understand natural biology, subtle as it is, because it holds still. But how will we ever be able to understand quantum computing or nanotechnology if its subtlety keeps accelerating away from us?”2 If we combine Brand’s concern with Kurzweil’s Law of Accelerating Returns, and if technology is indeed accelerating exponentially as the evidence supports, will it be, as Brand suggests, unpredictable?

Last week I discussed an article from WIRED Magazine on the VR/MR company Magic Leap. The author writes,

“Even if you’ve never tried virtual reality, you probably possess a vivid expectation of what it will be like. It’s the Matrix, a reality of such convincing verisimilitude that you can’t tell if it’s fake. It will be the Metaverse in Neal Stephenson’s rollicking 1992 novel, Snow Crash, an urban reality so enticing that some people never leave it.”

And it will be. It is, as I said last week, entirely logical to expect it.

We race toward these technologies with visions of mind-blowing experiences or life-changing cures, and usually, we imagine only the upside. We all too often forget the human factor. Let’s look at some other inevitable technological developments.
• Affordable DNA testing will tell you your risk of inheriting a disease or debilitating condition.
• You can ingest a pill that tells your doctor (or you, in case you forgot) that you took your medicine.
• Soon we will have life-like robotic companions.
• Virtual reality is affordable, amazingly real and completely user-friendly.

These are simple scenarios; the realities will likely have aspects that make them even more impressive, more accessible, and more profoundly useful. And like most technological developments, they will also become mundane and expected. But along with them comes the possibility of a whole host of unintended consequences. Here are a few.
• The government’s universal healthcare requires that citizens have a DNA test before they qualify.
• The system monitors whether you’ve taken your medication and issues a fine if you haven’t, even if you don’t want your medicine.
• A robotic, life-like companion can provide support and encouragement, but it could also become an outlet for violent behavior or abuse.
• The virtual world is so captivating and pleasurable that you don’t want to leave, or it becomes outright addictive.

It seems as though whenever we involve human nature, we set ourselves up for unintended consequences. Perhaps it is not the nature of technology to be unpredictable; it is us.

1. Brand, Stewart. “WE ARE AS GODS.” The Whole Earth Catalog, September 1968, 1-58. Accessed May 04, 2015. http://www.wholeearth.com/issue/1010/article/195/we.are.as.gods.
2. Brand, Stewart. “Is Technology Moving Too Fast? Self-Accelerating Technologies-Computers That Make Faster Computers, For Example-May Have a Destabilizing Effect on Society.” TIME, 2000.

A Science Fiction Graphic Novel About Design and the Human Condition

Page 100

We’ve reached page 100, and The Lightstream Chronicles is already longer than many graphic novels. Nevertheless, as meaty as it has become, there is much more in the developing story. I was asked recently, “Where is it going?”

Expect some intrigue, angst and an action-packed climax, but as with most science fiction and even design fiction, it is about people.

If you know anything about the author, you know that I’m a designer, heavily ensconced in research in the areas of Design Fiction, Speculative Design, and Design Futures. The Lightstream Chronicles is a foray into a future world where we, like it or not, have been changed by the design and technology that we have embraced over the years. We are different. Our behaviors and expectations have changed. This is what design does to society and culture. Don’t get me wrong; it is not necessarily a bad thing. Design is a product of who we are as human beings. It is a reflection of humanity. Hence, it will reflect both bad and good, something that I believe is not a “fixable” tweak in our DNA. It is the essence of our design. In many respects, without it, we cease to be human. We have the choice between good and evil, and depending on what we choose, our design and the various manifestations of it will reflect those choices.

As I wrote,

“In The Lightstream Chronicles, the author creates a science fiction graphic novel and asks that the reader ponder the same self-rationalizing tendency as it applies to slick new enhancing technologies and the “design” decisions that fostered them. It looks at not only the option to make the decision, but the ethics of whether the decision should be made, as well as society’s competency to choose wisely.1”

Perhaps then, it becomes a graphic novel about the human condition. In a way then, it is like most fiction, but it is that and more. It also examines where we find meaning, especially when most of what we would consider our greatest fears—of death, disease, physical or mental decline, of enough food and water, sustaining the environment or having enough energy—have vanished. Is it enough to satisfy us, to fulfill us, and give us meaning or does it leave us wanting?

The only thing that seems to have survived man’s grasp and his ability to wipe it away is evil. The perfection of synthetic humans would seem to be the answer, though even then, man has found a way to twist them. And if we become the creators, are not our creations still made in our image?

What do you think?


1. Denison, E. Scott. When Designers Ask, “What If?”. Electronic MFA Thesis. Ohio State University, 2013. OhioLINK Electronic Theses and Dissertations Center.

Who is paying attention to the future? You’re standing in it. 

If you are familiar with this blog, you can tell that I am enamored of future tech, but at the same time my research in design fiction is often intended to provoke discussion and debate on whether these future technologies are really as wonderful as they are painted to be. Recently, I stumbled across a 2012 article from The Atlantic (recommended) by Hessel and Goodman that painted a potentially alarming picture of the future of biotech, or synthetic biology, known as synbio. The article is lengthy, and its two-year-old predictions have already been surpassed, but it first reminds us that technology, historically and currently, builds not in a linear progression but exponentially, like Moore’s Law. This is an oft-quoted precept of Ray Kurzweil, chief futurist for Google and all-around genius, and his reason for arguing that we are avalanching toward the Singularity. The logic of exponential growth in technology is pretty much undeniable at this point.
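The gap between linear and exponential advancement is worth seeing in numbers. A minimal sketch, assuming an illustrative two-year doubling period (the doubling period itself is my assumption, not a claim from the article):

```python
def linear_growth(start, increment, years):
    """Capability grows by a fixed increment each year."""
    return start + increment * years

def exponential_growth(start, doubling_period, years):
    """Capability doubles every `doubling_period` years (Moore's Law-style)."""
    return start * 2 ** (years / doubling_period)

# After 20 years, a capability that doubles every 2 years has
# multiplied 1024 times over; the linear version, gaining one
# unit per year, has reached only 21.
print(exponential_growth(1, 2, 20))  # 1024.0
print(linear_growth(1, 1, 20))       # 21
```

The two curves look almost identical in the early years, which is exactly why exponential change keeps catching us by surprise.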

Hessel and Goodman take us through a bit of verbal design fiction in which, in the very near future, it will be possible to create new DNA mathematically: new strains of bacteria, and new forms of life for good and for not so good. The article also underscores for me how technology is expanding beyond any hope of regulatory control, ethical consideration or legal ramification. No one has time to consider the abuse of “good technology” or the unintended consequences that inevitably follow from any new idea. If you are one of those people who tries to get through everything you have to read by taking in only the intro and the conclusion, here is a good takeaway from the article:

“The historical trend is clear: Whenever novel technologies enter the market, illegitimate uses quickly follow legitimate ones. A black market soon appears. Thus, just as criminals and terrorists have exploited many other forms of technology, they will surely soon turn to synthetic biology, the latest digital frontier.”

If you want to know how they dare make that assertion, you will have to read the article; it is not a stretch. The unintended consequences are staggering, to say the least.

Of course, these authors are dealing with only one of dozens, if not hundreds, of new technologies that, because of the exponential rate of advancement, are hanging over us like a canopy filling with water. Sooner or later, preferably sooner, we will all demand to bring these ideas into collaborative discussion.

In addition to my research, I write fiction. Call it science fiction or design fiction. It doesn’t matter to me. As dystopic as The Lightstream Chronicles may seem to my readers, in many ways I think that humanity will be lucky to live that long—unless we get a handle on what we’re doing now.


6 everyday things that have disappeared in the 22nd century.

As you know, The Lightstream Chronicles is a cyberpunk graphic novel set in the year 2159. A lot has changed. Last week we looked at 10 futuristic technologies that are more or less ubiquitous in that time. This week we’ll look at six things that have nearly disappeared.

1. Death

In the 22nd century, death is optional. Medicine has eliminated nearly all forms of disease (see #6), and genetics has isolated the gene that causes aging. The aging gene can be switched on and off (usually in a human’s second decade) through a simple medical procedure. Living forever is not for everyone, however. The suicide rate in New Asia is extremely high. Apparently, after a hundred years some people actually get bored with it all. Taking a dive off the Top City Spanner or jumping in front of a mag-lev train are the most popular methods of suicide. Some humans choose natural death over an unlimited lifespan. They are known as agers. They may take advantage of replacement organs or other enhancements but avoid the genetic tinkering that stops the aging process. The average life expectancy of an ager is around 148 years. Even with the most popular enhancements, agers often find themselves social oddities.

2. Religion

Millions died as the result of a brief but bloody war, executed by drones and initiated from rivalries in the Middle East (known as the Drone Wars). Religion and politics were blamed, but politics survived. Religious assembly of any faith became illegal, and while individuals are permitted to believe or worship anything they want, it must be kept private; no evangelizing or congregating is permitted. An individual can still visit a priest, mullah, or rabbi, but it must be one-on-one. When it comes to morality (which could be item number 7 on this list), the government has had to legislate to stave off widespread moral decay. For more than 60 years the ban on religion has been tightly monitored, though in the last few years it has not been as rigidly enforced. Those who practice their faith in private are “tagged” as such in their profiles, and they tend to come under more scrutiny than the non-religious. The government knows everything.

3. Privacy

This brings us to privacy. I’ve written extensively about the Mesh network that sees everything. It was developed as a deterrent to crime and is quite successful at that, most of the time. The network enables “impartial” software to monitor anything that constitutes “suspicious” activity. What constitutes suspicious activity? The law of the land is contained in the multi-volume Hong Kong Protocols, where most of what is considered illegal is that which infringes on the rights of another. Therefore, almost anything that is individual or consensual is within the law. For the system to work, however, it needs to see everything. Most of the public has grown accustomed to the idea that every waking and sleeping moment of their lives, including their thoughts, can be, and is, monitored. According to recent polls, the public takes comfort in government assurance that no humans are interpreting their activity and, hence, no one is making judgments on their behavior, no matter how bizarre.

4. Reality

Reality has taken a big hit. Most of the population spends dozens of hours a week living in their minds via the V (virtual immersions). These programmed immersions are infinitely detailed environmental and sensory simulations. When you’re in the V, there is no discernible difference from the real world. Participation can occur with the user’s own identity, or by assuming another from limitless combinations of gender, race, and species, and may entail a full range of experiences from a simple day on the beach to the aberrant and perverse. Immersions are highly regulated by the New Asia government. Certain immersive programs are required to have timeout algorithms to prevent a condition known as OB state, in which the mind is unable to re-adjust to reality and surface from the immersion, a side effect for individuals who are immersed for more than 24 hours. Certain content is age-restricted, and users must receive annual mental and bio-statistical fitness assessments to renew their access, all of which is monitored by the government.

And if that isn’t enough to jog your faith in what is real, another departure comes in the area of all things replicated. Replication of inanimate objects is widespread for food, beverages and hard goods. Many insist that there is a difference between a real and a replicated apple; thus, “pure-stuffs” are still sold, but they are very expensive and scarce. Replication is based on duplicating the molecular “fingerprints” of actual objects. With the escalating population and fewer people dying, replication has saved the world from starvation.

5. Humans

Though this might also fall under the reality check of #4, real humans, in the technical sense, are very hard to find. For a time, the word post-human, or transhuman, was in vogue, but this dissipated. Now the only discernible difference between humans and synthetics seems to be DNA. Everyone is enhanced to some degree. Enhancement itself has come to mean “…considerable intervention… beyond the basic human faculties and senses…” There is a host of human enhancements and nano-level implants that have become commonplace, mostly to adjust brain function and regulate body chemistry: spike adrenaline, induce sleep, reduce stress, enhance sexual activity, release pheromones, communicate telepathically, enhance athletics and muscle tone, eliminate excess fat, and so on. Everyone can have the body they want, including more fingers, toes, or other innovative additions, and if it isn’t available from their own DNA, it can be spliced in the lab to enable the growth of fur, a tail, or other combinations.

6. Disease and illness

Though many diseases in the 21st century were thought to be genetic in origin, medicine turned its focus to the cellular level. This provided the cancer breakthrough, and eventually almost anything that could wreak havoc on the human body at the cellular level was brought under control. This includes cancer, neurological and muscle diseases, organ failures, and old age. Then genetic engineering fine-tuned the genome to enable zero-defect births and isolated the genes that cause aging.

Since most cellular damage is done through abuse and environmental toxins, many people still choose to smoke or put other damaging substances into their bodies, with the assurance that diseased lungs, livers and kidneys can be grown in the lab from their own DNA and replaced on an outpatient basis.

Taxes are still collected.

