The end of code.

 

This week WIRED Magazine released its June issue announcing the end of code. That would mean that the ability to write code, so cherished in the job market right now, is on the way out. The magazine attributes this tectonic shift to artificial intelligence, machine learning, neural networks and the like. In the future (which is taking place now) we won’t have to write code to tell computers what to do; we will just have to teach them. I have covered this before in a number of previous posts. An example: Facebook uses a form of machine learning by collecting data from the millions of pictures posted on the social network. When someone uploads a group photo and identifies the people in the shot, Facebook’s AI remembers it by logging the key coordinates of each face and attributing them to that name (aka facial recognition). If the same coordinates show up again in another post, Facebook identifies the face as yours. People load the data (on a massive scale), and the machine learns. By naming the person or persons in the photo, you have taught the machine.
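A minimal sketch of that teach-by-tagging idea, in Python (purely illustrative, with made-up names, numbers, and a toy nearest-match rule, not Facebook’s actual system): tagging a photo stores a face’s landmark coordinates under a name, and a later photo is matched to the closest stored example.

```python
# Toy illustration of "people load the data, the machine learns":
# tagged photos store landmark coordinates per name; new photos are
# identified by the nearest stored example.
import numpy as np

known_faces = {}  # name -> list of landmark-coordinate vectors

def tag(name, landmarks):
    """A user tags a photo: the machine 'learns' by storing the example."""
    known_faces.setdefault(name, []).append(np.asarray(landmarks, dtype=float))

def identify(landmarks, threshold=0.5):
    """A new photo arrives: return the closest known name, if close enough."""
    query = np.asarray(landmarks, dtype=float)
    best_name, best_dist = None, float("inf")
    for name, examples in known_faces.items():
        for example in examples:
            dist = np.linalg.norm(query - example)
            if dist < best_dist:
                best_name, best_dist = name, dist
    return best_name if best_dist < threshold else "unknown"

tag("Alice", [0.30, 0.42, 0.55, 0.61, 0.48])
tag("Bob",   [0.10, 0.22, 0.35, 0.41, 0.28])
print(identify([0.31, 0.41, 0.56, 0.60, 0.47]))  # -> "Alice"
```

Real systems use learned embeddings rather than raw coordinates, but the principle is the same: every tag is a labeled example, and scale does the rest.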

The WIRED article makes some interesting connections about the evolution of our thinking concerning the mind, about learning, and about how we have taken a circular route in our reasoning. In essence, the mind was once considered a black box; there was no way to figure it out, but you could condition responses, a la Pavlov’s dog. That logic changed with cognitive science, the idea that the brain is more like a computer. The computing analogy caught on, and researchers began to see thought, memory, and thinking as stuff you could code, or hack, just like a computer. Indeed, it is this reasoning that has led to the notion that DNA is, in fact, codable, hence gene splicing with CRISPR. If it’s all just code, we can make anything. That was the thinking. Now there are machine learning and neural networks. You still code, but only to set up the structure by which the “thing” learns; after that, it’s on its own. The result is fractal and not always predictable. You can’t go back in and hack the way it is learning because it has started to generate a private math—and we can’t make sense of it. In other words, it is a black box. We have, in effect, stymied ourselves.
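To make “you code the structure, then it learns” concrete, here is a hedged sketch in Python (a toy two-layer network trained on the XOR problem, entirely my own illustration): the lines we write define only the architecture and the training loop, while the behavior ends up encoded in learned weights that nobody hand-authored and that are hard to read back.

```python
# We code the structure (layer sizes, activation, training loop);
# the data shapes the weights. The "knowledge" lives in W1 and W2.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)               # XOR targets

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # structure we wrote
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(10000):                           # behavior it learns
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    grad_out = (out - y) * out * (1 - out)       # backpropagate the error
    grad_h = grad_out @ W2.T * h * (1 - h)
    W2 -= 0.5 * h.T @ grad_out; b2 -= 0.5 * grad_out.sum(axis=0)
    W1 -= 0.5 * X.T @ grad_h;   b1 -= 0.5 * grad_h.sum(axis=0)

print(out.round(2))  # should approach [0, 1, 1, 0]; the "why" is in the weights
```

Even in this toy, inspecting W1 and W2 after training tells you almost nothing about how the answer is produced, which is the black-box problem in miniature.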

There is an upside. To train a computer, you used to have to learn how to code. Now you just teach it by showing it examples or giving it repetitive information, something anyone can do, though at this point some do it better than others.

Always the troubleshooter, I wonder what happens when we—mystified at a “conclusion” or decision arrived at by the machine—can’t figure out how to make it stop arriving at that conclusion. You can do the math.

Do we just turn it off?


Adapt or plan? Where do we go from here?

I just returned from Nottingham, UK, where I presented a paper at Cumulus 16, In This Place. The paper was entitled Design Fiction: A Countermeasure For Technology Surprise. An Undergraduate Proposal. My argument hinged on the idea that students needed to start thinking about our technosocial future. Design fiction is my area of research, but if you were inclined to do so, you could probably choose a variant methodology to provoke discussion and debate about the future of design, what designers do, and their responsibility as creators of culture. In January, I had the opportunity to take an initial pass at such a class. The experiment was a different twist on a collaborative studio in which students from the three traditional design specialties worked together on a defined problem. The emphasis was on collaboration rather than the outcome. Some students embraced this while others pushed back. The push-back came from students fixated on building a portfolio of “things” or “spaces” or “visual communications” so that they could impress prospective employers. I can’t blame them for that. As educators, we have hammered home the old paradigm of getting a job at Apple or Google, or (fill in the blank), as the ultimate goal of undergraduate education. But the paradigm is changing, and the model of the designer as a maker of “stuff” is wearing thin.

A great little polemic from Cameron Tonkinwise appeared recently that helped to articulate this issue. He points the finger at interaction design scholars and asks why they are not writing about or critiquing “the current developments in the world of tech.” He wonders whether anyone is paying attention. As designers and computer scientists, we are feeding a pipeline of ever more minimally viable apps, with seemingly no regard for the consequences for social systems or (one of my personal favorites) the behaviors we engender through our designs.

I tell my students that it is important to think about the future. The usual response is, “We do!” When I drill deeper, I find that their thoughts revolve around getting a job, making a living, finding a home, and finding a partner. They rarely include global warming, economic upheavals, feeding the world, natural disasters, and the like. Why? They view these issues as beyond their control. We do not choose these things; they happen to us. Nevertheless, these are precisely the predicaments that need designers. I would argue that these concerns are far more important than another app to count my calories or pick the location of my next sandwich.

There is a host of others like Tonkinwise who see that design needs to refocus, but often it seems that a greater number blindly plod forward, unaware of the futures they are creating. I’m not talking about refocusing designers to be better at business or programming languages; I’m talking about making designers more responsible for what they design. And like Tonkinwise, I agree that it needs to start with design educators.


The nature of the unpredictable.

 

Following up on last week’s post, I confessed some concern about technologies that progress too quickly and combine unpredictably.

Stewart Brand introduced the 1968 Whole Earth Catalog with, “We are as gods and might as well get good at it.”1 Thirty-two years later, he wrote that new technologies such as computers, biotechnology and nanotechnology are self-accelerating, and that they differ from older, “stable, predictable and reliable,” technologies such as television and the automobile. Brand states that new technologies “…create conditions that are unstable, unpredictable and unreliable…. We can understand natural biology, subtle as it is, because it holds still. But how will we ever be able to understand quantum computing or nanotechnology if its subtlety keeps accelerating away from us?”2 If we combine Brand’s concern with Kurzweil’s Law of Accelerating Returns, and the current evidence does support exponential acceleration, will the result be, as Brand suggests, unpredictable?
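One hedged way to express the core of Kurzweil’s claim (my own simplification, not his full argument) is that the rate of progress is proportional to the capability already accumulated, which gives exponential growth:

```latex
\frac{dC}{dt} = kC \quad\Longrightarrow\quad C(t) = C_0 \, e^{kt}
```

Each fixed interval multiplies capability by the same factor, so the absolute jump in the next interval is always bigger than the last one; that is what separates Brand’s “stable, predictable and reliable” technologies from the self-accelerating ones.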

Last week I discussed an article from WIRED Magazine on the VR/MR company Magic Leap. The author writes,

“Even if you’ve never tried virtual reality, you probably possess a vivid expectation of what it will be like. It’s the Matrix, a reality of such convincing verisimilitude that you can’t tell if it’s fake. It will be the Metaverse in Neal Stephenson’s rollicking 1992 novel, Snow Crash, an urban reality so enticing that some people never leave it.”

And it will be. It is, as I said last week, entirely logical to expect it.

We race toward these technologies with visions of mind-blowing experiences or life-changing cures, and usually, we imagine only the upside. We all too often forget the human factor. Let’s look at some other inevitable technological developments.
• Affordable DNA testing will tell you your risk of inheriting a disease or debilitating condition.
• You will be able to ingest a pill that tells your doctor (or you, in case you forgot) that you took your medicine.
• Soon we will have lifelike robotic companions.
• Virtual reality will be affordable, amazingly real and completely user-friendly.

These are simplified scenarios; the real developments will likely have aspects that make them even more impressive, more accessible and more profoundly useful. And like most technological developments, they will also become mundane and expected. But along with them comes the possibility of a whole host of unintended consequences. Here are a few.
• The government’s universal healthcare requires that citizens have a DNA test before they qualify for coverage.
• The government monitors whether you’ve taken your medication and issues a fine if you haven’t, even if you didn’t want the medicine.
• A robotic, lifelike companion can provide support and encouragement, but it could also become an outlet for violent behavior or abuse.
• The virtual world is so captivating and pleasurable that you don’t want to leave, to the point where it becomes addictive.

It seems as though whenever we involve human nature, we set ourselves up for unintended consequences. Perhaps it is not the nature of technology to be unpredictable; it is us.

1. Brand, Stewart. “WE ARE AS GODS.” The Whole Earth Catalog, September 1968, 1-58. Accessed May 04, 2015. http://www.wholeearth.com/issue/1010/article/195/we.are.as.gods.
2. Brand, Stewart. “Is Technology Moving Too Fast? Self-Accelerating Technologies - Computers That Make Faster Computers, for Example - May Have a Destabilizing Effect on Society.” TIME, 2000.

Design fiction. I want to believe.

 

I have blogged in the past about logical succession. When it comes to creating a realistic design fiction narrative, there needs to be a sense of believability. Coates1 calls this “plausible reasoning”: “[…] putting together what you know to create a path leading to one or several new states or conditions, at a distance in time.” In other words, for the audience to suspend their disbelief, there has to be a basic understanding of how we got here. If you depict something that is too fantastic, your audience won’t buy it, especially if you are trying to say, “This could happen.”

“When design fictions are conceivable and realistically executed they carry a greater potential for making an impact and focusing discussion and debate around these future scenarios.”2

In my design futures collaborative studio, I ask students to do a rigorous investigation of future technologies, the ones that are on the bleeding edge. Then I want them to ask, “What if?” It is easier said than done, particularly because of technological convergence: the way technologies merge with other technologies to form heretofore unimagined opportunities.

There was an article this week in WIRED Magazine about a company called Magic Leap. They are in the MR business, mixed reality as opposed to virtual reality. With MR, the virtual imagery happens within the space you’re in—in front of your eyes—rather than in an entirely virtual space. The demo on WIRED’s site is pretty convincing. The future of MR and VR, for me, is easy to predict. Will it get more realistic? Yes. Will it get cheaper, smaller, and ubiquitous? Yes. At this point, a prediction like this is entirely logical. Twenty-five years ago it would not have been as easy to imagine.

As the WIRED article states,

“[…]the arrival of mass-market VR wasn’t imminent.[…]Twenty-five years later a most unlikely savior emerged—the smartphone! Its runaway global success drove the quality of tiny hi-res screens way up and their cost way down. Gyroscopes and motion sensors embedded in phones could be borrowed by VR displays to track head, hand, and body positions for pennies. And the processing power of a modern phone’s chip was equal to an old supercomputer, streaming movies on the tiny screen with ease.”
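As a hedged illustration of that borrowed-sensor trick (toy Python with a made-up read_gyro stub standing in for a real driver, not any headset’s actual code): head orientation can be tracked simply by integrating the gyroscope’s angular velocity over time.

```python
# Toy head-tracking loop: integrate angular velocity (deg/sec) into
# yaw/pitch/roll. Real systems fuse this with accelerometer and
# magnetometer data to correct the drift that pure integration accumulates.
import time

orientation = {"yaw": 0.0, "pitch": 0.0, "roll": 0.0}  # degrees

def read_gyro():
    """Placeholder for a real sensor driver: angular velocity per axis."""
    return {"yaw": 15.0, "pitch": -3.0, "roll": 0.5}

def update(dt):
    rates = read_gyro()
    for axis in orientation:
        orientation[axis] += rates[axis] * dt

last = time.time()
for _ in range(10):              # ~10 frames of tracking
    time.sleep(0.016)            # pretend we render at ~60 fps
    now = time.time()
    update(now - last)
    last = now
print(orientation)
```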

To have predicted that VR would be where it is today, with billions of dollars pouring into fledgling technologies and realistic, utterly convincing demonstrations, would have been illogical. It would have been like throwing a magnet into a bucket of nails, rolling it around, and guessing which nails would come out attached.

What is my point? I think it is important to remind ourselves that things will move blindingly fast, particularly when companies like Google and Facebook are throwing money at them. The advancement of one technology only adds to the possibilities of the next iteration, possibly in ways that no one can predict. As VR or MR merges with biotech, artificial intelligence, or just about anything else you can imagine, the possibilities are endless.

Unpredictable technology makes me uncomfortable. Next week I’ll tell you why.

 

  1. Coates, J.F., 2010. The future of foresight—A US perspective. Technological Forecasting & Social Change 77, 1428–1437.
  2. E. Scott Denison. “Timed-release Design Fiction: A Digital Online Methodology to Provoke Reflection on our Socio-Technological Future.” Edited by Michal Derda Nowakowski. ISBN: 978-1-84888-427-4. Interdisciplinary.net.

Are we ready to be gods? Revisited.

 

I base today’s blog on a 2013 post with a look at the world from the perspective of The Lightstream Chronicles, which takes place in the year 2159. To me, this is a very plausible future. — ESD

 

There was a time when crimes were simpler. Humans committed crimes against other humans — not so simple anymore. In that world, you have the old-fashioned mano a mano, but you also have human against synthetic, and synthetic against human. There are creative variations as well.

It was bound to happen. No sooner had the first lifelike robots become commercially available in the late 2020s than there were issues of ethics and misuse. Though scientists and ethicists discussed the topic in the early part of the 21st century, the problems escalated faster than the robotics industry had thought possible.

According to the 2007 Roboethics Roadmap,

“…problems inherent in the possible emergence of human function in the robot: like consciousness, free will, self-consciousness, sense of dignity, emotions, and so on. Consequently, this is why we have not examined problems — debated in literature — like the need not to consider robot as our slaves, or the need to guarantee them the same respect, rights and dignity we owe to human workers.”1

In the 21st century, many of the concerns within the scientific community centered on what we as humans might do to infringe upon the “rights” of the robot. Back in 2007, it occurred to researchers that the discussion of roboethics needed to include more fundamental questions regarding the ethics of the robots’ designers, manufacturers and users. What they did not foresee was how “unprepared” we were, as a society, for the role of creator-god, and how quickly humans would pervert the robot for formerly “unethical” uses, including but not limited to modification for crime and perversion.

Nevertheless, more than 100 years later, when synthetic human production is at the highest levels in history, the questions of ethics in both humans and their creations remain a significant point of controversy. As the 2007 Roboethics Roadmap concluded, “It is absolutely clear that without a deep rooting of Roboethics in society, the premises for the implementation of an artificial ethics in the robots’ control systems will be missing.”

After these initial introductions of humanoid robots, now seen as almost comically primitive, the technology, and in turn the reasoning, emotions, personality and realism, became progressively more sophisticated. Likewise, their implementations became progressively more like the society that manufactured them. They became images of their creators, both benevolent and malevolent.


In our image?


A series of laws was enacted to prevent the use of humanoid robots for criminal intent, yet at the same time, military interests were fully pursuing dispassionate, automated humanoid robots with the express purpose of extermination. It was truly a time of paradoxical technologies. Further complicating the issue were ongoing debates on the nature of what was considered “criminal.” Could a robot become a criminal without human intervention? Is something criminal if it is consensual?

These issues ultimately evolved into complex social, economic, political, and legal entanglements that included heavy government regulation and oversight where such was achievable. As this complexity and infrastructure grew to accommodate the continually expanding technology, the greatest promise and challenges came almost 100 years after those first humanoid robots. With the advent of virtual human brains grown in labs, the readily identifiable differences between synthetic humans and real humans gradually began to disappear. The similarities were so shocking, and the differences so undetectable, that new legislation was enacted to restrict the use of virtual humans, and a classification system was established to ensure visible distinctions among the vast variety of social synthetics.

The concerns of the very first Roboethics Roadmap are confirmed even 150 years into the future. Synthetics are still abused and used to perpetrate crimes. Their virtual humanness only adds an element of complexity, reality, and in some cases, horror to the creativity of how they are used.

 

 1. EURON Roboethics Roadmap, 2007.

Nine years from now.

 

Today I’m on my soapbox again as an advocate of design thinking, whose toolbox includes design fiction.

In 2014, the Pew Research Center published a report on Digital Life in 2025. In its words, “The report covers experts’ views about advances in artificial intelligence (AI) and robotics, and their impact on jobs and employment.” Their nutshell conclusion was that:

“Experts envision automation and intelligent digital agents permeating vast areas of our work and personal lives by 2025 [nine years from now], but they are divided on whether these advances will displace more jobs than they create.”

On the upside, some of the “experts” believe that we will, as the brilliant humans that we are, invent new kinds of uniquely human work that we can’t replace with AI—a return to an artisanal society with more time for leisure and our loved ones. Some think we will be freed from the tedium of work and find ways to grow in some other “socially beneficial” pastime. Perhaps we will just be chillin’ with our robo-buddy.

On the downside, there are those who believe that not only blue-collar, robotic jobs will vanish, but also white-collar, thinking jobs, and that will leave a lot of people out of work, since there are only so many jobs as clerks at McDonald’s or greeters at Wal-Mart. They think that some of this is the fault of education for not better preparing us for the future.

A few weeks ago I blogged about people who are thinking about addressing these concerns with something called Universal Basic Income (UBI), a $12,000 gift to everyone in the world, since everyone will be out of work. I’m guessing (though it wasn’t explicitly stated) that this money would come from all the corporations that are raking in the bucks by employing the AIs, the robots and the digital agents, but who don’t have anyone on the payroll anymore. The advocates of this idea did not address whether the executives at these companies, presumably still employed, will make more than $12,000, nor whether the advocates themselves were on the 12K list. I guess not. They also did not specify who would buy the services that these corporations were offering if we are all out of work. But I don’t want to repeat that rant here.

I’m not as optimistic about the unique capabilities of humankind to find new, uniquely human jobs in some new, utopian artisanal society. Music, art, and blogs are already being written by AI, by the way. I do agree, however, that we are not educating our future decision-makers to adjust adequately to whatever comes along. The process of innovative design thinking is a huge hedge against technology surprise, but few schools have ever entertained the notion and some have never even heard of it. In some cases, it has been adopted, but as a bastardized hybrid to serve business-as-usual competitive one-upmanship.

I do believe that design, in its newest and most innovative realizations, is the place for these anticipatory discussions about the future. What we need is thinking that encompasses a vast array of cross-disciplinary input, including philosophy and religion, because these issues are more than black and white; they are ethically and morally charged, and they are inseparable from our culture—the scaffolding that we as a society use to answer our most existential questions. There is a lot of work to do to survive ourselves.



Artificial intelligence isn’t really intelligence—yet. I hate to say I told you so.

 

Last week, we discovered that there is a new side to AI. And I don’t mean to gloat, but I saw this potential pitfall as fairly obvious. It is interesting that the real-world event that triggered all the talk occurred within days of episode 159 of The Lightstream Chronicles. In my story, Keiji-T, a synthetic police investigator virtually indistinguishable from a human, questions the conclusions of an artificial intelligence engine called HAPP-E. The High Accuracy Perpetrator Profiling Engine is designed to assimilate all of the minutiae surrounding a criminal act and spit out a description of the perpetrator. In today’s society, profiling is a human endeavor and is especially useful in identifying difficult-to-catch offenders. Though the procedure is relatively new in the 21st century and goes by many different names, the American Psychological Association says,

“…these tactics share a common goal: to help investigators examine evidence from crime scenes and victim and witness reports to develop an offender description. The description can include psychological variables such as personality traits, psychopathologies and behavior patterns, as well as demographic variables such as age, race or geographic location. Investigators might use profiling to narrow down a field of suspects or figure out how to interrogate a suspect already in custody.”

This type of data is perfect for feeding into an AI, which uses neural networks and predictive algorithms to draw conclusions and recommend decisions. Of course, AI can do it in seconds, whereas an FBI unit may take days, months, or even years. The way AI works, as I have reported many times before, is based on tremendous amounts of data. “With the advent of big data, the information going in only amplifies the veracity of the recommendations coming out.” In this way, machines can learn, which is the whole idea behind autonomous vehicles making split-second decisions about what to do next based on billions of possibilities and only one right answer.
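As a hedged sketch of that kind of data-driven profiling (a toy Python example with invented features and labels, not the fictional HAPP-E and not any real forensic system), a classifier trained on past cases can map crime-scene features to a most-likely offender profile in moments:

```python
# Toy "profiling engine": past cases are feature vectors, labels are
# offender profiles; the model learns the mapping and predicts new cases.
from sklearn.tree import DecisionTreeClassifier

# features: [night_time, forced_entry, weapon_used]  (invented encoding)
cases = [
    [0, 1, 0],
    [1, 1, 1],
    [0, 0, 0],
    [1, 0, 1],
]
profiles = ["opportunist", "organized", "opportunist", "organized"]

engine = DecisionTreeClassifier().fit(cases, profiles)
print(engine.predict([[1, 1, 1]]))  # -> ['organized'], in seconds rather than months
```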

In my sci-fi episode mentioned above, Detective Guren describes a perpetrator produced by the AI known as HAPP-E. Keiji-T, forever the devil’s advocate, counters with this comment: “Data is just data. Someone who knows how a probability engine works could have adopted the characteristics necessary to produce this deduction.” In other words, if you know what the engine is trying to do, theoretically you could ‘teach’ the AI with false data to produce a false deduction.
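Continuing the toy sketch above (still entirely hypothetical), Keiji-T’s objection looks like this in code: if you know which features the engine keys on, a handful of falsely labeled records is enough to flip the deduction.

```python
# Same toy engine as before, but the training data now includes records
# deliberately staged to associate "organized" features with the
# "opportunist" label: false data in, false deduction out.
from sklearn.tree import DecisionTreeClassifier

cases = [
    [0, 1, 0], [1, 1, 1], [0, 0, 0], [1, 0, 1],   # legitimate cases
    [1, 1, 1], [1, 1, 1], [1, 1, 1],               # poisoned records
]
profiles = ["opportunist", "organized", "opportunist", "organized",
            "opportunist", "opportunist", "opportunist"]

engine = DecisionTreeClassifier().fit(cases, profiles)
print(engine.predict([[1, 1, 1]]))  # now -> ['opportunist']: a false deduction
```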

Episode 159. It seems fairly obvious.


I published Episode 159 on March 18, 2016. Then an interesting thing happened in the tech world. A few days later, Microsoft launched an AI chatbot called Tay (a millennial nickname for Taylor) designed to have conversations with — millennials. The idea was that Tay would become as successful as Microsoft’s Chinese chatbot XiaoIce, which has been around for four years and engages millions of young Chinese in discussions of millennial angst. Tay used three platforms: Twitter, Kik and GroupMe.

Then something went wrong. In less than 24 hours, Tay went from tweeting that “humans are super cool” to spouting full-blown Nazi rhetoric. Soon after Tay launched, the super-sketchy enclaves of 4chan and 8chan decided to get malicious and manipulate the Tay engine, feeding it racist and sexist invective. If you feed an AI enough garbage, it will ‘learn’ that garbage is the norm and begin to repeat it. Before Tay’s first day was over, Microsoft took it down, removed the offensive tweets and apologized.


Crazy talk.

Apparently, Microsoft had considered that such a thing was possible but decided not to use filters (conversations to avoid or canned answers to volatile subjects). Experts in the chatbot field were quick to criticize: “‘You absolutely do NOT let an algorithm mindlessly devour a whole bunch of data that you haven’t vetted even a little bit.’ In other words, Microsoft should have known better than to let Tay loose on the raw uncensored torrent of what Twitter could direct her way.”
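For what it’s worth, the kind of filter the critics describe need not be elaborate. Here is a hedged toy sketch in Python (my own invented blocklist and placeholder learn/generate functions, not Microsoft’s or anyone’s production code):

```python
# Toy guardrail for a learning chatbot: volatile topics get a canned
# reply and are never fed back into the model as training data.
BLOCKED_TOPICS = {"nazi", "genocide", "racial slur"}   # illustrative list
CANNED_REPLY = "I'd rather not talk about that. Want to chat about music?"

def respond(user_message, learn, generate):
    text = user_message.lower()
    if any(topic in text for topic in BLOCKED_TOPICS):
        return CANNED_REPLY          # don't learn from it, don't echo it
    learn(user_message)              # only vetted input reaches the model
    return generate(user_message)

if __name__ == "__main__":
    log = []                         # stand-in for the model's training set
    reply = respond("tell me why the nazi party was right",
                    learn=log.append,
                    generate=lambda msg: "echo: " + msg)
    print(reply)   # canned reply
    print(log)     # -> [] : nothing was learned from the provocation
```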

The tech site Ars Technica also probed the question of “…why Tay turned nasty when XiaoIce didn’t?” The assessment thus far is that China’s highly restrictive measures keep social media “ideologically appropriate” and under control. The censors will close your account for bad behavior.

So, what did we learn from this? AI, at least as it exists today, has no understanding. It has no morals or ethics unless you give it some. The next questions are: Who decides what is moral and ethical? Will it be the people (we saw what happened with that) or some other financial or political power? Maybe the problem is with the premise itself. What do you think?


It will happen this way… part II

Last week I discussed the future of “self”, and the proliferation of the “look at me” culture. To recap,

“Already there are signs that the ‘look at me’ culture is pervasive in society. Selfies, sexting, a proliferation of personal photo and social media apps and, of course, the ubiquitous tattoo (the number of Americans with at least one tattoo is now at 45 million) are just a few of these indications.”

Then I posed the question, “What new ways will we find to stand out from the crowd?” In my future hypotheses, these could include:

Genetic design. With CRISPR-Cas9, designing the basics such as skin and eye color will be entirely possible, and probably less prone to ethical controversy than breeding for intelligence or battleground efficiency. The fashion angle, much like cosmetic surgery, will make the whole idea more palatable. Eventually, these color choices could be selected from the Pantone® library.

Tattoo II will likely take on a human-augmentation future. Advancements in OLED technology, wafer-thin implants, and eventually nanotechnology could permit the insertion or construction of a sub-dermal grid that displays full-color, moving tattoos. The implant could grab imagery from a wearable app, a hand-held device, or even The Cloud.

Transpeciation. Gradually, the idea of meddling with nature will become more acceptable. Society will begin to warm to the notion of more complicated DNA trickery. I don’t think it is a stretch to see people signing up for transpeciation: that would be tails, claws, fur, and the like.

Already we see parents actively engaged in choosing the sex of their child. In countries like India and China, pregnancies are monitored to abort a fetus of the “wrong” sex. Today, gender selection is being performed in the lab. When genetic manipulation becomes more mainstream, new options will arise. For example, parents may want to save their child from the rigors of sexual decision-making and choose a genetically intersexual child who would have both sex organs. I’m sure that designers will be able to figure out a non-sex option, too.

It all sounds fantastic or wildly speculative, but these things don’t happen overnight. Changes occur incrementally. Society will become more accustomed to departures from the norm and more accepting of things that were once taboo. History supports this. Technology just makes it weirder.

 


“It will happen this way:”

 

One of my favorite scenes in cinema comes from Sydney Pollack’s Three Days of the Condor, based on James Grady’s novel Six Days of the Condor. The film stars Robert Redford, Faye Dunaway and Max von Sydow. The movie site IMDb gives this tidy synopsis:

“A bookish CIA researcher finds all his co-workers dead, and must outwit those responsible until he figures out who he can really trust.”

The answer is probably: nobody. If you have not seen the movie, you should check it out. The premise of an all-knowing, all-powerful intelligence agency that plays fast and loose with the constitution and human life is all too real even 41 years later. There is a scene near the end of the movie where the hitman Joubert (played by von Sydow) tells CIA researcher Joe Turner (Redford) that he may never be safe again. The script for this film is outstanding. The character Joubert knows his profession and the people who hire him so well that he can predict the future with high confidence.

https://youtu.be/evfQ5QCRyeoY

 

In many ways, that is what futurists and those in foresight studies attempt to do: know the people, the behaviors, and the forces in play so well that they can make similar predictions. My variation on this, which I have written about previously, is called logical succession. I have used this technique extensively in crafting the story and events of my graphic novel The Lightstream Chronicles.

In previous blogs, I have explained why my characters have perfect bodies and why they show them off in shrink-wrapped bodysuits that leave little to the imagination. As technology moves forward, it changes us. Selfies have been around since the invention of the camera. Before that, it was called a self-portrait. But the proliferation of the selfie, the nude selfie, and sexting, for example, are by-products of the mobile phone and social media—both are offspring of technology.

With genetic editing already within reach via CRISPR-Cas9, the notion of a body free of disease is no longer a pipe dream. Promising research into manipulating gut hormones could mean the end of obesity. According to livescience.com:

“The endocrine system is the collection of glands that produce hormones that regulate metabolism, growth and development, tissue function, sexual function, reproduction, sleep, and mood, among other things.”

No wonder medical technology is working hard to find ways to hack into the body’s endocrine system. When these technologies become available, signing up for the perfect body will undoubtedly follow. Will these technologies also change behaviors accordingly?

Psychologists point to a combination of peer pressure, the need for approval, and narcissism behind the rise of selfie culture, but will that only increase when society has nothing to hide? Will it increase the competition to show off every enhanced detail of the human body? In my future fiction, The Lightstream Chronicles, the answer is yes.

Already there are signs that the “look at me” culture is pervasive in society. Selfies, sexting, a proliferation of personal photo and social media apps and, of course, the ubiquitous tattoo (the number of Americans with at least one tattoo is now at 45 million) are just a few of these indications.

If this scenario plays out, what new ways will we find to stand out from the crowd? I’ll continue this next week.


Writing a story that seemingly goes on forever. LSC update.

 

This week I wrapped up the rendering and text for the last episodes of Season 4 of The Lightstream Chronicles. Going back to the original publication calendar that I started in 2012, Chapter 4 was supposed to be thirty-some pages. Over the course of production, the page count grew to more than fifty. I think that fifty episodes (pages) are a bit too many for a chapter, since it takes almost a year for readers to get through a “season.” If we look at the weekly episodes in a typical TV drama, there are usually fewer than twenty, which is far fewer than even ten years ago. So in retrospect, fifty episodes could have been spread across two seasons. The time that it takes to create a page, even from a pre-designed script, is one of the challenges in writing and illustrating a lengthy graphic novel. Since the story began, I have had a lot of time to think about my characters, their behavior and my own futuristic prognostications. While this can be good, giving me extra time to refine, clarify or embellish the story, it can also be something of a curse as I look back and wish I had not committed a phrase or image to posterity. Perhaps they call that writer’s remorse. This conundrum also keeps things exciting, as I have introduced probably a dozen new characters, scenes, and extensive backstory to the storytelling. Some people might warn that this is a recipe for disaster, but I think that the upcoming changes make the story better, more suspenseful, and more engaging.

Since I have added a considerable number of pages and scenes to the beginning of Season 5, the episode count is climbing. It looks as though I am going to have to add a Season 7, and possibly a Season 8, before the story finally wraps up.


About the Envisionist

Scott Denison is an accomplished visual, brand, interior, and set designer. He is currently Assistant Professor and Foundations Coordinator for the Department of Design at The Ohio State University. He continues his research in design fiction that examines the design-culture relationship within future narratives and interventions. You can read his online graphic novel in weekly updates at http://thelightstreamchronicles.com. This blog contains commentary on all things future and often includes artist commentary on comic pages. You can find the author's professional portfolio at http://scottdenison(dot)com