Superintelligence. Is it the last invention we will ever need to make?

I believe it is crucial that we move beyond preparing to adapt or react to the future and actively engage in shaping it.

An excellent example of this kind of thinking is Nick Bostrom’s TED talk from 2015.

Bostrom is concerned about the day when machine intelligence exceeds human intelligence (the guess is somewhere between twenty and thirty years from now). He points out that, “Once there is super-intelligence, the fate of humanity may depend on what the super-intelligence does. Think about it: Machine intelligence is the last invention that humanity will ever need to make. Machines will then be better at inventing [designing] than we are, and they’ll be doing so on digital timescales.”

His concern is legitimate. How do we control something that is smarter than we are? Anticipating AI will require more strenuous design thinking than the kind that produces the next viral game, app, or service, yet those applications are where the lion’s share of the money is going. When it comes to keeping us from becoming at best irrelevant or at worst an impediment to AI, Bostrom is guardedly optimistic. He thinks we could “[…]create an A.I. that uses its intelligence to learn what we value, and its motivation system is constructed in such a way that it is motivated to pursue our values or to perform actions that it predicts we would approve of.”

At the crux of his argument, and mine, is this: “Here is the worry: Making superintelligent A.I. is a really hard challenge. Making superintelligent A.I. that is safe involves some additional challenge on top of that. The risk is that if somebody figures out how to crack the first challenge without also having cracked the additional challenge of ensuring perfect safety.”

Beyond machine learning (which has many facets), there is a wide-ranging set of technologies, from genetic engineering to drone surveillance to next-generation robotics and even VR, that could be racing forward without anyone thinking about this “additional challenge.”

This could be an excellent opportunity for designers. But, to do that, we will have to broaden our scope to engage with science, engineering, and politics. More on that in future blogs.


Monitoring you.

One of my students is writing a paper on the rise of baby monitors and how technology has changed what and how we monitor. In the 80s, the baby monitor was essentially a walkie-talkie stuck in the “on” position; about all you could do was listen to breathing in another room. Today’s monitors link to your smartphone and add features like Bluetooth, night vision, motion detection, cloud storage, and pulse oximetry.

I started thinking about the point in a child’s life at which a parent might stop monitoring. Most day care centers now allow remote login so parents can watch what their toddlers are up to, and once kids are older and have a smartphone (for emergencies, of course), parents also have the ability to track their location. According to a 2016 study by the Pew Research Center, “[…]parents today report taking a number of steps to influence their child’s digital behavior, from checking up on what their teen is posting on social media to limiting the amount of time their child spends in front of various screens.”

Having raised kids in the digital age, this makes perfect sense to me. There are lots of dark alleys in the digital realm that can be detrimental to young eyes. Of course, when they went off to college, it made sense to accept that becoming an adult means being responsible for your own behavior. Needless to say, there are a lot of painful lessons on that journey.

Some say that the world is becoming an increasingly dangerous place. Should monitoring be something we become accustomed to, even for ourselves, all the time? What types of technologies might we accept to enable this? When should it stop? When do we need to know, and about whom?

What do you think?


Surveillance. Are we defenseless?

Recent advancements in AI, increasing exponentially in areas such as facial recognition, demonstrate a level of sophistication in surveillance that leaves most of us defenseless. There is a new transparency, and virtually every global citizen is a potential microbe under the microscope. I was blogging about this before I ever set eyes on the CBS drama Person of Interest, but the premise that surveillance could be ubiquitous is very real. The series depicts a master computer that sees everything, but the idea of gathering a networked feed of the world’s cameras and a host of other accessible devices into a central data facility, where AI sorts, analyzes, and learns what kind of behavior is potentially threatening, is well within reach. It isn’t even a stretch that something like it already exists.

As with most technologies, however, these do not exist in a vacuum. Technologies converge. Take, for example, a recent article in WIRED about how accurate facial recognition is becoming even when the subject is pixelated or blurred. A common tactic to obscure the identity of a video witness or an innocent bystander is to blur or pixelate the face; it’s a favored technique of Google Maps. Go to any big-city street view and you’ll see that Google has systematically obscured license plates and faces. Today these methods no longer hold up against state-of-the-art facial recognition systems.
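To make the tactic concrete, here is a minimal sketch of how pixelation typically works, assuming Python with the Pillow imaging library (the filename and face coordinates are hypothetical). The point is that downscaling throws away detail for human eyes, but the coarse pattern that survives is still enough signal for a trained recognition model to match.

```python
# Minimal sketch of the pixelation tactic described above (Pillow assumed).
# The input filename and face-box coordinates are hypothetical.
from PIL import Image

def pixelate_region(image, box, block_size=16):
    """Pixelate a rectangular region by downscaling, then upscaling."""
    region = image.crop(box)
    w, h = region.size
    # Shrink so each block of pixels collapses to roughly one pixel...
    small = region.resize((max(1, w // block_size), max(1, h // block_size)),
                          resample=Image.BILINEAR)
    # ...then blow it back up with no smoothing to produce the mosaic look.
    mosaic = small.resize((w, h), resample=Image.NEAREST)
    image.paste(mosaic, box)
    return image

img = Image.open("street_view.jpg")              # hypothetical input
img = pixelate_region(img, box=(120, 80, 280, 260))
img.save("street_view_obscured.jpg")
```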

The next flag is the escalating sophistication of hacker technology. One of the most common methods is malware. Through an email or website, malware can infect a computer and raise havoc. Criminals often use it to ransom a victim’s computer before removing the infection. But not all hackers are criminals, per se. The FBI is pushing for the ability to use malware to digitally wiretap or otherwise infiltrate potentially thousands of computers using only a single warrant. Ironically, FBI Director James Comey recently admitted that he puts tape over the camera on his personal laptop; I wrote about this a few weeks back. What does that say about the security of our laptops and devices?

Is the potential for destructive attacks on our devices so pervasive that the only defense we have is duct tape? The idea that the NSA can listen in on your phone even when it’s off goes back at least to Edward Snowden, and since 2014, experts have confirmed that the technology exists. In fact, some apps, albeit sketchy ones, purport to do exactly that. You won’t find them in the app store (for obvious reasons), but there are websites where you can click the “buy” button. According to one site, which doesn’t pass the legit-news-site test (note the use of “awesome” below), one of these apps promises that you can:

• Record all phone calls made and received, and listen to them at a later date
• Track the phone’s GPS location and see its exact position on a map on your computer
• See all sites opened in the phone’s web browser
• Read all messages sent and received on IM apps like Skype, WhatsApp, and the rest
• See all the passwords and logins to sites the person uses, thanks to the KeyLogger feature
• Open and close apps with the awesome “remote control” feature
• Read all SMS messages and see all photos sent and received via text message
• See all photos taken with the phone’s camera

“How it work” “ The best monitoring for protect family” — Sketchy, you think?

I visited one of these sites (above) and, frankly, I would never click a button on a website that can’t form a sentence in English, and I would not recommend that you do either. Earlier this year, the UK Independent published an article in which Kelli Burns, a mass communication professor at the University of South Florida, alleged that Facebook regularly listens to users’ phone conversations to see what people are talking about. Of course, she said she can’t be certain of that.

Nevertheless, it’s out there, and if it has not already happened, eventually some organization or government will find a way to network the access points and begin collecting information across a comprehensive matrix of data points. It would seem that we will have to find new forms of duct tape to manage whatever privacy we have left. I found a site that gives some helpful advice for determining whether someone is tapping your phone.

Good luck.




In a scene from the 1997 movie Gattaca, co-star Uma Thurman steals a follicle of hair from love-interest Ethan Hawke and takes it to the local DNA sequencing booth (presumably they’re everywhere, like McDonald’s) to find out if Hawke’s DNA is worthy of her affections. She passes the follicle in a paper-thin wrapper through a pass-through window as if she were buying a ticket for a movie. The attendant asks, “You want a full sequence?” Thurman confirms, then waits anxiously. Meanwhile, others step up to windows to submit their samples. A woman who just kissed her boyfriend has her lips swabbed and assures the attendant that the sample is only a couple of minutes old. In about a minute, Thurman receives a plastic tube with the results rolled up inside. Behind the glass, a voice says, “Nine point three. Quite a catch!”


In the futuristic society depicted in the movie, humans are either “valid” or “invalid.” Though discrimination based on your genetic profile is illegal and referred to as “genoism,” it is widely known to be a distinguishing factor in employment, promotion, and finding the right soul-mate.

Enter the story of Illumina, which I discovered by way of a FastCompany article earlier this week. Illumina is a hardware/software company. One might imagine them as the folks who make the fictitious machines behind the DNA booths in a science fiction future. Except they are already making them now. The company, which few of us have ever heard of, has 5,000 employees and more than $2 billion in annual revenues. Illumina’s products are selling like hotcakes, in both the clinical and consumer spheres.

“Startups have already entered the clinical market with applications for everything from “liquid biopsy” tests to monitor late-stage cancers (an estimated $1 billion market by 2020, according to the business consulting firm Research and Markets), to non-invasive pregnancy screenings for genetic disorders like Down Syndrome ($2.4 billion by the end of 2022).”

According to FastCo,

“Illumina has captured more than 70% of the sequencing market with these machines that it sells to academics, pharmaceutical companies, biotech companies, and more.”

You and I can do this right now. Companies like 23andMe will work up a profile of your DNA from a little bit of saliva sent through the mail. A few weeks after submitting your sample, these companies will send you a plethora of reports: your carrier status (for passing on inherited conditions), ancestry reports that track your origins, wellness reports such as your propensity to be fat or thin, and traits like blue eyes or a unibrow. All of this costs about $200. Considering that sequencing DNA on this scale was a pipe dream ten years ago, it’s kind of a big deal. They don’t sequence everything; that requires one of Illumina’s more sophisticated machines and costs about $3,000.

If you put this technology in the context of my last post about exponential technological growth, it is easy to see that the price of machines, the speed of analysis, and the cost of a report are only going to come down, and faster than we think. At this point, everything will be arriving faster than we think. Here, if only to get your attention, I ring the bell. Illumina is investing in companies that bring this technology to your smartphone. With one company, Helix, “A customer might check how quickly they metabolize caffeine via an app developed by a nutrition company. Helix will sequence the customers’ genomic data and store it centrally, but the nutrition company delivers the report back to the user.” The team from Helix predicts “[…]that the number of people who have been sequenced will drastically increase […]that it will be 90% of people within 20 years.” (So, probably ten years is a better guess.)
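To illustrate the trajectory with a back-of-the-envelope sketch (my assumption, not a figure from the article): if the roughly $200 consumer report and the roughly $3,000 full sequence were to halve in price every two years, the numbers fall fast.

```python
# Back-of-the-envelope projection of sequencing prices, assuming they
# halve every two years. The halving period is an illustrative
# assumption, not a figure from the FastCompany article.
def projected_price(start_price, years_out, halving_period=2.0):
    """Price after years_out, given exponential halving."""
    return start_price * 0.5 ** (years_out / halving_period)

for label, price in [("consumer report", 200), ("full sequence", 3000)]:
    for years in (5, 10, 20):
        print(f"{label}: ~${projected_price(price, years):,.0f} in {years} years")
```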

According to the article, the frontier for genomics is expanding.

“What comes next is writing DNA, and not just reading it. Gene-editing tools like CRISPR-Cas9 are making it cheaper and faster to move genes around, which has untold consequences for changing the environment and treating disease.”

CRISPR can do a lot more than that.

But, as usual, all of these developments focus on the bright side, the side that saves lives, and not the uncomfortable or unforeseen. There is the potential that your DNA will determine your insurance rates, or even whether you get insurance at all. Toying around with these realms, it is not difficult to imagine that you could “find anyone’s DNA” the way you can find anybody’s address or phone number. Maybe we see this feature incorporated into dating sites. You won’t have to steal a hair follicle from your date; it will already be online, and if they don’t publish it, certainly people will ask, “What do you have to hide?”

And then there’s the possibility that your offspring might inherit an unfavorable trait, like that unibrow or maybe Down Syndrome. So maybe those babies will never be born, or we’ll use CRISPR to make sure the nose is straight, the eyes are green, the skin is tan, and the IQ is way up there. CRISPR gene editing and splicing will be expensive, of course. Some will be able to afford it. The rest? Well, they’ll have to find a way to love their children, flaws and all. So here are my questions: Will this make us more human or less human? Will our DNA become just another way to judge each other on how smart, or thin, or good looking, or talented we are? Is it just another way to distinguish between the haves and have-nots?

If the apps are already in design, Uma Thurman may not have long to wait.



Transcendent Plan


One of my oft-quoted sources for future technology is Ray Kurzweil. A brilliant technologist, inventor, and futurist, Kurzweil seems to see it all very clearly, almost as though he were personally at the helm. Some of Kurzweil’s theses are crystal clear to me, such as an imminent approach toward the Singularity in a series of innocuous, “seemingly benign” steps. I also agree with his Law of Accelerating Returns,1 which posits that technology advances exponentially. In a recent interview with the Silicon Valley Business Journal, he nicely illustrated that idea.

“Exponentials are quite seductive because they start out sub-linear. We sequenced one ten-thousandth of the human genome in 1990 and two ten-thousandths in 1991. Halfway through the genome project, 7 ½ years into it, we had sequenced 1 percent. People said, “This is a failure. Seven years, 1 percent. It’s going to take 700 years, just like we said.” Seven years later it was done, because 1 percent is only seven doublings from 100 percent — and it had been doubling every year. We don’t think in these exponential terms. And that exponential growth has continued since the end of the genome project. These technologies are now thousands of times more powerful than they were 13 years ago, when the genome project was completed.”

Kurzweil says the same kinds of leaps are approaching for solar power, resources, disease, and longevity. Our tendency to think linearly instead of exponentially means that we can deceive ourselves into believing that technologies that “just aren’t there yet” are “a long way off.” In reality, they may be right around the corner.
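Kurzweil’s genome arithmetic is easy to verify yourself. A few lines of Python (a sketch of the math in his quote, nothing more) show why 1 percent, doubling every year, is only about seven doublings from done:

```python
# Reproducing the arithmetic in Kurzweil's quote: 1 percent of the
# genome, doubling every year, reaches 100 percent in ~7 doublings.
import math

pct, years = 1.0, 0
while pct < 100:
    pct *= 2                     # one doubling per year
    years += 1
    print(f"year {years}: {min(pct, 100):.0f}% sequenced")

# Closed form: log2(100 / 1) is about 6.64, i.e. seven doublings.
print("doublings needed:", math.ceil(math.log2(100)))
```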

I’m not as solid in my affirmation of Kurzweil (and others) when it comes to some of his other predictions. Without reading too much between the lines, you can see that there is a philosophy helping to drive Kurzweil: namely, he doesn’t want to die. Of course, who does? But his is a quest to deny death on a techno-transcendental level. Christianity holds that eternal life awaits the believer in Jesus Christ; other religions are satisfied that our atoms return to the greater cosmos, or that reincarnation is the next step. It would appear that Kurzweil has no time for faith. His bet is on science and technology. He states,

“I think we’re very much on track to have human-level AI by 2029, which has been my consistent prediction for 20 years, and then to be able to send nanobots into the brain in the 2030s and connect our biological neocortex to synthetic neocortex in the cloud.”

In the article mentioned above, Kurzweil states that his quest to live forever is not just about the 200-plus supplements that he takes daily. He refers to that regimen as “Bridge One.” Bridge One buys us time until technology catches up. Then “Bridge Two,” the “biotechnology revolution,” takes over and radically extends our life. If all else fails, our mind will be uploaded to the cloud (which will have evolved into a synthetic neocortex), though it remains to be seen whether the sum total of a mind also equals consciousness in some form.

For many who struggle with the idea of death, religious or not, I wonder whether, when we dissect it, it is not the fear of physical decrepitude that scares us but the loss of consciousness: that unique ability of humans to comprehend their world, share language and emotions, and create and contemplate.

I would pose that it is indeed that consciousness that makes us human (along with the sense of injustice at the thought that we might lose it). It would seem that transcendence is in order. In one scenario this transcendence comes from God; in another, “we are as Gods.”2

So finally, I wonder whether all of these small, exponentially replicating innovations, culminating in our accessing cloud data just by thinking, communicating via telepathy, or writing symphonies for eternity, will make us more or less human. If we decide that we are no happier, no more content or fulfilled, is there any going back?

Seeing as it might be right around the corner, we might want to think about these things now rather than later.


1. Kurzweil, R. (2001). The Law of Accelerating Returns. Kurzweil AI. Accessed October 10, 2015.
2. Brand, S. (1968). “We Are as Gods.” The Whole Earth Catalog, September 1968, 1–58. Accessed May 4, 2015.

Did we design this future?


I have held this discussion before, but a recent video from FastCompany reinvigorates a provocative aspect of our design future and raises the question: Is this a future we’ve designed? It centers on the smartphone. There are a lot of cool things about our smartphones: convenience, access, connectivity, and entertainment, just to name a few. It’s hard to believe that the very first iPhone went on sale just nine years ago, on June 29, 2007. It was an amazing device, and it’s no shocker that it took off like wildfire. According to stats site Statista, “For 2016, the number of smartphone users is forecast to reach 2.08 billion.” Indeed, we can say they are everywhere. In the world of design futures, the smartphone becomes Exhibit A of how an evolutionary design change can spawn a complex system.

Most notably, there are the millions of apps available to users that promise a better way to calculate tips, listen to music, sleep, drive, search, exercise, meditate, or create. Hence, there is a gigantic network of people who make their living supplying user services. These are direct benefits to society and commerce. No doubt, our devices have saved us countless hours of analog work, enabled us to manage our arrivals and departures, and kept us in contact (however tenuously) with our friends and acquaintances. Smartphones have helped us find people in distress and helped us locate persons with evil intent. But there are also unintended consequences, like legislation to keep us from texting and driving, because these actions have taken lives. There are issues with dependency and links to sleep disorders. Some lament the deterioration of one-on-one, face-to-face dialog and the distracted conversations at dinner or lunch. There are behavioral disorders, too. Since 2010 there have been a Smartphone Addiction Rating Scale (SARS) and the Young Internet Addiction Scale (YIAS). Overuse of mobile phones has prompted dozens of studies of adolescents as well as adults, and there are links to increased levels of ADHD and a variety of psychological disorders, including stress and depression.

So, while we rely on our phones for all the cool things they enable us to do, we are, in less than ten years, experiencing a host of unintended consequences. One of these concerns privacy. Whether it’s an Apple or another brand, the intricacies of smartphone technology are substantially the same. This video shows why your phone is so easy to hack: with the right tools, it is frighteningly simple to activate your phone’s microphone or camera, access your contact list, or track your location. What struck me most after watching the video was not how much we are at risk of being hacked, eavesdropped on, or perniciously viewed, but the comment from a woman on the street. She said, “I don’t have anything to hide.” She is not the first millennial I have heard say this. And that is what bothers me most, perhaps: our adaptability in the face of the slow, incremental erosion of what used to be our private space.

We can’t rest responsibility entirely on the smartphone. We have to include social media, going back to the days of (amusingly) MySpace. Sharing yourself with a group of close friends gradually gave way to the knowledge that a photo or bit of info might also get passed along to complete strangers. It wasn’t, perhaps, your original intention, but, oh well, it’s too late now. Maybe that’s when we decided that we had better get used to sharing our space, our photos (compromising or otherwise), our preferences, our adventures, and our misadventures with outsiders, even if they were creeps trolling for juicy tidbits. As we chalked up that seemingly benign modification of our behavior to adaptability, the first curtain fell. If someone is going to watch me, and there’s nothing I can do about it, then I may as well get used to it. We adjusted as a defense mechanism. Paranoia was the alternative, and no one wants to think of themselves as paranoid.

A few weeks ago, I posted an image of Mark Zuckerberg’s laptop with tape over the camera and microphone. Maybe he’s more concerned with privacy since his world is full of proprietary information. But, as we become more accustomed to being both constantly connected and potentially tracked or watched, when will the next curtain fall? If design is about planning, directing or focusing, then the absence of design would be ignoring, neglecting or turning away. I return to the first question in this post: Did we design this future? If not, what did we expect?


Defining [my] design fiction.


It’s tough to define something that is still so new that, in practice, there is no prescribed method and there are dozens of interpretations. I met some designers at a recent conference in Trento, Italy who insist they invented the term in 1995, but most authorities attribute the origin to Bruce Sterling in his 2005 book, Shaping Things. The book was not about design fiction per se. Sterling is fond of creating neologisms, and this was one of those (like the term ‘spime’) that appeared in that book. It caught on. Sometime later, Sterling sought to clarify it. His most-quoted definition is, “The deliberate use of diegetic prototypes to suspend disbelief about change.” If you rattle that off to most people, they look at you glassy-eyed. Fortunately, in 2013, Sterling went into more detail.

“‘Deliberate use’ means that design fiction is something that people do with a purpose. ‘Diegetic’ is from film and theatre studies. A movie has a story, but it also has all the commentary, scene-setting, props, sets and gizmos to support that story. Design fiction doesn’t tell stories — instead, it designs prototypes that imply a changed world. ‘Suspending disbelief’ means that design fiction has an ethics. Design fictions are fakes of a theatrical sort, but they’re not wicked frauds or hoaxes intended to rob or fool people. A design fiction is a creative act that puts the viewer into a different conceptual space — for a while. Then it lets him go. Design fiction has an audience, not victims. Finally, there’s the part about ‘change’. Awareness of change is what distinguishes design fictions from jokes about technology, such as over-complex Heath Robinson machines or Japanese chindogu (‘weird tool’) objects. Design fiction attacks the status quo and suggests clear ways in which life might become different.” (Sterling, 2013)

The above definition is the one on which I base most of my research. I’ve written before about what distinguishes it from science fiction, but I bring this up today because I frequently run into things that are labeled design fiction but are not. There are three non-negotiables for me: change, a critical eye on change, and suspending disbelief.

Part of the intent of design fiction is to get you to think about change. Things are going to change; design fiction implies a future. I suppose that doesn’t mean the fiction itself has to take place in the future; however, since we can’t go back in time, the only kind of change we’re going to encounter is the future variety. So, if the intent is to make us think, that thinking should have some redeeming benefit in the present, to make us better prepared for the future. Such as, “Wow. That future sucks. Let’s not let that happen.” Or, “Careful with that future scenario, it could easily go awry.” Like that.

A critical eye on change.
There are probably a lot of practitioners who would disagree with me on this point. The human race has a proclivity for messing things up. We develop things often in advance of actually thinking about what they might mean for society, or an economy, or our health, our environment, or our behavior. We design way too much stuff just because we can and because it might make us rich if we do. We need to think more before we act. It means we need to take some responsibility for what we design. Looking into the future with a critical eye on how things could go wrong or just on how wrong they might be without us noticing is a crucial element in my interpretation of intent.

Suspending disbelief
As Sterling says, the objective here is not to fool you but to get close enough to a realistic scenario that you accept it could happen. If it’s off-the-wall, WTF, conceptual art, absent any plausible existence, or sheer fantasy, it misses the point. I’m sure there’s a place for those, and no doubt a purpose, but call it something else, not design fiction. It’s the same reason that Star Wars is not design fiction: there’s design and there’s fiction, but a different intent.

I didn’t intend for this to turn into a rant, and it may all seem to you like splitting hairs, but these subtle differences are often important so that we know what we’re studying and why.

The nice thing about blogs is that if you have a different opinion, you can share.


Sterling, B. (2013). Design Fiction: “Patently Untrue.” WIRED. Accessed December 12, 2014.

Privacy or paranoia?


If you’ve been a follower of this blog for a while, then you know that I am something of a privacy wonk. I’ve written about it before (about a dozen times), and I’ve even built a research project (that you can enact yourself) around it. A couple of things transpired this week to remind me that privacy is tenuous. (It could also be all the back episodes of Person of Interest that I’ve been watching lately, or David Staley’s post last April about the Future of Privacy.) First, I received an email from a friend alerting me to a strong hint that my software is spying on me.

I’m old enough to remember when you purchased software as a set of CDs (or even disks). You loaded it on your computer, and it seemed to last for years before you needed to upgrade. Let’s face it: most of us use only a small subset of the features in our favorite applications. I remember using Photoshop 5 for quite a while before upgrading, and the same with the rest of what is now called the Adobe Creative Suite. I still use the primary features of Photoshop 5, Illustrator 10, and InDesign (ver. whatever) 90% of the time. In my opinion, the add-ons to those apps have just slowed things down, and of course, the expense has skyrocketed. Gone are the days when you could upgrade your software every couple of years; now you have to subscribe at a clip of about $300 a year for the Adobe Creative Suite. Apparently, the old business model was not profitable enough. But then came the Adobe Creative Cloud. (Sound of an angelic chorus.) Now it takes my laptop about 8 minutes to load into the cloud and boot up my software. Plus, it stores stuff. I don’t need it to store stuff for me; I have backup drives and archive software to do that.

Back to the privacy discussion. My friend’s email alerted me to this little tidbit hidden inside the Creative Cloud Account Manager.

Learn elsewhere, please.

Under the Security and Privacy tab, there are a couple of options. The first is Desktop App Usage. Here, you can turn this on or off. If it’s on, one of the things it tracks is,

“Adobe feature usage information, such as menu options or buttons selected.”

That means it logs which menu options and buttons you select, possibly only while you are using that particular app, but uh-uh, no thanks. Switch that off. Next up is a more pernicious option: Machine Learning. Hmm. We all know what that is; I’ve written about that before, too. Just do a search. Here, Adobe says,

“Adobe uses machine learning technologies, such as content analysis and pattern recognition, to improve our products and services. If you prefer that Adobe not analyze your files to improve our products and services, you can opt-out of machine learning at any time.”

Hey, Adobe, if you want to know how to improve your products and services, how about you ask me, or better yet, pay me to consult. A deeper dive into ‘machine learning’ tells me more. Here are a couple of quotes:

“Adobe uses machine learning technologies… For example, features such as Content-Aware Fill in Photoshop and facial recognition in Lightroom could be refined using machine learning.”

“For example, we may use pattern recognition on your photographs to identify all images of dogs and auto-tag them for you. If you select one of those photographs and indicate that it does not include a dog, we use that information to get better at identifying images of dogs.”

Facial recognition? Nope. Help me find dog pictures? Thanks, but I think I can find them myself.

I know how this works: the more data the machine can feed on, the better it becomes at learning. I would just rather Adobe get their data by searching it out themselves. I’m sure they’ll be okay. (After all, there are a few million people who never look at their account settings.) Also, keep in mind, it’s their machine, not mine.

The last item on my privacy rant just validated my paranoia. I ran across this picture of Facebook billionaire Mark Zuckerberg hamming it up for his FB page.



In the background is his personal laptop. Upon closer inspection, we see that Zuck has a piece of duct tape covering his laptop cam and the dual microphones on the side. He knows.



Go get a piece of tape.


News from a speculative future.

I thought I’d take a creative leap today with a bit of future fiction. Here is a short, speculative news article based on a plausible trajectory of science, tech, and social policy.

18 November 2018
The Senate passed a bill today that gives the green light to Omzin Corp. and the nation’s largest insurer, the federal government, to require digital monitoring of health care recipients. Omzin Corp. is the manufacturer of a microscopic additive that is slated to become part of all prescription drugs. When ingested, the additive reacts with stomach acids to transmit a signal to the insurer verifying that patients have taken their medication; the patient then naturally eliminates the tiny additive. A spokesperson for the federal government’s Universal Enrollment Plan (UEP) said that the Omzin additive was approved by the FDA as completely safe and would be a significant benefit to patients who have difficulty keeping track of complicated lists of medications and prescription requirements. A new ACAapp allows health care recipients to track on their mobile phones which drugs they have taken and when; the information is also transmitted to the health care provider. Opponents of the bill state that the legislation is a violation of privacy and tantamount to forced medication of the population. “Some people can avoid drugs through improved diet and exercise, but this bill pre-empts patients who would like to pursue that option,” according to Hub Garner, a representative of the citizens group MedChoice. “Patients who opt in to the program will pay less for their insurance than those who defy the order,” said the Senate minority leader.

Some definite possibilities here for future artifacts, i.e., design fictions. Remember, design fictions are provocations of possible futures for the purposes of discussion and debate. Plausible or outrageous? What do you think?


Step inside The Lightstream Chronicles

Some time ago I promised to step inside one of the scenes from The Lightstream Chronicles. To commemorate the debut of Season 5, which goes live today, I’m going to deliver on that promise, partially.



The notion started after giving my students a tour of the Advanced Computing Center for Arts and Design (ACCAD)’s motion-capture lab. We were discussing VR, and sadly, despite all the recent hype, very few of us, including me, had ever experienced state-of-the-art virtual reality. On that tour, it occurred to me that through the past five years of continuous work on my graphic novel, a story built entirely in CG, I have amassed a trove of scenes and scenarios that I could, in effect, step into. Of course, it is not that simple, as I have discovered this summer working with ACCAD’s animation specialist Vita Berezina-Blackburn. It turns out that my extreme high-resolution images are not ideally compatible with the Oculus pipeline.

The idea was, at first, a curiosity for me, but it became quickly apparent that there was another level of synergy with my work in guerrilla futures, a flavor of design fiction.

Design fiction, my focus of study, centers on the idea that through prototypes and future narratives we can engage people in thinking about possible futures, discussing and debating them, and instilling the idea of individual agency in shaping them. Unfortunately, too much design fiction ends up in the theoretical realm, within the confines of the art gallery, academic conferences, or workshops. The instances are few where the general public receives a future experience to contemplate and consider. Indeed, it has been something of a lament for me that my work in future fiction through the graphic novel can be experienced as pure entertainment without acknowledging the deeper issues of its socio-techno themes. At the core of experiential design fiction, introduced by Stuart Candy (2010), is the notion that future fictions can be inserted into everyday life whether the recipient has asked for them or not. The technique is one method of making the future real enough for us to ask whether this is the future we want and, if not, what we might do about it now.

Through my recent meanderings with VR, I see that this idea of immersive futures could be an incredibly powerful method of infusing everyday life with these experiences.

The scene from Season 1 that I selected for this test.


About the video
This video is a test. We had no idea what we would get after I stripped down a scene from Season 1; then we had a couple of weeks of trial and error re-making my files to be compatible with the system. One of the things that separates The Lightstream Chronicles from your average graphic novel or webcomic is the fact that you can zoom in 5x to inspect every detail, so it is not uncommon for me to have more than two hundred 4K textures in any given scene. That resolution also allows me, as the “director,” to change things up and dolly in or out to focus on a character or object within a scene without a loss in resolution. To me, one of the drawbacks of many video games is getting in close to inspect a resident artifact: objects usually start to “break up” into pixels the closer you get. In a real-time environment, however, you have to make concessions, at least for now, to make your textures render faster.

For this test, we didn’t apply all two hundred textures, just some essentials: the cordial glasses, the liquid in the bottle, and the array of floating transparent files that hover over Techman’s desk. We did apply the key texture that defines the environment: the rusty, perforated metal wall that encloses Techman’s “safe-room” and protects it from eavesdropping. There are lots of other little glitches beyond unassigned textures, such as intersecting polygons and dozens of lighting tweaks, that make this far from prime time.
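For the curious, “re-making my files” mostly means trading resolution for frame rate. A minimal sketch of that kind of concession, assuming Python with the Pillow library, is below; the folder names and 1K target are hypothetical stand-ins, not the actual ACCAD pipeline.

```python
# Hedged sketch of one real-time concession: batch-downscaling 4K
# textures to a 1K edge length for the VR build. Folder names and
# target size are hypothetical, not the actual ACCAD pipeline.
from pathlib import Path
from PIL import Image

SRC = Path("textures_4k")       # hypothetical source folder
DST = Path("textures_1k")       # hypothetical output folder
TARGET = 1024                   # real-time-friendly edge length

DST.mkdir(exist_ok=True)
for tex in sorted(SRC.glob("*.png")):
    img = Image.open(tex)
    img.thumbnail((TARGET, TARGET))   # downscales in place, keeps aspect ratio
    img.save(DST / tex.name)
    print(f"{tex.name}: now {img.size}")
```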

In the average VR game, you move your controller forward through space while you are either seated or standing; either way, in most cases you are stationary. What distinguishes this experience from most VR is that I can physically walk through the scene. For this test, we were in the ACCAD motion capture lab.

Wearing the Oculus in the MoCap lab while Lakshika manages the tether.

I’m sure you have seen pictures of this sort of thing before, where characters strap on sensors to “capture their motions” and translate them to virtual CG characters. This was the space in which I was working. It has boundaries, however, so I had to obtain those boundaries, in scale with my scene, to be sure that the room and the characters were within the area of the lab. Dozens of tracking devices around the lab read sensors on the Oculus headset and ensure that once I strap it on, I can move freely within the limits of physical space while my movements are related to the context of the virtual scene.
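The scale check itself is simple geometry: does the virtual room, converted to meters, fit inside the capture volume? Here is a sketch of the idea, with hypothetical stand-in dimensions for both the lab and the scene:

```python
# Sketch of the boundary check described above. All dimensions are
# hypothetical stand-ins for the real lab and scene measurements.
LAB_W, LAB_D = 9.0, 7.0            # capture volume, in meters
SCENE_W, SCENE_D = 8.2, 6.5        # virtual room, in scene units
SCENE_UNITS_PER_METER = 1.0        # scale factor chosen for the test

room_w = SCENE_W / SCENE_UNITS_PER_METER   # room footprint in meters
room_d = SCENE_D / SCENE_UNITS_PER_METER

if room_w <= LAB_W and room_d <= LAB_D:
    print("Scene fits: the whole room can be walked physically.")
else:
    print("Scene exceeds the lab; rescale or re-center before capture.")
```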

Next week I’ll be going back into the lab with a new scene to take a look at Kristin Broulliard and Keiji in their exchange from episode 97 of Season 3.

Next time.

Respond, reply, comment. Enjoy.



About the Envisionist

Scott Denison is an accomplished visual, brand, interior, and set designer. He is currently Assistant Professor and Foundations Coordinator for the Department of Design at The Ohio State University. He continues his research in design fiction, examining the design-culture relationship within future narratives and interventions. You can read his online graphic novel in weekly updates. This blog contains commentary on all things future and often includes artist commentary on comic pages. You can find the author's professional portfolio at http://scottdenison(dot)com