Tag Archives: VR

Step inside The Lightstream Chronicles

Some time ago I promised to step inside one of the scenes from The Lightstream Chronicles. Today, to commemorate the debut of Season 5, I’m going to deliver on that promise, at least partially.

 

Background

The notion started after giving my students a tour of the Advanced Computing Center for Arts and Design (ACCAD)’s motion-capture lab. We were discussing VR, and sadly, despite all the recent hype, very few of us—including me—had ever experienced state-of-the-art virtual reality. On that tour, it occurred to me that through the past five years of continuous work on my graphic novel, a story built entirely in CG, I have a trove of scenes and scenarios that I could, in effect, step into. Of course, it is not that simple, as I have discovered this summer working with ACCAD’s animation specialist Vita Berezina-Blackburn. It turns out that my extremely high-resolution images are not ideally compatible with the Oculus pipeline.

The idea was, at first, a curiosity for me, but it became quickly apparent that there was another level of synergy with my work in guerrilla futures, a flavor of design fiction.

Design fiction, my focus of study, centers on the idea that, through prototypes and future narratives, we can engage people in thinking about possible futures, discuss and debate them, and instill the idea of individual agency in shaping them. Unfortunately, too much design fiction ends up in the theoretical realm, within the confines of the art gallery, academic conferences, or workshops. The instances are few where the general public receives a future experience to contemplate and consider. Indeed, it has been something of a lament for me that my work in future fiction through the graphic novel can be experienced as pure entertainment without acknowledging the deeper issues of its socio-techno themes. At the core of experiential design fiction as introduced by Stuart Candy (2010) is the notion that future fictions can be inserted into everyday life whether the recipient has asked for them or not. The technique is one method of making the future real enough for us to ask whether this is the future we want and, if not, what we might do about it now.

Through my recent meanderings with VR, I see that this idea of immersive futures could be an incredibly powerful method of delivering such experiences.

The scene from Season 1 that I selected for this test.

 

About the video
This video is a test. We had no idea what we would get after I stripped down a scene from Season 1, and then we had a couple of weeks of trial and error re-making my files to be compatible with the system. Since one of the things that separate The Lightstream Chronicles from your average graphic novel/webcomic is the fact that you can zoom in 5x to inspect every detail, it is not uncommon, for example, for me to have more than two hundred 4K textures in any given scene. It also allows me as the “director” to change it up and dolly in or out to focus on a character or object within a scene without a resulting loss in resolution. To me, one of the drawbacks in many video games is getting in close to inspect a resident artifact; objects usually start to “break up” into pixels the closer you get. However, in a real-time environment, you have to make concessions, at least for now, so that your textures render faster.
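To see why that texture load strains a real-time VR pipeline, here is a back-of-envelope sketch. The numbers are my own illustration, not measurements from the actual scene: two hundred uncompressed 4096×4096 RGBA textures add up to roughly 12.5 GiB before any compression or mipmapping, far more than a headset-ready GPU can comfortably hold.

```python
def texture_memory_gib(num_textures, side_px=4096, bytes_per_px=4):
    """Uncompressed footprint of a set of square RGBA textures, in GiB."""
    return num_textures * side_px * side_px * bytes_per_px / 2**30

# 200 raw 4K RGBA textures:
print(round(texture_memory_gib(200), 1))  # 12.5
```

Real-time engines get around this by compressing textures and streaming lower-resolution mip levels, which is exactly the kind of concession mentioned above.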

For this test, we didn’t apply all two hundred textures, just some essentials: the cordial glasses, the liquid in the bottle, and the array of floating transparent files that hover over Techman’s desk. We did apply the key texture that defines the environment, the rusty, perforated metal wall that encloses Techman’s “safe-room” and protects it from eavesdropping. There are lots of other little glitches beyond unassigned textures, such as intersecting polygons and dozens of needed lighting tweaks, that make this far from prime time.

In the average VR game, you move your controller forward through space while you are either seated or standing. Either way, in most cases you are stationary. What distinguishes this from most VR experiences is that I can physically walk through the scene. In this test, we were in the ACCAD motion capture lab.

Wearing the Oculus in the MoCap lab while Lakshika manages the tether.

I’m sure you have seen pictures of this sort of thing before, where performers strap on sensors to “capture their motions,” which are then translated to virtual CG characters. This was the space in which I was working. It has boundaries, however, so I had to map those boundaries, in scale, into my scene to be sure that the room and the characters fit within the area of the lab. Dozens of tracking devices around the lab read sensors on the Oculus headset and ensure that once I strap it on, I can move freely within the limits of the virtual space, and my movements are translated into the context of the virtual scene.
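The boundary problem described above amounts to a simple containment check: does the scene's walkable footprint, at a given scale, fit inside the lab's tracked area? This sketch uses invented dimensions; the actual lab and scene sizes are assumptions for illustration.

```python
def fits_in_lab(scene_width_m, scene_depth_m, lab_width_m, lab_depth_m, scale=1.0):
    """Check whether a scene's walkable footprint, at a given scale,
    fits inside the mocap lab's tracked area (all dimensions in meters)."""
    return (scene_width_m * scale <= lab_width_m and
            scene_depth_m * scale <= lab_depth_m)

# Hypothetical numbers: a 6 m x 5 m virtual room in a 9 m x 7 m tracked area.
print(fits_in_lab(6.0, 5.0, 9.0, 7.0))         # True: fits at full scale
print(fits_in_lab(12.0, 10.0, 9.0, 7.0, 0.5))  # True: fits when scaled to half size
```

Scaling the scene down is one way to fit a larger room into the capture volume, at the cost of no longer walking it at true size.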

Next week I’ll be going back into the lab with a new scene to take a look at Kristin Broulliard and Keiji in their exchange from episode 97 (page) of Season 3.

Next time.

Respond, reply, comment. Enjoy.

 


Adapt or plan? Where do we go from here?

I just returned from Nottingham, UK, where I presented a paper for Cumulus 16, In This Place. The paper was entitled Design Fiction: A Countermeasure For Technology Surprise. An Undergraduate Proposal. My argument hinged on the idea that students needed to start thinking about our technosocial future. Design fiction is my area of research, but if you were inclined to do so, you could probably choose a variant methodology to provoke discussion and debate about the future of design, what designers do, and their responsibility as creators of culture. In January, I had the opportunity to take an initial pass at such a class. The experiment was a different twist on a collaborative studio where students from the three traditional design specialties worked together on a defined problem. The emphasis was on collaboration rather than the outcome. Some students embraced this while others pushed back. The push-back came from students fixated on building a portfolio of “things” or “spaces” or “visual communications” so that they could impress prospective employers. I can’t blame them for that. As educators, we have hammered the old paradigm of getting a job at Apple or Google, or (fill in the blank) as the ultimate goal of undergraduate education. But the paradigm is changing, and the model of a designer as the maker of “stuff” is wearing thin.

A great little polemic from Cameron Tonkinwise appeared recently that helped to articulate this issue. He points the finger at interaction design scholars and asks why they are not writing about or critiquing “the current developments in the world of tech.” He wonders whether anyone is paying attention. As designers and computer scientists, we are feeding a pipeline of ever more minimally viable apps, with seemingly no regard for the consequences for social systems, and (one of my personal favorites) for the behaviors we engender through our designs.

I tell my students that it is important to think about the future. The usual response is, “We do!” When I drill deeper, I find that their thoughts revolve around getting a job, making a living, finding a home, and a partner. They rarely include global warming, economic upheavals, feeding the world, natural disasters, etc. Why? They view these issues as beyond their control. We do not choose these things; they happen to us. Nevertheless, these are precisely the predicaments that need designers. I would argue these concerns are far more important than another app to count my calories or select the location for my next sandwich.

There is a host of others like Tonkinwise who see that design needs to refocus, but often it seems that a greater number blindly plod forward, unaware of the futures they are creating. I’m not talking about refocusing designers to be better at business or programming languages; I’m talking about making designers more responsible for what they design. And like Tonkinwise, I agree that it needs to start with design educators.


The nature of the unpredictable.

 

Following up on last week’s post, I confessed some concern about technologies that progress too quickly and combine unpredictably.

Stewart Brand introduced the 1968 Whole Earth Catalog with, “We are as gods and might as well get good at it.”1 Thirty-two years later, he wrote that new technologies such as computers, biotechnology, and nanotechnology are self-accelerating, and that they differ from older, “stable, predictable and reliable,” technologies such as television and the automobile. Brand states that new technologies “…create conditions that are unstable, unpredictable and unreliable…. We can understand natural biology, subtle as it is because it holds still. But how will we ever be able to understand quantum computing or nanotechnology if its subtlety keeps accelerating away from us?”2 If we combine Brand’s concern with Kurzweil’s Law of Accelerating Returns, and if these technologies keep accelerating exponentially, as the evidence supports, will the result be, as Brand suggests, unpredictable?

Last week I discussed an article from WIRED Magazine on the VR/MR company Magic Leap. The author writes,

“Even if you’ve never tried virtual reality, you probably possess a vivid expectation of what it will be like. It’s the Matrix, a reality of such convincing verisimilitude that you can’t tell if it’s fake. It will be the Metaverse in Neal Stephenson’s rollicking 1992 novel, Snow Crash, an urban reality so enticing that some people never leave it.”

And it will be. It is, as I said last week, entirely logical to expect it.

We race toward these technologies with visions of mind-blowing experiences or life-changing cures, and usually, we imagine only the upside. We all too often forget the human factor. Let’s look at some other inevitable technological developments.
• Affordable DNA testing will tell you your risk of inheriting a disease or debilitating condition.
• You can ingest a pill that tells your doctor (or you, in case you forgot) that you took your medicine.
• Soon we will have life-like robotic companions.
• Virtual reality is affordable, amazingly real and completely user-friendly.

These are simple scenarios because they will likely have aspects that make them even more impressive, more accessible and more profoundly useful. And like most technological developments, they will also become mundane and expected. But along with them come the possibility of a whole host of unintended consequences. Here are a few.
• The government’s universal healthcare requires that citizens have a DNA test before they qualify.
• The system monitors whether you’ve taken your medication and issues a fine if you haven’t, even if you don’t want the medicine.
• A robotic, life-like companion can provide support and encouragement, but it could also become an outlet for violent behavior or abuse.
• The virtual world is so captivating and pleasurable that you don’t want to leave, to the point where it becomes addictive.

It seems as though whenever we involve human nature, we set ourselves up for unintended consequences. Perhaps it is not the nature of technology to be unpredictable; it is us.

1. Brand, Stewart. “WE ARE AS GODS.” The Whole Earth Catalog, September 1968, 1-58. Accessed May 04, 2015. http://www.wholeearth.com/issue/1010/article/195/we.are.as.gods.
2. Brand, Stewart. “Is Technology Moving Too Fast? Self-Accelerating Technologies-Computers That Make Faster Computers, For Example-May Have a Destabilizing Effect on Society.” TIME, 2000.

Powerful infant.

In previous blogs (such as this one), I have discussed the subject of virtual reality. Yesterday, I tried it. The motivation for my visit to the Advanced Computing Center for the Arts and Design (ACCAD), Ohio State’s cutting-edge technology and arts center, was a field trip for my junior Collaborative Studio design students. Their project this semester is to design a future system that uses emerging technologies. It is not hard to imagine that in the near future VR will be commonplace. We stepped inside a large, empty performance stage rigged with a dozen motion-capture cameras that could track your movements throughout virtual space. We looked at an experimental animation in which we could stand amidst the characters, and another work-in-progress that allowed us to step inside a painting. It wasn’t my first time in a Google Cardboard device, where I could look around at a 360-degree world (sensed by my phone’s gyroscope), but on an empty stage where you could walk amongst virtual characters, the experience took on a new dimension—literally. I found myself concerned about bumping into things that weren’t there and even getting a bit dizzy. (I did not let on in front of my students.)

I immediately saw an application for The Lightstream Chronicles and realized that I could load up one of my scenes from the graphic novel, bring it over to ACCAD’s mocap studio and step into this virtual world that I have created. I build all of my scenes (including architecture) to scale, furnish the rooms and interiors and provide for full 360º viewing. Building sets this way allows me to revisit them at any time, follow my characters around or move the camera to get a better angle without having to add walls that I might not have anticipated using. After the demo, I was pretty excited. It became apparent that this technology will enable me to see what my characters see, and stand beside them. It’s a bit mind-blowing. Now the question becomes which scene to use. Any ideas?

Clearly VR is in its infancy, but it is a very powerful infant. The future seems exciting, and I can see why people can get caught up in what the promises could be. Of course, I have to be the one to wonder at what this powerful infant will grow up to be.


A facebook of a different color.

The tech site Ars Technica recently ran an article on the proliferation of a little-known app called Facewatch. According to the article’s writer, Sebastian Anthony, “Facewatch is a system that lets retailers, publicans, and restaurateurs easily share private CCTV footage with the police and other Facewatch users. In theory, Facewatch lets you easily report shoplifters to the police, and to share the faces of generally unpleasant clients, drunks, etc. with other Facewatch users.” The idea is that retailers or officials can look out for these folks and either keep an eye on them or just ask them to leave. The system, in use in the UK, appears to have a high rate of success.

 

The story continues. Of course, all technologies eventually converge, so now you don’t have to “keep an eye out” for ne’er-do-wells; your CCTV can do it for you. NeoFace from NEC works with the Facewatch list to do the scouting for you. According to NEC’s website: “NEC’s NeoFace Watch solution is specifically designed to integrate with existing surveillance systems by extracting faces in real time… and matching against a watch list of individuals.” In this case, it would be the Facewatch database. Ars’ Anthony makes this connection: “In the film Minority Report, people are rounded up by the Precrime police agency before they actually commit the crime…with Facewatch, and you pretty much have the same thing: a system that automatically tars people with a criminal brush, irrespective of dozens of important variables.”
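Conceptually, a system like this reduces each face to a numeric embedding and compares it against embeddings on the watch list. The sketch below is a generic illustration of that idea, with made-up subject names, tiny two-number "embeddings," and an arbitrary threshold; it is not NEC's actual algorithm.

```python
import math

def cosine_similarity(a, b):
    """Similarity of two embedding vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def match_watchlist(probe, watchlist, threshold=0.8):
    """Return IDs whose stored embedding is close enough to the probe face."""
    return [subject_id for subject_id, emb in watchlist.items()
            if cosine_similarity(probe, emb) >= threshold]

# Hypothetical watch list and camera capture:
watchlist = {"subject_42": [1.0, 0.0], "subject_07": [0.0, 1.0]}
probe = [0.9, 0.1]
print(match_watchlist(probe, watchlist))  # ['subject_42']
```

Everything hinges on the threshold: set it too low and innocents match; set it too high and real subjects slip through. Those are exactly the kinds of variables Anthony worries the system ignores.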

Anthony points out that,

“Facewatch lets you share ‘subjects of interest’ with other Facewatch users even if they haven’t been convicted. If you look at the shop owner in a funny way, or ask for the service charge to be removed from your bill, you might find yourself added to the ‘subject of interest’ list.”

The odds of an innocent person being added to the watchlist are quite good. Malicious behavior aside, you could be logged as you wander past a government protest, forget your PIN too many times at the ATM, or simply look too creepy in your Ray-Bans and hoodie.

The story underscores a couple of my past rants. First, we don’t make laws to protect against things that are impossible, so when the impossible happens, we shouldn’t be surprised that there isn’t a law to protect against it.1 It is another red flag that technology is moving too fast, and as it converges with other technologies, it becomes radically unpredictable. Second, technology moves faster than politics, faster than policy, and often faster than ethics.2

There is a host of personal apps, many of which are available on our iPhones or Androids, that sit on the precarious line between legal and illegal, curious and invasive. And there are more to come.

 

1. Quoting Selinger from Wood, David. “The Naked Future — A World That Anticipates Your Every Move.” YouTube. YouTube, 15 Dec. 2013. Web. 13 Mar. 2014.
2. Quoting Richards from Farivar, Cyrus. “DOJ Calls for Drone Privacy Policy 7 Years after FBI’s First Drone Launched.” Ars Technica. September 27, 2013. Accessed March 13, 2014. http://arstechnica.com/tech-policy/2013/09/doj-calls-for-drone-privacy-policy-7-years-after-fbis-first-drone-launched/.

The foreseeable future.

From my perspective, the two most disruptive technologies of the next ten years will be a couple of acronyms: VR and AI. Virtual reality will transform the way people learn and the way they amuse themselves. It will play an increasing role in entertainment and gaming, to the extent that many will experience some confusion and conflict with actual reality. Make sure you see last week’s blog for more on this. Between VR and AI, so much is happening that these two could easily crowd out a host of other topics on this site next year. Today, I’ll begin the discussion with AI, but both technologies fall under my broader topic of the foreseeable future.

One of my favorite quotes of 2014 (seems like ancient history now) was from an article in Ars Technica by Cyrus Farivar.1 It was a story about the FBI’s drone program, which grew to the tune of $5 million gradually over a period of ten years, almost unnoticed. Farivar cites a striking quote from Neil Richards, a law professor at Washington University in St. Louis: “We don’t write laws to protect against impossible things, so when the impossible becomes possible, we shouldn’t be surprised that the law doesn’t protect against it…” I love that quote because we are continually surprised that we did not anticipate one thing or the other. Much of this surprise, I believe, comes from experts who tell us that this or that won’t happen in the foreseeable future. One of these experts, Miles Brundage, a Ph.D. student at Arizona State, was quoted recently in an article in WIRED. About AI that could surpass human intelligence, Brundage said,

“At the point where we are today, no AI system is at all capable of taking over the world—and won’t be for the foreseeable future.”

There are two things that strike me about these kinds of statements. First is the obvious fact that no one can see the future in the first place; second, the clear implication is that it will happen, just not yet. It also suggests that we shouldn’t be concerned; it’s too far away. The article was about Elon Musk open-sourcing something called OpenAI. According to Nathaniel Wood, reporting for WIRED, OpenAI is deep-learning code that Musk and his investors want to share with the world, for free. This news comes on the heels of Google’s open-sourcing of their AI code, called TensorFlow, immediately followed by a Facebook announcement that they would be sharing their Big Sur server hardware. As the article points out, this is not all magnanimous altruism. By opening the door to formerly proprietary software or hardware, folks like Musk and companies like Google and Facebook stand to gain. They gain by recruiting talent and by exponentially increasing development through free outsourcing; a thousand people working with your code are much better than the hundreds inside your building. Here are two very important factors that folks like Brundage don’t take into consideration. First, these people are in a race, and by open-sourcing their stuff, they are enlisting people to help them in that race. Second, there is that term, exponential. I use it most often when I refer to Kurzweil’s Law of Accelerating Returns. It is exactly these kinds of developments that make his prediction so believable. So maybe the foreseeable future is not that far away after all.
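That word, exponential, deserves a number. The toy sketch below is my own illustration of compounding, loosely the kind of growth the Law of Accelerating Returns describes, not Kurzweil's actual model: a process that doubles each period leaves linear progress behind almost immediately.

```python
def compound(start, factor, periods):
    """Project a capability that multiplies by `factor` each period."""
    series = [start]
    for _ in range(periods):
        series.append(series[-1] * factor)
    return series

linear = [1 + n for n in range(11)]   # adding 1 per step: ends at 11
exponential = compound(1, 2, 10)      # doubling per step: ends at 1024
print(linear[-1], exponential[-1])    # 11 1024
```

Ten steps of "just not yet" separate 11 from 1024, which is why "not in the foreseeable future" is such a fragile reassurance.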

All this being said, the future is not foreseeable, and the exponential growth in areas like VR and AI will continue. The WIRED article continues with this commentary on AI (which we all know):

“Deep learning relies on what are called neural networks, vast networks of software and hardware that approximate the web of neurons in the human brain. Feed enough photos of a cat into a neural net, and it can learn to recognize a cat. Feed it enough human dialogue, and it can learn to carry on a conversation. Feed it enough data on what cars encounter while driving down the road and how drivers react, and it can learn to drive.”
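The quoted idea, feed in labeled examples and learn a mapping, can be shown at toy scale. A single trained neuron below stands in for the vast networks the article describes; the two-number "images" and their cat/not-cat labels are invented purely for illustration.

```python
def train_neuron(samples, labels, epochs=200, lr=0.1):
    """Train a single neuron (a perceptron), a toy stand-in for the
    deep networks described in the quote."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred                       # 0 when correct, +/-1 when wrong
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def predict(x, w, b):
    """Classify a sample with the learned weights."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Invented two-number "images": high values stand for cat-like features.
samples = [[0.9, 0.8], [0.8, 0.9], [0.1, 0.2], [0.2, 0.1]]
labels = [1, 1, 0, 0]                            # 1 = cat, 0 = not cat
w, b = train_neuron(samples, labels)
```

Feed it enough examples and it draws the boundary itself; scale the same loop up by many orders of magnitude and you have the cat photos, dialogue, and driving data of the quote.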

Their benevolence aside, this is why Musk, Facebook, and Google are in the race. Musk is quick to add that while his motives have an air of transparency to them, it is also true that the more people who have access to deep-learning software, the less likely it is that one guy will have a monopoly on it.

Musk is a smart guy. He knows that AI could be a blessing or a curse. Open sourcing is his hedge. It could be a good thing… for the foreseeable future.

 

1. Farivar, Cyrus. “DOJ Calls for Drone Privacy Policy 7 Years after FBI’s First Drone Launched.” Ars Technica. September 27, 2013. Accessed March 13, 2014. http://arstechnica.com/tech-policy/2013/09/doj-calls-for-drone-privacy-policy-7-years-after-fbis-first-drone-launched/.

Harmless.

 

Once again, it has been a week where it is difficult to decide which present-future I should talk about. If you are a follower of The Lightstream Chronicles, then you know I am trying to write about more than science fiction. The story is indeed a cyberpunk-ish crime-thriller drama intended to entertain, but it is also a means of scrutinizing a future where all the problems we imagine that technology will solve often create new ones, subtle ones that end up re-engineering us. Many of these technologies start out as curiosities, entertainments, or diversions that are picked up by early-adopting technophiles and gradually end up in the mainstream.

One of these curiosities is the idea of wearable tech. Wristbands, watches, and other monitors are designed to keep track of what we do, remind us to do something, or, now in increasing popularity, remind us not to do something. One company, Chaotic Moon, is working on a series of tattoo-like monitors. These are temporary, press-on circuits that use the conductivity of your skin to help them work and transmit. They are called Tech Tats and self-classified as bio-wearables. In addition to their functional properties, they also have an aesthetic objective: a kind of tattoo. Still somewhat primitive, technologically and artistically, they nevertheless fall into this category of harmless diversions.

Monitoring little Susi’s temperature.

Of course, Chaotic Moon is hoping (watch the video) that they will become progressively more sophisticated and that their popularity will grow as both tech and fashion. Perhaps they should be called bio-fashion. If no one has already claimed this, then you saw it here first, folks. If you watch the video from Chaotic Moon, you’ll see the promise that these things (in a future iteration) will be used for transactions and should be considered safer than carrying around lots of credit cards. By the way, thieves are already hacking the little chip in your credit card that is supposed to be so much safer than the old non-chipped version. Sorry, I digress.

My brand of design fiction looks at these harmless diversions and asks, “What next?” and “What if?” I think most futurists agree that these kinds of implants will eventually move inside the body through simple injections or, in future versions, be constructed inside via nanobots. Under my scrutiny, two interesting things are at work here: first, the idea of wearing and then implanting technology, which clearly brings us across a transhuman threshold; and second, the idea of fashion as the subtle carrier of harmlessness and adoptive lure. You can probably imagine where I’m going with that.

Next up is VR. Virtual reality is something I blog about fairly often. In The Lightstream Chronicles, it has reached a level of sophistication that surpasses game controllers, boxes, and hardware. You simply dial your neocortex in to the Lightstream (the future Internet), and you are literally wherever you want to be, doing whatever your imagination can conjure up. In the story, I more or less predict that this total immersion becomes seriously addictive. Check out the prologue episodes to Season 4.

Thanks to one of my students for pointing out this video called the Uncanny Valley.

“I feel like I can be myself and not go to jail for it.”

You can watch it on Vimeo. Bring up the possible detrimental effects of video games with a gamer, and you’ll almost certainly hear the word harmless.

These are the design futures that I think about. What do you think?
