
Of Threatcasting

Until a Google alert came through my email this week, I have to admit, I had never heard the term threatcasting. I clicked through to an article in Slate that gave me an overview, and when I discovered that threatcasting is a blood relative of guerrilla futures, I was more than intrigued. First, let’s bring you up to speed on threatcasting, and then I will remind my readers about this guerrilla business.

The Slate article was written by futurist Brian David Johnson, formerly of Intel and now in residence at Arizona State University, and Natalie Vanatta, a U.S. Army Cyber officer with a Ph.D. in applied mathematics, currently researching in a military think tank. These folks are in the loop, and kudos to ASU for being a leader in bringing thought leaders, creators, and technologists together to look at the future. According to the article, threatcasting is “… a conceptual process used to envision and plan for risks 10 years in the future.” If you know what my research focus is, then you know we are already on the same page. The two writers work with “Arizona State University’s Threatcasting Lab, whose mission is to use threatcasting to envision futures that empower actions.” The lab creates future scenarios that bring together “… experts in social science, technology, economics, cultural history, and other fields.” Their future scenarios have inspired companies like Cisco, through the Cisco Hyperinnovation Living Labs (CHILL), to create a two-day summit to look at countermeasures for threats to the Internet of Things. They also work with the “… U.S. Army Cyber Institute, a military think tank tasked to prepare for near-future challenges the Army will face in the digital domain.” The article continues:

“The threatcasting process might generate only negative visions if we stopped here. However, the group then use the science-fiction prototype to explore the factors and events that led to the threat. This helps them think more clearly how to disrupt, lessen, or recover from the potential threats. From this the group proposes short-term, actionable steps to implement today to nudge society away from potential threats.”

So, as I have already said, this is a very close cousin of my brand of design fiction. Where it differs is in its focus on threats: the downsides and unintended consequences of many of the technologies that we take for granted. Of course, design fiction can do this as well, but design fiction has many flavors, and not all of them deal with future downsides.

Design fictions, however, are supposed to be provocations, and I am an advocate of the idea that tension creates the most successful provocations. We could paint utopian futures, a picture of what the world will be like should everything work out flawlessly, but that is not the essential ingredient of my brand of design fiction, nor is it the real nature of things. My practice is not altogether dystopian either, because our future will likely be neither one nor the other, but rather a combination that includes elements of both. I posit that our greatest possible impact will come from examining the errors that inevitably accompany progress and change. These don’t have to be apocalyptic. Sometimes they are subtle and mundane. They creep up on us until one day we realize that we have changed.

As for guerrilla futures, the term comes from futurist and scholar Stuart Candy. Here the idea is to insert the future into the present “to expose publics to possibilities that they are unable or unwilling to give proper consideration. Whether or not they have asked for it.” All of this to raise awareness of the future, to discuss and debate it in the present. My provocations are a bit more subtle and less nefarious than those of the threatcasting folks. Rather than terrorist attacks or hackers shutting down the power grid, I focus on the more nuanced possibilities of our techno-social future: things like ubiquitous surveillance, the loss of privacy, and our subtly changing behaviors.

Nevertheless, I applaud this threatcasting business. We need more of it, and there’s plenty of room for both of us.


Ethical tech.

Though I tinge most of my blogs with ethical questions, the last time I brought up this topic specifically on this blog was back in 2015. I guess I am ready to give it another go. Ethics is a tough topic. If we deal with it purely superficially, ethics seems natural, like common sense, or simply the right thing to do. But if that’s the case, why do so many people do the wrong thing? Things get even more complicated when we move into institutionally complex issues like banking, government, technology, genetics, health care, or national defense, just to name a few.

The last time I wrote about this, I highlighted Michael Sandel, professor of government at Harvard, where he teaches a wildly popular course called “Justice.” Back then, I was glad to see that the big questions were still being addressed in places like Harvard. Some of his questions, which came from a FastCo article, were:

“Is it right to take from the rich and give to the poor? Is it right to legislate personal safety? Can torture ever be justified? Should we try to live forever? Buy our way to the head of the line? Create perfect children?”

These are undoubtedly important and prescient questions to ask, especially as we begin to confront technologies that make things formerly inconceivable or plainly impossible not only possible but likely.

So I was pleased to see, last month, an op-ed piece in WIRED by Susan Liautaud, founder of The Ethics Incubator. Susan is about as closely aligned with my tech concerns as anyone I have read, and she brings solid thinking to the issues.

“Technology is approaching the man-machine and man-animal boundaries. And with this, society may be leaping into humanity defining innovation without the equivalent of a constitutional convention to decide who should have the authority to decide whether, when, and how these innovations are released into society. What are the ethical ramifications? What checks and balances might be important?”

Her comments are right in line with my research and co-research into Humane Technologies. Liautaud continues:

“Increasingly, the people and companies with the technological or scientific ability to create new products or innovations are de facto making policy decisions that affect human safety and society. But these decisions are often based on the creator’s intent for the product, and they don’t always take into account its potential risks and unforeseen uses. What if gene-editing is diverted for terrorist ends? What if human-pig chimeras mate? What if citizens prefer to see birds rather than flying cars when they look out a window? (Apparently, this is a real risk. Uber plans to offer flight-hailing apps by 2020.) What if Echo Look leads to mental health issues for teenagers? Who bears responsibility for the consequences?”

For me, the answer to that last question is all of us. We should not rely on business and industry to make these decisions, nor expect our government to do it. We have to become involved in these issues at the public level.

Michael Sandel believes that the public is hungry for these issues, but we tend to shy away from them. They can be confrontational and divisive, and no one wants to make waves or be politically incorrect. That’s a mistake.

An image from the future. A student design fiction project that examined ubiquitous AR.

So while the last thing I want is a politician or CEO making these decisions, these two constituencies could do the responsible thing and create forums for these discussions so that the public can weigh in on them. To do anything less borders on arrogance.

Ultimately we will have to demand this level of thought, beginning with ourselves. This responsibility should start with anticipatory methodologies that examine the social, cultural and behavioral ramifications, and unintended consequences of what we create.

But we should not fight this alone. Corporations and governments concerned with appearing sensitive and proactive toward the environment and social justice need to add a new pillar to their edifice as responsible global citizens: humane technology.


Augmented evidence. It’s a logical trajectory.

A few weeks ago I gushed about how my students killed it at a recent guerrilla future enactment of a ubiquitous Augmented Reality (AR) future. Shortly after that, Mark Zuckerberg announced the Facebook AR platform, which, according to a recent WIRED article, uses the camera to transform your smartphone into an AR engine.

Unfortunately, as we all know (and so does Zuck), the smartphone isn’t currently much of an engine. AR requires a lot of processing, and so does the AI that allows it to recognize the real world so it can layer additional information on top of it. That’s why Facebook (and others) are building their own neural network chips, so that the platform doesn’t have to run to the cloud to access the processing required for Artificial Intelligence (AI). That will inevitably happen, and it will make the smartphone experience more seamless, but that’s just part of the challenge for Facebook.
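To see why that round trip to the cloud matters, here is a back-of-the-envelope sketch in Python. All of the latency figures are hypothetical, purely to illustrate the frame-budget arithmetic behind on-device inference:

```python
# Back-of-the-envelope sketch (all numbers hypothetical) of why AR pushes
# inference onto the device: at 30 fps, each frame has a ~33 ms budget.

FRAME_BUDGET_MS = 1000 / 30  # ~33.3 ms per frame at 30 fps

def per_frame_latency_ms(inference_ms, network_round_trip_ms=0.0):
    """Total time to recognize the world for a single camera frame."""
    return inference_ms + network_round_trip_ms

# Cloud pipeline: fast datacenter inference, but every frame pays the network.
cloud = per_frame_latency_ms(inference_ms=5.0, network_round_trip_ms=80.0)

# On-device pipeline: a slower mobile chip, but no network cost at all.
on_device = per_frame_latency_ms(inference_ms=20.0)

for name, latency in (("cloud", cloud), ("on-device", on_device)):
    verdict = "fits" if latency <= FRAME_BUDGET_MS else "blows"
    print(f"{name}: {latency:.0f} ms/frame ({verdict} the {FRAME_BUDGET_MS:.0f} ms budget)")
```

With those made-up numbers, the cloud path blows the budget on network time alone, which is the argument for putting the neural network silicon in the phone.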

If you add to that the idea that we become even more dependent on looking at our phones while we are walking or, worse, driving (think Pokémon GO), then this latest announcement is, at best, foreshadowing.

As the WIRED article continues, tech writer Brian Barrett talked to Blair MacIntyre of Georgia Tech, who says:

“‘The phone has generally sucked for AR because holding it up and looking through it is tiring, awkward, inconvenient, and socially unacceptable,’ says MacIntyre. Adding more of it doesn’t solve those issues. It exacerbates them. (The exception might be the social acceptability part; as MacIntyre notes, selfies were awkward until they weren’t.)”

That last part is an especially interesting point. I’ll have to come back to that in another post.

My students did considerable research on exactly this kind of early infancy that technologies undergo on their road to ubiquity. In another WIRED article, even Zuckerberg admitted:

“We all know where we want this to get eventually,” said Zuckerberg in his keynote. “We want glasses, or eventually contact lenses, that look and feel normal, but that let us overlay all kinds of information and digital objects on top of the real world.”

So there you have it. Glasses are the end game, but as my students agreed, contact lenses not so much. Think about it: if you didn’t have to stick a contact lens in your eyeball, you wouldn’t. And even if you solved the problem of computing inside a wafer-thin lens, along with the myriad problems of heat and in-eye time, ubiquitous contact lenses are much farther away, if they ever arrive.

Student design team from Ohio State’s Collaborative Studio.

This is why I find my students’ solution so much more elegant and a far more logical trajectory. According to Barrett:

“The optimistic timeline for that sort of tech, though, stretches out to five or 10 years. In the meantime, then, an imperfect solution takes the stage.”

My students locked it down to seven years.

Finally, Zuckerberg made this statement:

“Augmented reality is going to help us mix the digital and physical in all new ways,” said Zuckerberg at F8. “And that’s going to make our physical reality better.”

Except that Zuck’s version of better and mine or yours may not be the same. Exactly what is wrong with reality anyway?

If you want to see the full-blown presentation of what my students produced, you can view it at aughumana.net.

Note: Currently the AugHumana experience is superior on Google Chrome. If you are a Safari or Firefox purist, you may have to wait for the page to load (up to 2 minutes). We’re working on a fix and hope to have it resolved soon; in the meantime, just use Chrome.


A guerrilla future realized.

This week my brilliant students in Collaborative Studio 4650 staged a real-world guerrilla future for the Humane Technologies: Livable Futures Pop-Up Collaboration at The Ohio State University. The design fiction was replete with diegetic prototypes and a video enactment. Our goal was to present a believable future in 2024, when ubiquitous AR glasses are part of our mundane everyday. We made the presentation in Sullivant Hall’s Barnett Theater, and each member of the team had a set of mock AR glasses. The audience consisted of about 50 students, ranging from the humanities to business. It was an amazing experience. It holds untold riches for my design fiction research, but there were also a lot of revelations about how we experience and enfold technology. After the presentation, we pulled out the white paper and markers and divided up into groups for a more detailed deconstruction of what transpired. While I have not plowed through all the scrolls that resulted from the post-presentation discussion groups, it seems universal that we can recognize how technology is apt to modify our behavior. It is also interesting to see that most of us have no clue how to resist these changes. Julian Oliver wrote in The Critical Engineering Manifesto (2011):

“5. The Critical Engineer recognises that each work of engineering engineers its user, proportional to that user’s dependency upon it.”

The idea of being engineered by our technology was evident throughout the AugHumana presentation video, and in the discussions we quickly identified the ways in which our current technological devices engineer us. At the same time, we feel more or less powerless to change or affect that phenomenon. Indeed, we have come to accept these small, incremental, seemingly mundane changes to our behavior as innocent, or adaptive in a positive way. En masse, they are neither. Kurzweil stated:

“We are not going to reach the Singularity in some single great leap forward, but rather through a great many small steps, each seemingly benign and modest in scope.”

History has shown that these steps are incrementally embraced by society and often give way to systems with a life of their own. An idea raised in one discussion group was labeled effective dissent, but it seems almost obvious that unless we anticipate these imminent behavioral changes, by the time we notice them it is already too late, either because the technology is already ubiquitous or because our habits and procedures solidly support the new behavior.

There are ties here to material culture and the philosophy of technology that merit more research, but the propensity for technology to affect behavior in an inhumane way is powerful. These are early reflections, no doubt to be continued.


Step inside The Lightstream Chronicles

Some time ago I promised to step inside one of the scenes from The Lightstream Chronicles. To commemorate the debut of Season 5, which goes live today, I’m going to deliver on that promise, at least partially.


Background

The notion started after giving my students a tour of the Advanced Computing Center for Arts and Design (ACCAD)’s motion-capture lab. We were discussing VR, and sadly, despite all the recent hype, very few of us, including me, had ever experienced state-of-the-art Virtual Reality. During that tour, it occurred to me that through the past five years of continuous work on my graphic novel, a story built entirely in CG, I have amassed a trove of scenes and scenarios that I could, in effect, step into. Of course, it is not that simple, as I have discovered this summer working with ACCAD’s animation specialist Vita Berezina-Blackburn. It turns out that my extremely high-resolution images are not ideally compatible with the Oculus pipeline.

The idea was, at first, a curiosity for me, but it became quickly apparent that there was another level of synergy with my work in guerrilla futures, a flavor of design fiction.

Design fiction, my focus of study, centers on the idea that through prototypes and future narratives we can engage people in thinking about possible futures, get them to discuss and debate those futures, and instill the idea of individual agency in shaping them. Unfortunately, too much design fiction ends up in the theoretical realm, within the confines of the art gallery, academic conferences, or workshops. The instances are few where the general public receives a future experience to contemplate and consider. Indeed, it has been something of a lament for me that my work in future fiction through the graphic novel can be experienced as pure entertainment without acknowledging the deeper issues of its socio-techno themes. At the core of the experiential design fiction introduced by Stuart Candy (2010) is the notion that future fictions can be inserted into everyday life whether the recipient has asked for them or not. The technique is one method of making the future real enough for us to ask whether this is the future we want, and if not, what we might do about it now.

Through my recent meanderings in VR, I see that this idea of immersive futures could be an incredibly powerful method of delivering these experiences.

The scene from Season 1 that I selected for this test.


About the video
This video is a test. We had no idea what we would get after I stripped down a scene from Season 1, and then we had a couple of weeks of trial and error re-making my files to be compatible with the system. One of the things that separates The Lightstream Chronicles from your average graphic novel/webcomic is that you can zoom in 5x to inspect every detail, so it is not uncommon for me to have more than two hundred 4K textures in any given scene. That density also allows me, as the “director,” to change things up and dolly in or out to focus on a character or object within a scene without a resulting loss in resolution. To me, one of the drawbacks of many video games is getting in close to inspect a resident artifact: things usually start to “break up” into pixels the closer you get. In a real-time environment, however, you have to make concessions, at least for now, to make your textures render faster.
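To make that concession concrete, here is a minimal sketch, assuming the Pillow imaging library and hypothetical folder names, of what batch-downscaling 4K source textures for a real-time build might look like:

```python
# Minimal sketch (hypothetical paths; assumes the Pillow library) of the
# real-time concession described above: batch-downscaling 4K source
# textures to a resolution a real-time engine can render quickly.

from pathlib import Path
from PIL import Image

SOURCE_DIR = Path("textures_4k")        # hypothetical folder of 4K sources
TARGET_DIR = Path("textures_realtime")  # downscaled copies for the VR build
TARGET_SIZE = (1024, 1024)              # a common real-time compromise

TARGET_DIR.mkdir(exist_ok=True)
for tex in sorted(SOURCE_DIR.glob("*.png")):
    img = Image.open(tex)
    # Lanczos resampling keeps more detail than cheaper filters.
    img.resize(TARGET_SIZE, Image.LANCZOS).save(TARGET_DIR / tex.name)
    print(f"{tex.name}: {img.size} -> {TARGET_SIZE}")
```

The trade is exactly the one described above: a quarter of the pixels per texture in exchange for frame rates the headset can live with.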

For this test, we didn’t apply all two hundred textures, just some essentials: for example, the cordial glasses, the liquid in the bottle, and the array of floating transparent files that hover over Techman’s desk. We did apply the key texture that defines the environment: the rusty, perforated metal wall that encloses Techman’s “safe room” and protects it from eavesdropping. Beyond unassigned textures, there are lots of other little glitches, such as intersecting polygons and dozens of pending lighting tweaks, that make this far from prime time.

In the average VR game, you move your controller forward through space while you are either seated or standing; either way, in most cases you are stationary. What distinguishes this from most VR experiences is that I can physically walk through the scene. In this test, we were in the ACCAD motion-capture lab.

Wearing the Oculus in the MoCap lab while Lakshika manages the tether.

I’m sure you have seen pictures of this sort of thing before, where actors strap on sensors to capture their motions and translate them to virtual CG characters. This was the space in which I was working. It has boundaries, however, so I had to obtain those boundaries, scaled to my scene, to be sure that the room and the characters were within the area of the lab. Dozens of tracking devices around the lab read sensors on the Oculus headset, ensuring that once I strap it on, I can move freely within the limits of the physical space while my movements are related to the context of the virtual scene.
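The mapping itself is simple arithmetic. Here is a minimal sketch, with hypothetical lab dimensions and a made-up scene scale, of checking that a set fits inside the tracked volume:

```python
# Minimal sketch (hypothetical dimensions and scale) of mapping the MoCap
# lab's walkable floor into scene units and checking that a set fits.

LAB_FLOOR_METERS = (10.0, 8.0)   # hypothetical tracked floor: width x depth
SCENE_UNITS_PER_METER = 100.0    # hypothetical scene scale (1 unit = 1 cm)

def lab_bounds_in_scene_units():
    """The lab's walkable footprint expressed in scene units."""
    width_m, depth_m = LAB_FLOOR_METERS
    return width_m * SCENE_UNITS_PER_METER, depth_m * SCENE_UNITS_PER_METER

def set_fits_in_lab(set_width, set_depth):
    """True if a set's walkable footprint fits inside the tracked volume."""
    max_w, max_d = lab_bounds_in_scene_units()
    return set_width <= max_w and set_depth <= max_d

# A safe-room-sized set, in scene units (hypothetical numbers):
print(set_fits_in_lab(set_width=750.0, set_depth=600.0))  # True: walkable
```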

Next week I’ll be going back into the lab with a new scene to take a look at Kristin Broulliard and Keiji in their exchange from episode 97 (page), Season 3.

Next time.

Respond, reply, comment. Enjoy.
