
Of Threatcasting

Until a Google alert came through my email this week, I have to admit, I had never heard the term threatcasting. I clicked through to an article in Slate that gave me the overview, and when I discovered that threatcasting is a blood relative of guerrilla futures, I was more than intrigued. First, let’s bring you up to speed on threatcasting, and then I will remind my readers about this guerrilla business.

The Slate article was written by futurist Brian David Johnson, formerly of Intel and now in residence at Arizona State University, and Natalie Vanatta, a U.S. Army Cyber officer with a Ph.D. in applied mathematics who is currently researching in a military think tank. These folks are in the loop, and kudos to ASU for being a leader in bringing thought leaders, creators, and technologists together to look at the future. According to the article, threatcasting is “… a conceptual process used to envision and plan for risks 10 years in the future.” If you know what my research focus is, then you know we are already on the same page. The two writers work with “Arizona State University’s Threatcasting Lab, whose mission is to use threatcasting to envision futures that empower actions.” The lab creates future scenarios that bring together “… experts in social science, technology, economics, cultural history, and other fields.” Their future scenarios have inspired companies like Cisco, through the Cisco Hyperinnovation Living Labs (CHILL), to create a two-day summit to look at countermeasures for threats to the Internet of Things. They also work with the “… U.S. Army Cyber Institute, a military think tank tasked to prepare for near-future challenges the Army will face in the digital domain.” The article continues:

“The threatcasting process might generate only negative visions if we stopped here. However, the group then use the science-fiction prototype to explore the factors and events that led to the threat. This helps them think more clearly how to disrupt, lessen, or recover from the potential threats. From this the group proposes short-term, actionable steps to implement today to nudge society away from potential threats.”

So, as I have already said, this is a very close cousin of my brand of design fiction. Where it differs is that it focuses on threats, the downsides and unintended consequences of many of the technologies that we take for granted. Of course, design fiction can do this as well, but design fiction has many flavors, and not all of them deal with future downsides.

Design fictions, however, are supposed to be provocations, and I am an advocate of the idea that tension creates the most successful provocations. We could paint utopian futures, a picture of what the world will be like should everything work out flawlessly, but that is not the essential ingredient of my brand of design fiction, nor is it the real nature of things. My practice is not altogether dystopian either, because our future will not likely be one or the other, but rather a combination that includes elements of both. I posit that our greatest possible impact will be to examine the errors that inevitably accompany progress and change. These don’t have to be apocalyptic. Sometimes they can be subtle and mundane. They creep up on us until one day we realize that we have changed.

As for guerrilla futures, this term comes from futurist and scholar Stuart Candy. Here the idea is to insert the future into the present “to expose publics to possibilities that they are unable or unwilling to give proper consideration. Whether or not they have asked for it.” All to raise awareness of the future, to discuss it and debate it in the present. My provocations are a bit more subtle and less nefarious than those of the threatcasting folks. Rather than terrorist attacks or hackers shutting down the power grid, I focus on the more nuanced possibilities of our techno-social future, things like ubiquitous surveillance, the loss of privacy, and our subtly changing behaviors.

Nevertheless, I applaud this threatcasting business; we need more of it, and there’s plenty of room for both of us.


Privacy vs. Security. The public forum begins.

Some people think I tend toward paranoia. If a lack of blind trust in the human condition means I am paranoid, then I guess I am. This topic comes up today as we see the average citizen joining the discussion about encryption, security, privacy, and the smartphone. By now you have (unless you are living under a rock) heard that Apple has been ordered to help unlock the iPhone that belonged to the San Bernardino terrorist Syed Farook. Apple has refused. You can get the details in the previous links, but the crux is that Apple itself does not know how to get into this iPhone. They designed it that way. It keeps them from getting into your data and keeps everyone else out as well. Apple would have to write a new operating system for this particular phone, sign it (to prove that it came from Apple and not some hack), and then upload it to the phone so that the FBI could get in. In essence, it is a master key, because its constituent parts become part of a knowledge base that can render your phone insecure. If you think the FBI would never use this key, or the program written to make it work, on any other phone, well then, I think you can safely say you are not paranoid. Programmers call this a “back door.” If you believe that only the FBI will discover, find, or hack into the back door (for good reason, of course), then you can safely say you are not paranoid. Furthermore, there is no reasonable protection for this new back door into our phones. Once there is another way, someone will find it.
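The gatekeeping logic described above can be sketched in miniature. This is a hedged illustration, not Apple’s actual mechanism: real iPhones verify an asymmetric signature chained to Apple’s keys before booting an OS image, while the HMAC and the device key below are hypothetical stand-ins used only to show why a signed image, any signed image, gets waved through.

```python
import hashlib
import hmac

# Hypothetical stand-in for the vendor's signing secret; real devices
# check a public-key signature instead of a shared key.
SIGNING_KEY = b"hypothetical-vendor-signing-key"

def sign_image(image: bytes) -> bytes:
    """Produce a signature for an OS image (simplified stand-in)."""
    return hmac.new(SIGNING_KEY, image, hashlib.sha256).digest()

def boot(image: bytes, signature: bytes) -> str:
    """The phone refuses to run any OS whose signature does not verify."""
    if hmac.compare_digest(sign_image(image), signature):
        return "booted"
    return "refused"

official = b"official OS build"
backdoored = b"backdoored OS build"

print(boot(official, sign_image(official)))      # booted
print(boot(backdoored, sign_image(official)))    # refused: wrong signature
print(boot(backdoored, sign_image(backdoored)))  # booted: signed = trusted
```

The last line is the whole controversy in one check: the verification step cannot tell a legitimate update from a court-ordered back door, because “signed by the vendor” is the only thing it tests.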

What good is encryption if it isn’t encryption?

I don’t think this is an argument about whether the FBI is justified in wanting to know what is on that phone. It’s about how they get it, and whether or not they will be able to get it from anyone else (for good reason, of course) the next time they are curious.

You can see how this plays out in my design fiction scenario: Ubiquitous Surveillance. Check it out.




The killer feature for every app.

I have often asked the question: If we could visit the future “in person,” how would it affect us upon our return? How vigorously would we engage our redefined present? Part of the idea behind design fiction, for me, is making the future seem real enough that we want to discuss it and ask ourselves whether this is the future we want. If not, what can we do about it? How might it be changed, refined, or avoided altogether? In a more pessimistic light, I also wonder whether anything could be real enough to rouse us from our media-induced stupor. And the potion is getting stronger.

After Monday and Tuesday this week, I was beginning to think it would be a slow news week in the future-tech sector. Not so. (At least I didn’t stumble onto the stories until Wednesday.)

1. Be afraid.

A scary new novel is out called Ghost Fleet. It sounds immensely entertaining, but also ominously possible. It harkens back to some of my previous blogs on autonomous weapons and the harbinger of ubiquitous hacking. How am I going to get time to read this? That’s another issue.

2. Play it again.

Google applied for this years ago, but its patent on storing “memories” was approved this week. It appears as though it would have been a feature for the ill-fated Google Glass but could easily be embedded in any visual recording function, from networked cameras to a user’s contact lens. Essentially it lets you “play back” whatever you saw, assuming you are wearing or integrating the appropriate recording device or software. “Siri, replay my vacation!” I must admit it sounds cool.

Ghost Fleet, Google memories, uber hacking, and Thync.

3. Hack-a-mania.

How’s this for a teaser? RESEARCHERS HACKED THE BRAKES OF A CORVETTE WITH TEXT MESSAGES. That’s what Fast Company threw out there on Wednesday, but it originated with WIRED magazine. It’s the latest since the Jeep-jacking incident just weeks ago. See how fast technology moves? In that episode, the hackers, or jackers, whatever, used their laptops to control just about every technology the Jeep had available. However, according to WIRED,

“…a new piece of research suggests there may be an even easier way for hackers to wirelessly access those critical driving functions: Through an entire industry of potentially insecure, internet-enabled gadgets plugged directly into cars’ most sensitive guts.”

In this instance,

“A 2-inch-square gadget that’s designed to be plugged into cars’ and trucks’ dashboards and used by insurance firms and trucking fleets to monitor vehicles’ location, speed and efficiency.”

The article clearly demonstrates that these devices are vulnerable to attack, even in government vehicles and, I presume, the White House limo as well. You guys better get to work on that.

4. Think about this.

A new $300 device called Thync is now available to stick on your forehead to either relax or energize you through neurosignaling, a.k.a. electricity that zaps your brain “safely.” It’s not unrelated to the less sexy shock therapy of ages past. Reports suggest that this is anything but fully figured out, but just like the items above, it’s only a matter of time until it escalates to the next level.

So what ties all these together? If we look at the historical track of technology, the overarching theme is convergence. All the things that once were separate have now converged. Movies, texts, phone calls, games, GPS, bar-code scanning, cameras, and about a thousand other technologies have converged into your phone, laptop, or tablet. It is a safe bet that this trend will continue, with devices getting smaller and eventually implanted. Isn’t technology wonderful?

The only problem is that we have yet to figure out the security issues. Do we, for one moment, think that hacking will go away? We rush new apps and devices to market with a “we’ll fix that later” mentality. It’s just a matter of time until your energy, your mood, your “memories,” or our national security are up for grabs. It seems like security ought to be on the feature list of every new gadget, especially the ones that access our bodies, our safety, or our information. That’s pretty much everything, by the way. The idea is especially important because, let’s face it, everything we think is secure, isn’t.
