Of Threatcasting

Until a Google alert came through my email this week, I have to admit, I had never heard the term threatcasting. I clicked through to an article in Slate that gave me the overview, and when I discovered that threatcasting is a blood relative of guerrilla futures, I was more than intrigued. First, let’s bring you up to speed on threatcasting, and then I will remind my readers about this guerrilla business.

The Slate article was written by futurist Brian David Johnson, formerly of Intel and now in residence at Arizona State University, and Natalie Vanatta, a U.S. Army Cyber officer with a Ph.D. in applied mathematics who is currently researching in a military think tank. These folks are in the loop, and kudos to ASU for being a leader in bringing thought leaders, creators, and technologists together to look at the future. According to the article, threatcasting is “… a conceptual process used to envision and plan for risks 10 years in the future.” If you know what my research focus is, then you know we are already on the same page. The two writers work with “Arizona State University’s Threatcasting Lab, whose mission is to use threatcasting to envision futures that empower actions.” The lab creates future scenarios that bring together “… experts in social science, technology, economics, cultural history, and other fields.” Their future scenarios have inspired companies like Cisco, through the Cisco Hyperinnovation Living Labs (CHILL), to create a two-day summit to look at countermeasures for threats to the Internet of Things. They also work with the “… U.S. Army Cyber Institute, a military think tank tasked to prepare for near-future challenges the Army will face in the digital domain.” The article continues:

“The threatcasting process might generate only negative visions if we stopped here. However, the group then use the science-fiction prototype to explore the factors and events that led to the threat. This helps them think more clearly how to disrupt, lessen, or recover from the potential threats. From this the group proposes short-term, actionable steps to implement today to nudge society away from potential threats.”

So, as I have already said, this is a very close cousin of my brand of design fiction. Where it differs is in its focus on threats: the downsides and unintended consequences of many of the technologies that we take for granted. Of course, design fiction can do this as well, but design fiction has many flavors, and not all of them deal with future downsides.

Design fictions, however, are supposed to be provocations, and I am an advocate of the idea that tension creates the most successful provocations. We could paint utopian futures, pictures of what the world will be like should everything work out flawlessly, but that is not the essential ingredient of my brand of design fiction, nor is it the real nature of things. My practice is not altogether dystopian either, because our future will likely be neither one nor the other, but rather a combination that includes elements of both. I posit that our greatest possible impact will come from examining the errors that inevitably accompany progress and change. These don’t have to be apocalyptic. Sometimes they are subtle and mundane. They creep up on us until one day we realize that we have changed.

As for guerrilla futures, the term comes from futurist and scholar Stuart Candy. Here the idea is to insert the future into the present “to expose publics to possibilities that they are unable or unwilling to give proper consideration. Whether or not they have asked for it.” All this to raise awareness of the future, to discuss and debate it in the present. My provocations are a bit more subtle and less nefarious than those of the threatcasting folks. Rather than terrorist attacks or hackers shutting down the power grid, I focus on the more nuanced possibilities of our techno-social future: things like ubiquitous surveillance, the loss of privacy, and our subtly changing behaviors.

Nevertheless, I applaud this threatcasting business. We need more of it, and there’s plenty of room for both of us.

Corporate Sci-Fi.

Note: Also published on LinkedIn

Why your company needs to play in the future.

As a professor of design and a design fiction researcher, I write academic papers and blog weekly about the future. I teach about the future of design, and I create future scenarios, sometimes with my students, that provoke us to look at what we are doing, what we are making, why we are making it, and the ramifications that are inevitable. Primarily I try to focus both designers and decision makers on the steps they can take today to keep from being blindsided tomorrow. Futurists seem to be all the rage these days, telling us to prepare for the Singularity, autonomous everything, or that robots will take our jobs. Recently, Jennifer Doudna, co-inventor of the gene editing technique called CRISPR-Cas9, has been making the rounds and sounding the alarm that technology is moving so fast that we aren’t going to be able to contain a host of unforeseen (and foreseen) circumstances inside Pandora’s box. This concern, however, should extend beyond the bioengineering fields into virtually any arena where technology is racing forward, fueled by venture capital and the desperate need to stay on top of whatever space we are playing in. There is a lot at stake. Technology has already redefined privacy, behavioral wellness, personal autonomy, healthcare, labor, and maybe even our humanness, just to name a few.

Several recent articles have highlighted the changing world of design and how the pressure is on designers to make user adoption more like user addiction to ensure the success of a product or app. Behavioral economics is becoming a new arena in which we use algorithms to manipulate users. Some designers pass the buck for the questionable ethics of addictive products to the clients or corporations that employ them; others feel compelled to step aside and work on less lucrative projects or apply their skills to social causes. Most genuinely care and want to help. And designers are uniquely positioned and trained to tackle these wicked problems, if only we would collaborate with them.

Beyond the companies that might be deliberately trying to manipulate us are those that unknowingly, or at least unintentionally, transform our behaviors in ways that are potentially harmful. Traditionally, we seek to hold someone responsible when a product or service is faulty: the physician for malpractice; the designer or manufacturer when a toy causes injury, a garment falls apart, or an appliance self-destructs. But as we move toward systemic designs that are less physical and more emotional, behavioral, or biological, design faults may not be so easy to identify, and their repercussions may become noticeable only after serious issues have arisen. In fact, many of the apps and operating systems in use today launch with admitted errors and bugs. Designers rely on real-life testing to identify problems, then issue patches, revisions, and versions.

In the realm of nanotechnology, while scientists and thought leaders have proposed guidelines and best practices, research and development teams in labs around the world race forward without regulation, creating molecule-sized structures, machines, and substances with no idea whether they are safe or what the long-term effects of exposure might be. In biotechnology, while folks like Jennifer Doudna appeal to a morally and ethically minded cadre of researchers to tread carefully in the realm of genetic engineering (especially when it comes to heritable gene manipulation), those morals and ethics are not universally shared. Recent headlines attest that some scientists are bent on moving forward regardless of the implications.

Some technologies, such as our smartphones, have become equally invasive, yet we now consider them mundane. In just ten years since the introduction of the iPhone, we have transformed behaviors, upended our modes of communication, redefined privacy, distracted our attention, distorted reality, and manipulated a predicted 2.3 billion users as of 2017. [1] It is worth contemplating that this disruption comes not from a faulty product, but from one that can only be considered wildly successful.

There is a plethora of additional technologies poised to redefine our world yet again, including artificial intelligence, ubiquitous surveillance, human augmentation, robotics, virtual, augmented, and mixed reality, and the pervasive Internet of Things. Many of these technologies make their way into our experiences through the promise of better living, medical breakthroughs, or a safer and more secure life. But too often we ignore the potential downsides, the unintended consequences, or the systemic ripple effects that these technologies spawn. Why?

In many cases, we do not want to stand in the way of progress. In others, we believe that the benefits outweigh the disadvantages, yet this is the same thinking that has spawned some of our most complex and daunting systems, from nuclear weapons to air travel and the internal combustion engine. Each of these began with the best of intentions and, in many ways, was as successful and initially beneficial as it could be. At the same time, they advanced and proliferated far more rapidly than we were prepared to accommodate. Dirty bombs are a reality we did not expect. The alluring efficiency with which we can fly from one city to another has nevertheless spawned a gnarly network of air traffic, baggage logistics, and anti-terrorism measures that is arguably more elaborate than getting an aircraft off the ground. Traffic, freeways, infrastructure, safety, and the drain on natural resources are complexities never imagined with the revolution of personal transportation. We did not foresee the entailments of success.

This is not always true. There have often been scientists and thought leaders waving the yellow flag of caution. I have written about how, “back in 1975, scientists and researchers got together at Asilomar because they saw the handwriting on the wall. They drew up a set of resolutions to make sure that one day the promise of Bioengineering (still a glimmer in their eyes) would not get out of hand.”[2] Indeed, researchers like Jennifer Doudna continue to carry the banner. A similar conference took place earlier this year to alert us to the potential dangers of technology, and another to put forth recommendations and guidelines to ensure that when machines are smarter than we are, they carry on in a beneficent role. Too often, however, it is only the scientists and visionaries who attend these conferences. [3] Noticeably absent, though not always, is corporate leadership.

Nevertheless, in this country there remains no safeguarding regulation for nanotech, bioengineering, or AI research. It is a free-for-all, any of which could massively disrupt not only our lifestyles but also our culture, our behavior, and our humanness. Who is responsible?

For nearly 40 years, an environmental movement has been spreading globally. Good stewardship is a good idea. But it wasn’t until most corporations saw a way for it to make economic sense that they began to focus on it and then promote it as their contribution to society, their responsibility, and their civic duty. As well-intentioned as some may be (and many are), many more are not paying attention to the effect of their technological achievements on our human condition.

We design most technologies with a combination of perceived user need and commercial potential. In many cases, these are coupled with more altruistic motivations such as a “do no harm” commitment to the environment and fair labor practices. As we move toward the capability to change ourselves in fundamental ways, are we also giving significant thought to the behaviors that we will engender by such innovations, or the resulting implications for society, culture, and the interconnectedness of everything?

Enter Humane Technology

Ultimately we will have to demand this level of thought, beginning with ourselves. But we should not fight this alone. Corporations concerned with appearing sensitive and proactive toward the environment and social justice need to add a new pillar to their edifice as responsible global citizens: humane technology.

Humane technology considers the socio-behavioral ramifications of products and services: digital dependencies and addictions, job loss, genetic repercussions, and the human impact of nanotechnologies, AI, and the Internet of Things.

To whom do we turn when a 14-year-old becomes addicted to her smartphone or obsessed with her social media popularity? We could condemn the parents for lack of supervision, but many of them are equally distracted. Who is responsible when a drone is misused to vandalize property or fire a gun, or for the anticipated 1 billion drones flying around by 2030? [4] Who will answer for the repercussions of artificial intelligence that spouts hate speech? Where will the buck stop when genetic profiling becomes a requirement for getting insured or getting a job?

While the backlash against these types of unintended consequences or unforeseen circumstances is not yet widespread, and citizens have not taken to the streets in mass protest, behavioral and social changes like these may be imminent as a result of dozens of transformational technologies currently under development in labs and R&D departments across the globe. Who is looking at the unforeseen or the unintended? Who is paying attention, and who is turning a blind eye?

It was possible to have anticipated texting and driving. It is possible to anticipate a host of horrific side effects of nanotechnology on both humans and the environment. It is possible to tag the ever-present bad actor to any number of new technologies. It is possible to identify when the race to master artificial intelligence may be coming at the expense of making it safe, or to know where to draw the line. In fact, it is a marketing opportunity for corporate interests to take the lead and leverage their efforts to preempt adverse side effects as a distinctive advantage.

Emphasizing humane technology is an automatic benefit for an ethical company, and for those more concerned with profit than ethics (just between you and me), it offers the opportunity for a better brand image and (at least) the appearance of social concern. Whatever the motivation, we are looking at a future where we are either prepared for what happens next, or we are caught napping.

This responsibility should start with anticipatory methodologies that examine the social, cultural, and behavioral ramifications, and unintended consequences, of what we create. Designers and those trained in design research are excellent collaborators. My brand of design fiction is intended to take us into the future in an immersive and visceral way, to provoke the necessary discussion and debate that anticipate the storm, should there be one; promising utopia is rarely the tinder to fuel a provocation. Design fiction embraces the art of critical thinking and thought problems as a means of anticipating conflict and complexity before these become problems to be solved.

Ultimately, we have to depart from the idea that technology will be the magic pill that solves the ills of humanity. Design fiction and other anticipatory methodologies can help us acknowledge our humanness and our propensity to foul things up. If we do not self-regulate, regulation will inevitably follow, probably spurred on by some unspeakable tragedy. There is an opportunity now for the corporation to step up to the future with responsible, thoughtful compassion for our humanity.

1. https://www.statista.com/statistics/330695/number-of-smartphone-users-worldwide/

2. http://theenvisionist.com/2017/08/04/now-2/

3. http://theenvisionist.com/2017/03/24/genius-panel-concerned/

4. http://www.abc.net.au/news/2017-08-31/world-of-drones-congress-brisbane-futurist-thomas-frey/8859008

Are you listening to the Internet of Things? Someone is.

As usual, it is a toss-up as to what I should write about this week. Is it WIRED’s article on the artificial womb, FastCo’s article on design thinking, the design fiction world of the movie The Circle, or WIRED’s warning about apps using your phone’s microphone to listen for ultrasonic marketing ‘beacons’ that you can’t hear? Tough call, but I decided on a different WIRED post that talked about the vision of Zuckerberg’s future at F8. Actually, the F8 future is a bit like The Circle anyway, so I might be killing two birds with one stone.

At first, I thought the article, titled “Look to Zuck’s F8, Not Trump’s 100 Days, to See the Shape of the Future,” would be just another Trump-bashing opportunity (which I sometimes think WIRED prefers more than writing about tech), but not so. It was about tech, mostly.

The article, written by Zachary Karabell, starts out with this quote:

“While the fate of the Trump administration certainly matters, it may shape the world much less decisively in the long-term than the tectonic changes rapidly altering the digital landscape.”

I believe this statement is dead-on, but I would include the entire “technological” landscape. The stage is becoming increasingly “set,” as the article continues,

“At the end of March, both the Senate and the House voted to roll back broadband privacy regulations that had been passed by the Federal Communications Commission in 2016. Those would have required internet service providers to seek customers’ explicit permission before selling or sharing their browsing history.”

Combine that with

“Facebook[’s] vision of 24/7 augmented reality with sensors, camera, and chips embedded in clothing, everyday objects, and eventually the human body…”

and the looming possibility of ending net neutrality, we could be setting ourselves up for the real Circle future.

“A world where data and experiences are concentrated in a handful of companies with what will soon be trillion dollar capitalizations risks being one where freedom gives way to control.”

To add kindling to this thicket, there is the Quantified Self movement (QS). According to its website,

“Our mission is to support new discoveries about ourselves and our communities that are grounded in accurate observation and enlivened by a spirit of friendship.”

Huh? Ok. But they want to do this using “self-tracking tools.” This means sensors. They could be in wearables or implantables or ingestibles. Essentially, they track you. Presumably, this is all so that we become more self-aware and more knowledgeable about ourselves and our behaviors. Health, wellness, anxiety, depression, concentration; the list goes on. Like many emerging movements that are linked to technologies, we open the door through health care or longevity, because it is an easy argument that being healthy or fit is better than being sick and out of shape. But that is all too simple. QS says that we gain “self knowledge through numbers,” and in the digital age that means data. In a climate that is increasingly less regulatory about what data can be shared and with whom, this could be the beginnings of the perfect storm.

As usual, I hope I’m wrong.

A guerrilla future realized.

This week my brilliant students in Collaborative Studio 4650 provided a real-world guerrilla future for the Humane Technologies: Livable Futures Pop-Up Collaboration at The Ohio State University. The design fiction was replete with diegetic prototypes and a video enactment. Our goal was to present a believable future in 2024, when ubiquitous AR glasses are part of our mundane everyday. We made the presentation in Sullivant Hall’s Barnett Theater, and each member of the team had a set of mock AR glasses. The audience consisted of about 50 students, ranging from the humanities to business. It was an amazing experience. It holds untold riches for my design fiction research, but there were also a lot of revelations about how we experience and enfold technology. After the presentation, we pulled out the white paper and markers and divided up into groups for a more detailed deconstruction of what had transpired. While I have not plowed through all the scrolls that resulted from the post-presentation discussion groups, it seems universal that we can recognize how technology is apt to modify our behavior. It is also interesting to see that most of us have no clue how to resist these changes. Julian Oliver wrote in his (2011) The Critical Engineering Manifesto,

“5. The Critical Engineer recognises that each work of engineering engineers its user, proportional to that user’s dependency upon it.”

The idea of being engineered by our technology was evident throughout the AugHumana presentation video, and in discussions, we quickly identified the ways in which our current technological devices engineer us. At the same time, we feel more or less powerless to change or affect that phenomenon. Indeed, we have come to accept these small, incremental, seemingly mundane changes to our behavior as innocent, or as adaptive in a positive way. En masse, they are neither. Kurzweil stated that,

‘We are not going to reach the Singularity in some single great leap forward, but rather through a great many small steps, each seemingly benign and modest in scope.’

History has shown that these steps are incrementally embraced by society and often give way to systems with a life of their own. An idea raised in one discussion group was labeled effective dissent, but it seems almost obvious that unless we anticipate these imminent behavioral changes, by the time we notice them it will already be too late, either because the technology is already ubiquitous or because our habits and procedures solidly support that behavior.

There are ties here to material culture and the philosophy of technology that merit more research, but the propensity for technology to affect behavior in an inhumane way is powerful. These are early reflections, no doubt to be continued.
