Tag Archives: Anticipation

The right thing to do. Remember that idea?

I’ve been detecting some blowback recently regarding all the attention surrounding emerging AI, its near-term effect on jobs, and its long-term impact on humanity. Having an anticipatory mindset toward artificial intelligence is just the logical thing to do. As I have said before, designing a car without a braking system would be foolish. Anticipating the eventuality that you might need to slow down or stop the car is just good design. Nevertheless, there are a lot of people, important people in positions of power, who think this is a lot of hooey. They must think that human ingenuity will address any unforeseen circumstances, that science is always benevolent, that stuff like AI is “a long way off,” that the benefits outweigh the downsides, and that all people are basically good. I am disappointed that this includes our Treasury Secretary Steve Mnuchin. WIRED carried the story and so did my go-to futurist Amy Webb. In her newsletter Amy states,

“When asked about the future of artificial intelligence, automation and the workforce at an Axios event, this was Mnuchin’s reply: ‘It’s not even on our radar screen,’ he said, adding that significant workforce disruption due to AI is ‘50 to 100’ years away. ‘I’m not worried at all’”

Sigh! I don’t care what side of the aisle you’re on, that’s just plain naive. Turning a blind eye to potentially transformative technologies is also dangerous. Others are skeptical of any regulation (perhaps rightly so) that stifles innovation and progress. But safeguards and guidelines are not that. They are well-considered recommendations that are designed to protect while facilitating research and exploration. On the other side of the coin, they are also not laws, which means that if you don’t want to or don’t care to, you don’t have to follow them.

Nevertheless, I was pleased to see a relatively comprehensive set of AI principles that emerged from the Asilomar Conference that I blogged about a couple of weeks ago. The 2017 Asilomar conference organized by The Future of Life Institute,

“…brought together an amazing group of AI researchers from academia and industry, and thought leaders in economics, law, ethics, and philosophy for five days dedicated to beneficial AI.”

The gathering generated the Asilomar AI Principles, a remarkable first step on the eve of an era of awesome technological power. None of the people on the panel I highlighted in the last blog are anxious for regulation, but at the same time, they are aware of the enormous potential for bad actors to undermine whatever beneficial aspects of the technology might surface. Despite my misgivings, an AGI is inevitable. Someone is going to build it, and someone else will find a way to misuse it.

There are plenty more technologies that pose questions. One is nanotechnology. Unlike AI, Hollywood doesn’t spend much time painting nanotechnological dystopias; perhaps that, along with the fact that nanoparticles are invisible to the naked eye, lets the little critters slip under the radar. While researching a paper for another purpose, I decided to look into nanotechnology to see what kinds of safeguards and guidelines are in place to deal with that rapidly emerging technology. There are clearly best practices followed by reputable researchers, scientists, and R&D departments, but it was especially disturbing to find out that none of these are mandates. That is especially troubling since there are thousands of consumer products that use nanotechnology, including food, cosmetics, clothing, electronics, and more.

A nanometer is very small. Nanotech concerns itself with creations that exist in the 100nm range and below, roughly a thousand times thinner than a human hair. In the Moore’s Law race, nanothings are the next frontier in cramming data onto a computer chip, or implanting them into our brains or living cells. However, due to their size, nanoparticles can also be inhaled, absorbed through the skin, flushed into the water supply, and leached into the soil. We don’t know what happens if we aggregate a large number of nanoparticles, or differing combinations of them, in our bodies. We don’t even know how to test for it. And, get ready: currently, there are no regulations. That means manufacturers do not need to disclose their use of nanomaterials, and there are no laws to protect the people who work with them.

Herein we have a classic example of bad decisions in the present that make for worse futures. Imagine the opposite: anticipation of what could go wrong, and sound industry intervention at a scale that pre-empts government intervention or the dystopian scenarios that the naysayers claim are impossible.
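The scale comparison is worth a quick back-of-the-envelope check. Assuming an average human-hair diameter of about 100 micrometers (real hairs vary widely, roughly 17–180 µm), the arithmetic looks like this:

```python
# Scale comparison for nanoparticles vs. a human hair.
# Assumption: average human-hair diameter ~100 micrometers.
hair_diameter_nm = 100_000   # 100 µm expressed in nanometers
nanoparticle_nm = 100        # upper bound of the nanoscale range discussed

ratio = hair_diameter_nm / nanoparticle_nm
print(f"A 100 nm particle is about {ratio:,.0f}x thinner than a human hair")
# prints: A 100 nm particle is about 1,000x thinner than a human hair
```

Particles at the smaller end of the nanoscale, around 10 nm, push that ratio toward 10,000.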


A guerrilla future realized.

This week my brilliant students in Collaborative Studio 4650 staged a real-world guerrilla future for the Humane Technologies: Livable Futures Pop-Up Collaboration at The Ohio State University. The design fiction was replete with diegetic prototypes and a video enactment. Our goal was to present a believable future in 2024 in which ubiquitous AR glasses are part of our mundane everyday. We made the presentation in Sullivant Hall’s Barnett Theater, and each member of the team had a set of mock AR glasses. The audience consisted of about 50 students ranging from the humanities to business. It was an amazing experience. It yielded untold riches for my design fiction research, but there were also a lot of revelations about how we experience and enfold technology. After the presentation, we pulled out the white paper and markers and divided up into groups for a more detailed deconstruction of what had transpired. While I have not plowed through all the scrolls that resulted from the post-presentation discussion groups, it seems universal that we can recognize how technology is apt to modify our behavior. It is also interesting to see that most of us have no clue how to resist these changes. Julian Oliver wrote in his (2011) The Critical Engineering Manifesto,

“5. The Critical Engineer recognises that each work of engineering engineers its user, proportional to that user’s dependency upon it.”

The idea of being engineered by our technology was evident throughout the AugHumana presentation video, and in discussions, we quickly identified the ways in which our current technological devices engineer us. At the same time, we feel more or less powerless to change or affect that phenomenon. Indeed, we have come to accept these small, incremental, seemingly mundane changes to our behavior as innocent or adaptive in a positive way. En masse, they are neither. Kurzweil stated that,

‘We are not going to reach the Singularity in some single great leap forward, but rather through a great many small steps, each seemingly benign and modest in scope.’

History has shown that these steps are incrementally embraced by society and often give way to systems with a life of their own. An idea raised in one discussion group was labeled as effective dissent, but it seems almost obvious that unless we anticipate these imminent behavioral changes, by the time we notice them it is already too late, either because the technology is already ubiquitous or our habits and procedures solidly support that behavior.

There are ties here to material culture and the philosophy of technology that merit more research, but the propensity for technology to affect behavior in an inhumane way is powerful. These are early reflections, no doubt to be continued.


An Experiment in Ubiquitous Surveillance

 

I just returned from the First International Conference on Anticipation in Trento, Italy. The conference was a multi-disciplinary gathering of scholars, practitioners, and thought leaders with the same concern: the future is happening faster than we could ever have imagined. The foundational principles of our disciplines that have anchored us since their inception are no longer sufficient to deal with a future that is increasingly unpredictable. The conference featured experts in economics, the environment, biology, architecture, city planning, design, future studies, foresight, political science, psychology, sociology, and anthropology just to name a few. Each has deep concerns about how to model the future of their disciplines and their relationships with the world around them when our existing frameworks no longer fit and complexity and technology are increasing exponentially.

I presented a paper as part of the Design and Anticipation panel entitled Ubiquitous Surveillance: A Crowd-Sourced Design Fiction. I began by painting the landscape of change and borrowed (as I have often done in this blog) from Ray Kurzweil’s Law of Accelerating Returns. He states that “We won’t experience 100 years of progress in the 21st century — it will be more like 20,000 years of progress (at today’s rate).” The uncertainty is compounded by the reality of technological convergence: the merging of cognitive science with genetics, nanotech, biotech, infotech, robotics, and artificial intelligence. All of these fields are racing toward breakthrough accomplishments. Of course, they cannot be isolated, and so the picture changes in a dynamic and unpredictable way. The reverberations will be sweeping. As I discussed in my paper, there is a natural compatibility between design and future studies, since, “…all design concerns itself with some future, preferably better, whether physical, environmental, or conceptual. Design is creative and iterative. So it is with futures.”
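Kurzweil’s striking figure follows from simple compounding. One common reading of his argument is that the rate of progress doubles every decade, which turns a century of progress into a geometric series measured in “years at today’s rate.” A minimal sketch, with the decade-doubling assumption being my simplification of his model:

```python
# Cumulative progress over 100 years if the rate of progress
# doubles every decade (a simplified reading of Kurzweil's
# Law of Accelerating Returns).
total = 0.0
rate = 1.0              # progress per year, in "today's-rate" units
for decade in range(10):
    total += 10 * rate  # ten years at the current rate
    rate *= 2           # the rate doubles each decade
print(f"{total:,.0f} equivalent years of progress")  # prints: 10,230 equivalent years of progress
```

This simplified version lands at roughly 10,000 equivalent years; Kurzweil’s 20,000 figure comes from his fuller model, in which the doubling interval itself keeps shrinking. Either way, the order of magnitude is the point.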

I explained the notion of design fiction as a hybrid of science fiction narrative, critical design, conventional design and foresight studies. The objective is to provoke interdisciplinary conversations and reflect on the significance of innovation for societies, governments, culture, and individuals. My methods include The Lightstream Chronicles and my newest area, guerrilla futures. In both cases the aim is,

“…to draw a larger circle for these conversations extending beyond academia, governmental inertia, and commercial influence. And to include those who will be affected most by these changes to lifestyle and behavior: the public-at-large.

In storytelling, the focus is on people and drama; there are interactions, and sometimes things go wrong. The fictional story becomes a way for us to anticipate conflict and complexity before it becomes a problem to be solved — a kind of thought problem to engage critical thinking. However, if these issues are framed as expected, utopian, or idealistic, they risk losing force. Thus, for the story to have the potential of moving beyond mere entertainment, the ideas must be disruptive enough for the individual to take pause.”

All of this is a lengthy set-up for my current experiment to generate discussion about the future: Ubiquitous Surveillance. The following is a direct lift from my presentation.

“Imagine if you will that the year is 2020. Political and commercial influences have convinced global society that not only our security but also our convenience and fulfillment will be enhanced via ubiquitous surveillance, e.g., cameras everywhere. Let us pull some plausible threads of existing technological advances: It is now possible to have cameras the size and thickness of a postage stamp. These PaperCams can be “posted” anywhere and are available to everyone for no fee. Once distributed (ideally 1/3m3), imagery and location data are networked into a massive database. A smartphone app can locate and link to any PaperCam and allow users, positioned in front, to transmit a still or video image to anyone at any time from any place—no selfie required. GPS metadata verifies location, and group photos take on a new significance. It is touted as both a communication convenience and a security benefit. Imagery can employ facial recognition and predictive algorithms to identify criminal behavior and potential terrorist events. Cameras can be used to locate disaster, accident, or crime victims, or to provide an emergency visual anywhere.

Cameras are always on. They do not require our permission. To mitigate the potential adverse reaction to an invasion of privacy, only computers/artificial intelligence (AI) evaluate the images to identify potential threats. The increasing mass of big data enables facial recognition, predictive algorithms for body language, gestures, sounds, voice analysis and other cues. The AI can observe situations and determine whether they are dangerous or benign. Since other humans are not seeing the imagery, personal moments are not in danger of being perniciously viewed and would not be logged unless the AI detects threatening behavior.

A global security corporation, VisibleFutureCorp., has been retained to monitor the cameras.”
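The scenario’s privacy mechanism, in which only an AI ever sees raw imagery and footage leaves a trace only when a threat is detected, amounts to a simple gate. A hypothetical sketch of that logic follows; the class name, threshold, and stand-in classifier are all my inventions, not anything from the scenario itself:

```python
# Hypothetical sketch of the scenario's "AI-only" privacy gate:
# a classifier scores each frame; nothing is retained or shown
# to humans unless the threat score crosses a threshold.
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class PaperCam:
    cam_id: str
    threshold: float = 0.9                       # hypothetical cutoff
    log: List[str] = field(default_factory=list)

    def observe(self, frame: bytes, score_threat: Callable[[bytes], float]) -> None:
        score = score_threat(frame)              # only the AI sees the frame
        if score >= self.threshold:
            self.log.append(f"{self.cam_id}: threat score {score:.2f}")
        # below threshold: the frame is discarded, nothing is retained

# Stand-in for the scenario's predictive model.
def dummy_model(frame: bytes) -> float:
    return 0.95 if b"threat" in frame else 0.1

cam = PaperCam("lobby-01")
cam.observe(b"ordinary street scene", dummy_model)
cam.observe(b"threat gesture detected", dummy_model)
print(cam.log)  # only the flagged frame leaves a trace
```

Note that the reassurance in the scenario rests entirely on the discard branch: the benign frames vanish only if the system is built, and audited, to actually throw them away.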

Where will the camera show up next?
the cam card

If you want to jump into this future scenario, I have developed a do-it-yourself camera that you can print out and place around your environment, your office, and every room in your home, so that it is impossible to go through the day without noticing one of the cameras watching you. After this experience, visit the VisibleFutureCorp. website and get a bit deeper into the experience and its believability. There is a link on that site to join the conversation.

I hope you will try it out.
