
Of Threatcasting

Until a Google alert came through my email this week, I have to admit, I had never heard the term threatcasting. I clicked into an article in Slate that gave me the overview, and when I discovered that threatcasting is a blood relative of guerrilla futures, I was more than intrigued. First, let’s bring you up to speed on threatcasting, and then I will remind my readers about this guerrilla business.

The Slate article was written by futurist Brian David Johnson, formerly of Intel and now in residence at Arizona State University, and Natalie Vanatta, a U.S. Army Cyber officer with a Ph.D. in applied mathematics who is currently researching in a military think tank. These folks are in the loop, and kudos to ASU for being a leader in bringing thought leaders, creators and technologists together to look at the future. According to the article, threatcasting is “… a conceptual process used to envision and plan for risks 10 years in the future.” If you know what my research focus is, then you know we are already on the same page. The two writers work with “Arizona State University’s Threatcasting Lab, whose mission is to use threatcasting to envision futures that empower actions.” The lab creates future scenarios that bring together “… experts in social science, technology, economics, cultural history, and other fields.” Their future scenarios have inspired companies like Cisco, through the Cisco Hyperinnovation Living Labs (CHILL), to create a two-day summit to look at countermeasures for threats to the Internet of Things. They also work with the “… U.S. Army Cyber Institute, a military think tank tasked to prepare for near-future challenges the Army will face in the digital domain.” The article continues:

“The threatcasting process might generate only negative visions if we stopped here. However, the group then use the science-fiction prototype to explore the factors and events that led to the threat. This helps them think more clearly how to disrupt, lessen, or recover from the potential threats. From this the group proposes short-term, actionable steps to implement today to nudge society away from potential threats.”

So, as I have already said, this is a very close cousin of my brand of design fiction. Where it differs is that it focuses on threats, the downsides and unintended consequences of many of the technologies that we take for granted. Of course, design fiction can do this as well, but design fiction has many flavors, and not all of them deal with future downsides.

Design fictions, however, are supposed to be provocations, and I am an advocate of the idea that tension creates the most successful provocations. We could paint utopian futures, a picture of what the world will be like should everything work out flawlessly, but that is not the essential ingredient of my brand of design fiction, nor is it the real nature of things. However, my practice is not altogether dystopian either, because our future will not likely be one or the other, but rather a combination that includes elements of both. I posit that our greatest possible impact will be to examine the errors that inevitably accompany progress and change. These don’t have to be apocalyptic. Sometimes they can be subtle and mundane. They creep up on us until one day we realize that we have changed.

As for guerrilla futures, this term comes from futurist and scholar Stuart Candy. Here the idea is to insert the future into the present “to expose publics to possibilities that they are unable or unwilling to give proper consideration. Whether or not they have asked for it.” All to raise awareness of the future, to discuss it and debate it in the present. My provocations are a bit more subtle and less nefarious than those of the threatcasting folks. Rather than terrorist attacks or hackers shutting down the power grid, I focus on the more nuanced possibilities of our techno-social future, things like ubiquitous surveillance, the loss of privacy, and our subtly changing behaviors.

Nevertheless, I applaud this threatcasting business; we need more of it, and there’s plenty of room for both of us.


Corporate Sci-Fi.

Note: Also published on LinkedIn

 

Why your company needs to play in the future.

As a professor of design and a design fiction researcher, I write academic papers and blog weekly about the future. I teach about the future of design, and I create future scenarios, sometimes with my students, that provoke us to look at what we are doing, what we are making, why we are making it and the ramifications that are inevitable. Primarily, I try to focus both designers and decision makers on the steps they can take today to keep from being blindsided tomorrow. Futurists seem to be all the rage these days, telling us to prepare for the Singularity, autonomous everything, or robots taking our jobs. Recently, Jennifer Doudna, co-inventor of the gene editing technique called Crispr-Cas9, has been making the rounds and sounding the alarm that technology is moving so fast that we aren’t going to be able to contain a host of unforeseen (and foreseen) circumstances inside Pandora’s box. This concern, however, should extend beyond just the bioengineering fields into virtually any arena where technology is racing forward, fueled by venture capital and the desperate need to stay on top of whatever space we are playing in. There is a lot at stake. Technology has already redefined privacy, behavioral wellness, personal autonomy, healthcare, labor, and maybe even our humanness, just to name a few.

Several recent articles have highlighted the changing world of design and how the pressure is on designers to make user adoption more like user addiction to ensure the success of a product or app. The world of behavioral economics is becoming a new arena in which we are using algorithms to manipulate users. Some designers pass the buck for the questionable ethics of addictive products to the clients or corporations that employ them; others feel compelled to step aside and work on less lucrative projects or apply their skills to social causes. Most really care and want to help. But designers are uniquely positioned and trained to tackle these wicked problems—if we would collaborate with them.

Beyond the companies that might be deliberately trying to manipulate us are those that unknowingly, or at least unintentionally, transform our behaviors in ways that are potentially harmful. Traditionally, we seek to hold someone responsible when a product or service is faulty: the physician for malpractice, the designer or manufacturer when a toy causes injury, a garment falls apart, or an appliance self-destructs. But as we move toward systemic designs that are less physical and more emotional, behavioral, or biological, design faults may not be so easy to identify, and their repercussions may become noticeable only after serious issues have arisen. In fact, we launch many of the apps and operating systems used today with admitted errors and bugs. Designers rely on real-life testing to identify problems and to issue patches, revisions, and versions.

In the realm of nanotechnology, while scientists and thought leaders have proposed guidelines and best practices, research and development teams in labs around the world race forward without regulation, creating molecule-sized structures, machines, and substances with no idea whether they are safe or what the long-term effects of exposure might be. In biotechnology, while folks like Jennifer Doudna appeal to an ethical cadre of researchers to tread carefully in the realm of genetic engineering (especially when it comes to inheritable gene manipulation), those morals and ethics are not universally shared. Recent headlines attest to the fact that some scientists are bent on moving forward regardless of the implications.

Some technologies, such as our smartphones, have become equally invasive, yet we now consider them mundane. In just ten years since the introduction of the iPhone, we have transformed behaviors, upended our modes of communication, redefined privacy, distracted our attention, distorted reality and manipulated a predicted 2.3 billion users as of 2017. [1] It is worth contemplating that this disruption is not from a faulty product, but rather one that can only be considered wildly successful.

There is a plethora of additional technologies poised to redefine our worlds yet again, including artificial intelligence, ubiquitous surveillance, human augmentation, robotics, virtual, augmented and mixed reality, and the pervasive Internet of Things. Many of these technologies make their way into our experiences through the promise of better living, medical breakthroughs, or a safer and more secure life. But too often we ignore the potential downsides, the unintended consequences, or the systemic ripple effects that these technologies spawn. Why?

In many cases, we do not want to stand in the way of progress. In others, we believe that the benefits outweigh the disadvantages, yet this is the same thinking that has spawned some of our most complex and daunting systems, from nuclear weapons to air travel and the internal combustion engine. Each of these began with the best of intentions and, in many ways, was as successful and initially beneficial as it could be. At the same time, they advanced and proliferated far more rapidly than we were prepared to accommodate. Dirty bombs are a reality we did not expect. The alluring efficiency with which we can fly from one city to another has nevertheless spawned a gnarly network of air traffic, baggage logistics, and anti-terrorism measures that are arguably more elaborate than getting an aircraft off the ground. Traffic, freeways, infrastructure, safety, and the drain on natural resources are complexities never imagined with the revolution of personal transportation. We didn’t see the entailments of success.

This is not always true. There have often been scientists and thought leaders waving the yellow flag of caution. I have written about how, “back in 1975, scientists and researchers got together at Asilomar because they saw the handwriting on the wall. They drew up a set of resolutions to make sure that one day the promise of Bioengineering (still a glimmer in their eyes) would not get out of hand.”[2] Indeed, researchers like Jennifer Doudna continue to carry the banner. A similar conference took place earlier this year to alert us to the potential dangers of technology, and another to put forth recommendations and guidelines to ensure that when machines are smarter than we are, they carry on in a beneficent role. Too often, however, it is only the scientists and visionaries who attend these conferences. [3] Noticeably absent, though not always, is corporate leadership.

Nevertheless, in this country, there remains no safeguarding regulation for nanotech, bioengineering, or AI research. It is a free-for-all, and any of it could massively disrupt not only our lifestyles but also our culture, our behavior, and our humanness. Who is responsible?

For nearly 40 years there has been an environmental movement that has spread globally. Good stewardship is a good idea. But it wasn’t until most corporations saw a way for it to make economic sense that they began to focus on it and then promote it as their contribution to society, their responsibility, and their civic duty. As well intentioned as some may be (and many are), many more are not paying attention to the effect of their technological achievements on our human condition.

We design most technologies with a combination of perceived user need and commercial potential. In many cases, these are coupled with more altruistic motivations such as a “do no harm” commitment to the environment and fair labor practices. As we move toward the capability to change ourselves in fundamental ways, are we also giving significant thought to the behaviors that we will engender by such innovations, or the resulting implications for society, culture, and the interconnectedness of everything?

Enter Humane Technology

Ultimately we will have to demand this level of thought, beginning with ourselves. But we should not fight this alone. Corporations concerned with appearing sensitive and proactive toward the environment and social justice need to add a new pillar to their edifice as responsible global citizens: humane technology.

Humane technology considers the socio-behavioral ramifications of products and services: digital dependencies and addictions, job loss, genetic repercussions, and the human impact of nanotechnologies, AI, and the Internet of Things.

To whom do we turn when a 14-year-old becomes addicted to her smartphone or obsessed with her social media popularity? We could condemn the parents for lack of supervision, but many of them are equally distracted. Who is responsible for the misuse of a drone to vandalize property or fire a gun, or for the anticipated 1 billion drones flying around by 2030? [4] Who will answer for the repercussions of artificial intelligence that spouts hate speech? Where will the buck stop when genetic profiling becomes a requirement for getting insured or getting a job?

While the backlash against these types of unintended consequences or unforeseen circumstances is not yet widespread, and citizens have not taken to the streets in mass protests, behavioral and social changes like these may be imminent as a result of dozens of transformational technologies currently under development in labs and R&D departments across the globe. Who is looking at the unforeseen or the unintended? Who is paying attention and who is turning a blind eye?

It was possible to have anticipated texting and driving. It is possible to anticipate a host of horrific side effects from nanotechnology, on both humans and the environment. It’s possible to tag the ever-present bad actor to any number of new technologies. It is possible to identify when the race to master artificial intelligence may be coming at the expense of making it safe, or of drawing the line. In fact, it is a marketing opportunity for corporate interests to take the lead and leverage their efforts to preempt adverse side effects as a distinctive advantage.

Emphasizing humane technology is an automatic benefit for an ethical company, and for those more concerned with profit than ethics (just between you and me), it offers the opportunity for a better brand image and (at least) the appearance of social concern. Whatever the motivation, we are looking at a future where we are either prepared for what happens next, or we are caught napping.

This responsibility should start with anticipatory methodologies that examine the social, cultural and behavioral ramifications, and unintended consequences, of what we create. Designers and those trained in design research are excellent collaborators. My brand of design fiction is intended to take us into the future in an immersive and visceral way to provoke the necessary discussion and debate that anticipates the storm, should there be one; promising utopia is rarely the tinder to fuel a provocation. Design fiction embraces the art of critical thinking and thought problems as a means of anticipating conflict and complexity before these become problems to be solved.

Ultimately we have to depart from the idea that technology will be the magic pill that solves the ills of humanity. Design fiction and other anticipatory methodologies can help us acknowledge our humanness and our propensity to foul things up. If we do not self-regulate, regulation will inevitably follow, probably spurred on by some unspeakable tragedy. There is an opportunity now for corporations to step up to the future with responsible, thoughtful compassion for our humanity.

 

 

1. https://www.statista.com/statistics/330695/number-of-smartphone-users-worldwide/

2. http://theenvisionist.com/2017/08/04/now-2/

3. http://theenvisionist.com/2017/03/24/genius-panel-concerned/

4. http://www.abc.net.au/news/2017-08-31/world-of-drones-congress-brisbane-futurist-thomas-frey/8859008


On utopia and dystopia. Part 1.

A couple of interesting articles cropped up in the past week or so coming out of the WIRED Business Conference. The first was an interview with Jennifer Doudna, a pioneer of Crispr/Cas9, the gene editing technique that makes editing DNA nearly as simple as splicing a movie together. That is, if you’re a geneticist. According to the interview, most of this technology is in use in crop design, for things like longer-lasting potatoes or wheat that doesn’t mildew. But Doudna knows that this is a potential Pandora’s Box.

“In 2015, Doudna was part of a broad coalition of leading biologists who agreed to a worldwide moratorium on gene editing to the “germ line,” which is to say, edits that get passed along to subsequent generations. But it’s legally non-binding, and scientists in China have already begun experiments that involve editing the genome of human embryos.”

Crispr May Cure All Genetic Disease—One Day

Super-babies are just one of the potential ways to misuse Crispr. I blogged a longer and more diabolical list a couple of years ago.

Meddling with the primal forces of nature.

In her recent interview, though, Doudna focused on the more positive effects on farming, things like rice and tomatoes.

You may not immediately see the connection, but there was a related story from the same conference where WIRED interviewed Jonathan Nolan and Lisa Joy co-creators of the HBO series Westworld. If you haven’t seen Westworld, I recommend it if only for Anthony Hopkins’ performance. As far as I’m concerned Anthony Hopkins could read the phone book, and I would be spellbound.

At any rate, the article quotes:

“The first season of Westworld wasted no time in going from “hey cool, robots!” to “well, that was bleak.” Death, destruction, android torture—it’s all been there from the pilot onward.”

Which pretty much sums it up. According to Nolan,
“We’re inventing cautionary tales for ourselves…”

“And Joy sees Westworld, and sci-fi in general, as an opportunity to talk about what humanity could or should do if things start to go wrong, especially now that advancements in artificial intelligence technologies are making things like androids seem far more plausible than before. “We’re leaping into the age of the unfathomable, the time when machines [can do things we can’t],”

Joy said.

Westworld’s Creators Know Why Sci-Fi Is So Dystopian

To me, this sounds familiar. It is the essence of my particular brand of design fiction. I don’t always set out to make it dystopian, but if we look at the way things seem to naturally evolve, virtually every technology once envisioned as a benefit to humankind ends up with someone misusing it. To look at any potentially transformative tech and not ask, “Transform into what?” is simply irresponsible. We love to sell our ideas on their promise of curing disease, saving lives, and ending suffering, but the technologies that we are designing today have epic downsides that many technologists do not even understand. Misuse happens so often that I’ve begun to see us as reckless if we don’t anticipate these repercussions in advance. It’s the subject of a new paper that I’m working on.

In the meantime, it’s important that we pay attention and demand that others do, too.

There’s more from the science fiction world on utopias vs. dystopias, and I’ll cover that next week.

 

 


An AI as President?

 

Back on May 19th, before I went on holiday, I promised to comment on an article that appeared that week advocating that we would be better off with artificial intelligence (AI) as President of the United States. Joshua Davis authored the piece, Hear me out: Let’s Elect An AI As President, for the business section of WIRED online. Let’s start out with a few quotes.

“An artificially intelligent president could be trained to maximize happiness for the most people without infringing on civil liberties.”

“Within a decade, tens of thousands of people will entrust their daily commute—and their safety—to an algorithm, and they’ll do it happily…The increase in human productivity and happiness will be enormous.”

Let’s start with the word happiness. What is that anyway? I’ve seen it around in several discourses about the future, the notion that somehow we have to start focusing on human happiness above all things, but what makes me happy and what makes you happy may very well be different things. Then there is the frightening idea that it is the job of government to make us happy! There are a lot of folks out there who think the government should give us a guaranteed income, pay for our healthcare, and now, apparently, also make us happy. If you haven’t noticed from my previous blogs, I am not a progressive. If you believe that government should undertake the happy challenge, you had better hope that its idea of happiness coincides with your own. Gerd Leonhard, a futurist whose work I respect, says that there are two types of happiness: the first is hedonic (pleasure), which tends to be temporary, and the other is eudaimonic happiness, which he defines as human flourishing.1 I prefer the latter as it is likely to be more meaningful. Meaning is rather crucial to well-being and purpose in life. I believe that we should be responsible for our own happiness. God help us if we leave it up to a machine.

This brings me to my next issue with this insane idea. Davis suggests that by simply not driving, there will be an enormous increase in human productivity and happiness. According to the website Overflow Data,

“Of the 139,786,639 working individuals in the US, 7,000,722, or about 5.01%, use public transit to get to work according to the 2013 American Communities Survey.”

Are those 7 million working individuals who don’t drive happier and more productive? The survey should have asked, but I’m betting the answer is no. Davis also assumes that everyone will be able to afford an autonomous vehicle. Maybe providing every American with an autonomous vehicle is also the job of the government.

Where I agree with Davis is that we will probably abdicate our daily commute to an algorithm and do it happily. Maybe this is the most disturbing part of his argument. As I am fond of saying, we are sponges for technology, and we often adopt new technology without so much as a thought toward the broader ramifications of what it means to our humanity.

There are sober people out there advocating that we must start to abdicate our decision-making to algorithms because we have too many decisions to make. They are concerned that the current state of affairs is simply too painful for humankind. If you dig into the rationale that these experts are using, many of them are motivated by commerce. Already Google and Facebook and the algorithms of a dozen different apps are telling you what you should buy, where you should eat, who you should “friend” and, in some cases, what you should think. They give you news (real or fake), and they tell you this is what will make you happy. Is it working? Agendas are everywhere, but very few of them have you in the center.

As part of his rationale, Davis cites the proven ability of AI to beat the world’s Go champions over and over and over again, and to find melanomas better than board-certified dermatologists.

“It won’t be long before an AI is sophisticated enough to implement a core set of beliefs in ways that reflect changes in the world. In other words, the time is coming when AIs will have better judgment than most politicians.”

That seems like grounds to elect one as President, right? In fact, it is just another way for us to take our eye off the ball, to subordinate our autonomy to more powerful forces in the belief that technology will save us and make us happier.

Back to my previous point, that’s what is so frightening. It is precisely the kind of argument that people buy into. What if the new AI President decides that we will all be happier if we’re sedated, and then using executive powers makes it law? Forget checks and balances, since who else in government could win an argument against an all-knowing AI? How much power will the new AI President give to other algorithms, bots, and machines?

If we are willing to give up the process of purposeful work to make a living wage in exchange for a guaranteed income, to subordinate our decision-making to have “less to think about,” to abandon reality for a “good enough” simulation, and believe that this new AI will be free of the special interests who think they control it, then get ready for the future.

1. Leonhard, Gerd. Technology vs. Humanity: The Coming Clash between Man and Machine. p112, United Kingdom: Fast Future, 2016. Print.


Disruption. Part 2.

 

Last week I discussed the idea of technological disruption. Essentially, these are innovations that make fundamental changes in the way we work or live. In turn, these changes affect culture and behavior. Issues of design and culture are the stuff that interests me and my research: how easily and quickly our practices change as a result of the way we enfold technology. The advent of the railroad, mass-produced automobiles, radio, then television, the Internet, and the smartphone all qualify as disruptions.

Today, technology advances more quickly. Technological development was never linear, but because most of the tech advances of the last century were at the bottom of the exponential curve, we didn’t notice them. New technologies that are under development right now are going to be realized more quickly (especially the ones with big funding), and because of convergence (the intermixing of unrelated technologies), their consequences will be less predictable.
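
To make that bottom-of-the-curve point concrete, here is a small illustrative sketch. The doubling period and the units are my own toy assumptions, not anything claimed in this post: a capability that steadily doubles every decade barely registers for most of a century, then seems to explode at the end.

```python
# Illustrative only: a capability that doubles every ten years (an assumed,
# arbitrary rate) looks nearly flat for decades, then appears to explode.

DOUBLING_PERIOD_YEARS = 10  # assumption for the sketch

def capability(years_elapsed: float, start: float = 1.0) -> float:
    """Capability level after steady doubling every DOUBLING_PERIOD_YEARS."""
    return start * 2 ** (years_elapsed / DOUBLING_PERIOD_YEARS)

century_end = capability(100)
for year in range(0, 101, 20):
    share = capability(year) / century_end
    print(f"year {year:3d}: {share:7.2%} of the century-end level")

# Sixty years in, the curve is still at roughly 6% of its century-end value,
# which is why exponential change is easy to mistake for "nothing happening."
```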

One of my favorite futurists is Amy Webb, whom I have written about before. In her most recent newsletter, Amy reminds us that the Internet was clunky and vague long before it was disruptive. She states,

“However, our modern Internet was being built without the benefit of some vital voices: journalists, ethicists, economists, philosophers, social scientists. These outside voices would have undoubtedly warned of the probable rise of botnets, Internet trolls and Twitter diplomacy––would the architects of our modern internet have done anything differently if they’d confronted those scenarios?”

Amy inadvertently left out the design profession, though I’m sure she will reconsider after we chat. Indeed, the design profession is a key contributor to transformative tech, and design thinkers, along with the ethicists and economists, can help to visualize and reframe future visions.

Amy thinks that the next transformation will be our voice,

“From here forward, you can be expected to talk to machines for the rest of your life.”

Amy is referring to technologies like Alexa, Siri, Google, Cortana, and something coming soon called Bixby. The voices of these technologies are, of course, only the window dressing for artificial intelligence. But she astutely points out that,

“…we also know from our existing research that humans have a few bad habits. We continue to encode bias into our algorithms. And we like to talk smack to our machines. These machines are being trained not just to listen to us, but to learn from what we’re telling them.”

Such a merger might just be the mix of any technology (name one) with human nature or the human condition: AI meets Mike who lives across the hall. AI becoming acquainted with Mike may have been inevitable, but the fact that Mike happens to be a jerk was less predictable, and so the outcome is less predictable, too. The most significant disruptions of the future are going to come from the convergence of seemingly unrelated technologies. Sometimes innovation depends on convergence, like building an artificial human that will have to master a lot of different functions. Other times, convergence is accidental or at least unplanned. The engineers over at Boston Dynamics who are building those intimidating walking robots are focused on a narrower set of criteria than someone creating an artificial human. Perhaps power and agility are their primary concern. Then, in another lab, there are technologists working on voice stress analysis, and in another setting, researchers are looking to create an AI that can choose your wardrobe. Somewhere else we are working on facial recognition or Augmented Reality or Virtual Reality or bio-engineering, medical procedures, autonomous vehicles or autonomous weapons. So it’s a lot like Harry meets Sally: you’re not sure what you’re going to get or how it’s going to work.

Digital visionary Kevin Kelly thinks that AI will be at the core of the next industrial revolution. Place the prefix “smart” in front of anything, and you have a new application for AI: a smart car, a smart house, a smart pump. These seem like universally useful additions, so far. But now let’s add the same prefix to the jobs you and I do, like a doctor, lawyer, judge, designer, teacher, or policeman. (Here’s a possible use for that ominous walking robot.) And what happens when AI writes better code than coders and decides to rewrite itself?

Hopefully, you’re getting the picture. All of this underscores Amy Webb’s earlier concerns. The ‘journalists, ethicists, economists, philosophers, social scientists’ and designers are rarely in the labs where the future is taking place. Should we be doing something fundamentally different in our plans for innovative futures?

Side note: Convergence can happen in a lot of ways. The parent corporation of Boston Dynamics is X. I’ll use Wikipedia’s definition of X: “X, an American semi-secret research-and-development facility founded by Google in January 2010 as Google X, operates as a subsidiary of Alphabet Inc.”


Disruption. Part 1

 

We often associate the term disruption with a snag in our phone, internet or other infrastructure service, but there is a larger sense of the expression. Technological disruption refers to the phenomenon that occurs when innovation “…significantly alters the way that businesses operate. A disruptive technology may force companies to alter the way that they approach their business, risk losing market share or risk becoming irrelevant.”1

Some track the idea as far back as Karl Marx, who influenced economist Joseph Schumpeter to coin the term “creative destruction” in 1942.2 Schumpeter described it as the “process of industrial mutation that incessantly revolutionizes the economic structure from within, incessantly destroying the old one, incessantly creating a new one.” But it was Clayton M. Christensen, a Harvard Business School professor, who described its current framework: “…a disruptive technology is a new emerging technology that unexpectedly displaces an established one.”3

OK, so much for the history lesson. How does this affect us? Historical examples of technological disruption go back to the railroads and the mass-produced automobile, technologies that changed the world. Today we can point to the Internet as possibly this century’s most transformative technology to date. However, we can’t ignore the smartphone, barely ten years old, which has brought together a host of converging technologies, substantially eliminating the need for the calculator, the dictaphone, landlines, the GPS box that you used to put on your dashboard, still and video cameras, and possibly your privacy. With the proliferation of apps within the smartphone platform, there are hundreds if not thousands of other “services” that now do work that we had previously done by other means. But hold on to your hat. Technological disruption is just getting started. For the next round, we will see an increasingly pervasive Internet of Things (IoT), advanced robotics, exponential growth in Artificial Intelligence (AI) and machine learning, ubiquitous Augmented Reality (AR), Virtual Reality (VR), blockchain systems, precise genetic engineering, and advanced renewable energy systems. Some of these, such as blockchain systems, will have potentially cataclysmic effects on business. Widespread adoption of blockchain systems that enable digital money would eliminate the need for banks, credit card companies, and currency of all forms. How’s that for disruptive? Other innovations will just continue to transform us and our behaviors. Over the next few weeks, I will discuss some of these potential disruptions and their unique characteristics.
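
Since blockchain systems come up here as a potentially cataclysmic disruptor, a minimal sketch of the underlying idea may help: a blockchain is essentially a list of records in which each block commits to a cryptographic hash of the previous one, so tampering with any past entry breaks every link after it. The snippet below is illustrative only (plain Python, no network, mining, or consensus), not a description of any particular platform.

```python
import hashlib
import json


def block_hash(block: dict) -> str:
    """Hash a block's contents deterministically."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()


def add_block(chain: list, data: str) -> None:
    """Append a block that records the previous block's hash."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"index": len(chain), "data": data, "prev_hash": prev})


def is_valid(chain: list) -> bool:
    """The chain is valid only if every stored link still matches."""
    return all(
        chain[i]["prev_hash"] == block_hash(chain[i - 1]) for i in range(1, len(chain))
    )


chain: list = []
add_block(chain, "Alice pays Bob 5")
add_block(chain, "Bob pays Carol 2")
print(is_valid(chain))                   # True
chain[0]["data"] = "Alice pays Bob 500"  # tamper with history
print(is_valid(chain))                   # False: the altered block no longer matches
```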

Do you have any you would like to add?

1 http://www.investopedia.com/terms/d/disruptive-technology.asp#ixzz4ZKwSDIbm

2 http://www.investopedia.com/terms/c/creativedestruction.asp

3 http://www.intelligenthq.com/technology/12-disruptive-technologies/

See also: Disruptive technologies: Catching the wave, Journal of Product Innovation Management, Volume 13, Issue 1, 1996, Pages 75-76, ISSN 0737-6782, http://dx.doi.org/10.1016/0737-6782(96)81091-5.
(http://www.sciencedirect.com/science/article/pii/0737678296810915)


Thought leaders and followers.

 

Next week, the World Future Society is having its annual conference. As a member, I really should be going, but I can’t make it this year. The future is a dicey place. There are people convinced that we can create a utopia, some are warning of dystopia, and the rest are settled somewhere in between. Based on promotional emails that I have received, one of the topics is “The Future of Evolution and Human Nature.” According to the promo,

“The mixed emotions and cognitive dissonance that occur inside each of us also scale upward into our social fabric: implicit bias against new perspectives, disdain for people who represent “other”, the fear of a new world that is not the same as it has always been, and the hopelessness that we cannot solve our problems. We know from experience that this negativity, hatred, fear, and hopelessness is not what it seems like on the surface: it is a reaction to change. And indeed we are experiencing a period of profound change. There is a larger story of our evolution that extends well beyond the negativity and despair that feels so real to us today. It’s a story of redefining and building infrastructure around trust, hope and empathy. It’s a story of accelerating human imagination and leveraging it to create new and wondrous things.

It is a story of technological magic that will free us from scarcity and ensure a prosperous lifestyle for everyone, regardless of where they come from.”

Whoa. I have to admit, this kind of talk makes me uncomfortable. Are fear of a new world, negativity, and hatred simply reactions to change? Will technosocial magic solve all our problems? This type of rhetoric sounds more like a movement than a conference that examines differing views on an important topic. It would seem to frame caution as fear and negativity, and then we throw in that hyperbole, hatred. Does it sound like the beginning of an agenda with a framework that characterizes those who disagree as haters? I think it does. It’s a popular tactic.

These views do not by any means reflect the opinions of the entire WFS membership, but there is a significant contingent, such as the folks from Humanity+, who hold the belief that we can fix human evolution—even human nature—with technology. For me, this is treading into thorny territory.

What is human nature? Merriam-Webster online provides this definition:

“[…]the nature of humans; especially: the fundamental dispositions and traits of humans.” Presumably, we include good traits and bad traits. Will our discussions center on which features to fix and which to keep or enhance? Who will decide?

What about the human condition? Can we change this? Should we? According to Wikipedia,

“The human condition is “the characteristics, key events, and situations which compose the essentials of human existence, such as birth, growth, emotionality, aspiration, conflict, and mortality.” This is a very broad topic which has been and continues to be pondered and analyzed from many perspectives, including those of religion, philosophy, history, art, literature, anthropology, sociology, psychology, and biology.”

Clearly, there are a lot of different perspectives to be represented here. Do we honestly believe that technology will answer them all sufficiently? The theme of the upcoming WFS conference is “A Brighter Future IS Possible.” No doubt there will be a flurry of technosocial proposals presented there, and we should not put them aside as a bunch of fringe futurists. These voices are thought-leaders. They lead thinking. Are we thinking? Are we paying attention? If so, then it’s time to discuss and debate these issues, or others will decide without us.


Future Shock

 

As you no doubt have heard, Alvin Toffler died on June 27, 2016, at the age of 87. Mr. Toffler was a futurist. The book for which he is best known, Future Shock, was a best seller in 1970 and was considered required college reading at the time. In essence, Mr. Toffler said that the future would be a disorienting place if we just let it happen. He said we need to pay attention.

Credit: Susan Wood/Getty Images from The New York Times 2016

This week, The New York Times published an article entitled Why We Need to Pick Up Alvin Toffler’s Torch by Farhad Manjoo. As Manjoo observes, at one time (the 1950s, 1960s, and 1970s), the study of foresight and forecasting was important stuff that governments and corporations took seriously. Though I’m not sure I agree with Manjoo’s assessment of why that is no longer the case, I do agree that it is no longer the case.

“In many large ways, it’s almost as if we have collectively stopped planning for the future. Instead, we all just sort of bounce along in the present, caught in the headlights of a tomorrow pushed by a few large corporations and shaped by the inescapable logic of hyper-efficiency — a future heading straight for us. It’s not just future shock; we now have future blindness.”

At one time, this was required reading.

When I attended the First International Conference on Anticipation in 2015, I was pleased to discover that the blindness was not everywhere. In fact, many of the people deeply rooted in the latest innovations in science and technology, architecture, social science, medicine, and a hundred other fields are very interested in the future. They see an urgency. But most governments don’t, and I fear that most corporations, even the tech giants, are more interested in being first with the next zillion-dollar technology than in asking whether that technology is the right thing to do. Even less are they asking what repercussions might flow from these advancements, or what the ramifications of today’s decision-making are. We just don’t think that way.

I don’t believe that has to be the case. The World Future Society, for example, at its upcoming conference in Washington, DC, will be addressing the idea of futures studies as a requirement for high school education. They ask,

“Isn’t it surprising that mainstream education offers so little teaching on foresight? Were you exposed to futures thinking when you were in high school or college? Are your children or grandchildren taught how decisions can be made using scenario planning, for example? Or take part in discussions about what alternative futures might look like? In a complex, uncertain world, what more might higher education do to promote a Futurist Mindset?”

It certainly needs to be part of design education, and it is one of the things I vigorously promote at my university.

As Manjoo sums up in his NYT article,

“Of course, the future doesn’t stop coming just because you stop planning for it. Technological change has only sped up since the 1990s. Notwithstanding questions about its impact on the economy, there seems no debate that advances in hardware, software and biomedicine have led to seismic changes in how most of the world lives and works — and will continue to do so.

Yet, without soliciting advice from a class of professionals charged with thinking systematically about the future, we risk rushing into tomorrow headlong, without a plan.”

And if that isn’t just crazy, at the very least it’s dangerous.

 

 


Logical succession, the final installment.

For the past couple of weeks, I have been discussing the idea posited by Ray Kurzweil, that we will have linked our neocortex to the Cloud by 2030. That’s less than 15 years, so I have been asking how that could come to pass with so many technological obstacles in the way. When you make a prediction of that sort, I believe you need a bit more than faith in the exponential curve of “accelerating returns.”

This week I’m not going to take issue with the enormous leap forward in nanobot technology required to accomplish such a feat. Nor am I going to question the vastly complicated tasks of connecting to the neocortex, extracting anything coherent, assembling memories and consciousness, and, in turn, beaming it all to the Cloud. Instead, I’m going to pose the question: why would we want to do this in the first place?

According to Kurzweil, in a talk last year at Singularity University,

“We’re going to be funnier. We’re going to be sexier. We’re going to be better at expressing loving sentiment…” 1

Another brilliant futurist, and friend of Ray’s, Peter Diamandis, includes these additional benefits:

• Brain to Brain Communication – aka Telepathy
• Instant Knowledge – download anything, complex math, how to fly a plane, or speak another language
• Access More Powerful Computing – through the Cloud
• Tap Into Any Virtual World – no visor, no controls. Your neocortex thinks you are there.
• And more, including an extended immune system, expandable and searchable memories, and “higher-order existence.”2

As Kurzweil explains,

“So as we evolve, we become closer to God. Evolution is a spiritual process. There is beauty and love and creativity and intelligence in the world — it all comes from the neocortex. So we’re going to expand the brain’s neocortex and become more godlike.”1

The future sounds quite remarkable. My issue lies with Koestler’s “ghost in the machine,” or what I call humankind’s uncanny ability to foul things up. Diamandis’ list could easily spin this way:

  • Brain-To-Brain hacking – reading others thoughts
  • Instant Knowledge – to deceive, to steal, to subvert, or hijack.
  • Access to More Powerful Computing – to gain the advantage or any of the previous list.
  • Tap Into Any Virtual World – experience the criminal, the evil, the debauched and not go to jail for it.

You get the idea. Diamandis concludes, “If this future becomes reality, connected humans are going to change everything. We need to discuss the implications in order to make the right decisions now so that we are prepared for the future.”

Nevertheless, we race forward. We discovered this week that “A British researcher has received permission to use a powerful new genome-editing technique on human embryos, even though researchers throughout the world are observing a voluntary moratorium on making changes to DNA that could be passed down to subsequent generations.”3 That would be Crispr-Cas9.

It was way back in 1968 that Stewart Brand introduced The Whole Earth Catalog with, “We are as gods and might as well get good at it.”

Which lab is working on that?

 

1. http://www.huffingtonpost.com/entry/ray-kurzweil-nanobots-brain-godlike_us_560555a0e4b0af3706dbe1e2
2. http://singularityhub.com/2015/10/12/ray-kurzweils-wildest-prediction-nanobots-will-plug-our-brains-into-the-web-by-the-2030s/
3. http://www.nytimes.com/2016/02/02/health/crispr-gene-editing-human-embryos-kathy-niakan-britain.html?_r=0

Logical succession, Part 2.

Last week the topic was Ray Kurzweil’s prediction that by 2030, not only would we send nanobots into our bloodstream by way of the capillaries, but they would target the neocortex, set up shop, connect to our brains and beam our thoughts and other contents into the Cloud (somewhere). Kurzweil is no crackpot. He is a brilliant scientist, inventor and futurist with an 86 percent accuracy rate on his predictions. Nevertheless, and perhaps presumptuously, I took issue with his prediction, but only because there was an absence of a logical succession. According to Coates,

“…the single most important way in which one comes to an understanding of the future, whether that is working alone, in a team, or drawing on other people… is through plausible reasoning, that is, putting together what you know to create a path leading to one or several new states or conditions, at a distance in time” (Coates 2010, p. 1436).1

Kurzweil’s argument is based heavily on his Law of Accelerating Returns, which says (essentially), “We won’t experience 100 years of progress in the 21st century; it will be more like 20,000 years of progress (at today’s rate).” The rest, in the absence of more detail, must be based on faith. Faith, perhaps, in the fact that we are making considerable progress in architecting nanobots, or that we see promising breakthroughs in mind-to-computer communication. But what seems to be missing is the connection part. Not so much connecting to the brain, but beaming the contents somewhere. Another question, why, also comes to mind, but I’ll get to that later.
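
For readers wondering where a figure like 20,000 comes from, here is a rough back-of-the-envelope sketch. Both inputs are assumptions paraphrased from Kurzweil’s essay, not claims made in this post: that the 20th century amounted to about 20 years of progress measured at the year-2000 rate, and that the rate of progress doubles every decade.

```python
# Back-of-the-envelope reading of the "Law of Accelerating Returns" claim.
# Both inputs below are assumptions paraphrased from Kurzweil, not from this post.

DOUBLING_PERIOD_YEARS = 10       # assumed: the rate of progress doubles every decade
C20_PROGRESS_AT_2000_RATE = 20   # assumed: the 20th century ~ 20 years of progress
                                 # measured at the year-2000 rate

# With a decade-long doubling period, a century contains ten doublings, so the
# 21st century packs in roughly 2**10 (~1000x) as much change as the 20th did.
ratio = 2 ** (100 // DOUBLING_PERIOD_YEARS)
c21_progress = C20_PROGRESS_AT_2000_RATE * ratio

print(f"~{ratio}x the 20th century, or ~{c21_progress:,} years at the year-2000 rate")
# -> ~1024x, i.e. on the order of 20,000 "years" of progress at today's rate
```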

There is something about all of this technological optimism that furrows my brow. A recent article in WIRED helped me to articulate this skepticism. The rather lengthy article chronicled the story of neurologist Phil Kennedy, who, like Kurzweil, believes that the day is soon approaching when we will connect or transfer our brains to other things. I can’t help but call to mind what one-time Fed Chairman Alan Greenspan called “irrational exuberance.” The WIRED article tells of how Kennedy nearly lost his mind by experimenting on himself (including rogue brain surgery in Belize) to implant a host of hardware that would transmit his thoughts. This highly invasive method, the article says, is going out of style, but the promise seems to be the same for both scientists: our brains will be infinitely more powerful than they are today.

Writing in WIRED, columnist Daniel Engber makes an astute statement. During an interview with Dr. Kennedy, they attempted to watch a DVD of Kennedy’s Belize brain surgery. The DVD player and laptop choked for some reason, and only after repeated attempts were they able to view Dr. Kennedy’s naked brain undergoing surgery. Reflecting on the mundane struggles with technology that preceded the movie, Engber notes, “It seems like technology always finds new and better ways to disappoint us, even as it grows more advanced every year.”

Dr. Kennedy’s saga was all about getting thoughts into text, or even synthetic speech. Today, the invasive method of sticking electrodes into your cerebral putty has been replaced by a kind of electrode mesh that lies on top of the cortex underneath the skull. They call this less invasive. Researchers have managed to get some results from this, albeit snippets with numerous inaccuracies. They say it will be decades, and one of them points out that even Siri still gets it wrong more than 30 years after the debut of speech recognition technology. So, then, it must be Kurzweil’s exponential law that still provides near-term hope for these scientists. As I often quote Tobias Revell, “Someone somewhere in a lab is playing with your future.”

There remain a few more nagging questions for me. What is so feeble about our brains that we need them to be infinitely more powerful? When is enough, enough? And, what could possibly go wrong with this scenario?

Next week.

 

1. Coates, J.F., 2010. The future of foresight—A US perspective. Technological Forecasting & Social Change 77, 1428–1437.