Tag Archives: ethics

Corporate Sci-Fi.

Note: Also published on LinkedIn

 

Why your company needs to play in the future.

As a professor of design and a design fiction researcher, I write academic papers and blog weekly about the future. I teach about the future of design, and I create future scenarios, sometimes with my students, that provoke us to look at what we are doing, what we are making, why we are making it, and the ramifications that inevitably follow. Primarily, I try to focus both designers and decision makers on the steps they can take today to keep from being blindsided tomorrow. Futurists seem to be all the rage these days, telling us to prepare for the Singularity, for autonomous everything, or for robots taking our jobs. Recently, Jennifer Doudna, co-inventor of the gene-editing technique called CRISPR-Cas9, has been making the rounds and sounding the alarm that technology is moving so fast that we aren’t going to be able to contain a host of unforeseen (and foreseen) consequences inside Pandora’s box. This concern should extend beyond the bioengineering fields, however, into virtually any arena where technology is racing forward, fueled by venture capital and the desperate need to stay on top of whatever space we are playing in. There is a lot at stake. Technology has already redefined privacy, behavioral wellness, personal autonomy, healthcare, labor, and maybe even our humanness, just to name a few.

Several recent articles have highlighted the changing world of design and how the pressure is on designers to make user adoption more like user addiction to ensure the success of a product or app. The world of behavioral economics is becoming a new arena in which we are using algorithms to manipulate users. Some designers are passing the buck to the clients or corporations that employ them for the questionable ethics of addictive products; others feel compelled to step aside and work on less lucrative projects or apply their skills to social causes. Most really care and want to help. But designers are uniquely positioned and trained to tackle these wicked problems—if we would collaborate with them.

Beyond the companies that might be deliberately trying to manipulate us are those that unknowingly, or at least unintentionally, transform our behaviors in ways that are potentially harmful. Traditionally, we seek to hold someone responsible when a product or service is faulty: the physician for malpractice, the designer or manufacturer when a toy causes injury, a garment falls apart, or an appliance self-destructs. But as we move toward systemic designs that are less physical and more emotional, behavioral, or biological, design faults may not be so easy to identify, and their repercussions may become noticeable only after serious issues have arisen. In fact, many of the apps and operating systems we launch today ship with admitted errors and bugs. Designers rely on real-life testing to identify problems and then issue patches, revisions, and versions.

In the realm of nanotechnology, while scientists and thought leaders have proposed guidelines and best practices, research and development teams in labs around the world race forward without regulation, creating molecule-sized structures, machines, and substances with no idea whether they are safe or what the long-term effects of exposure might be. In biotechnology, while folks like Jennifer Doudna appeal to the ethics of researchers to tread carefully in the realm of genetic engineering (especially when it comes to heritable gene manipulation), those morals and ethics are not universally shared. Recent headlines attest to the fact that some scientists are bent on moving forward regardless of the implications.

Some technologies, such as our smartphones, have become equally invasive, yet we now consider them mundane. In just ten years since the introduction of the iPhone, we have transformed behaviors, upended our modes of communication, redefined privacy, distracted our attention, distorted reality, and manipulated a predicted 2.3 billion users as of 2017. [1] It is worth contemplating that this disruption comes not from a faulty product, but from one that can only be considered wildly successful.

A plethora of additional technologies is poised to redefine our world yet again, including artificial intelligence, ubiquitous surveillance, human augmentation, robotics, virtual, augmented, and mixed reality, and the pervasive Internet of Things. Many of these technologies make their way into our experiences through the promise of better living, medical breakthroughs, or a safer and more secure life. But too often we ignore the potential downsides, the unintended consequences, or the systemic ripple effects that these technologies spawn. Why?

In many cases, we do not want to stand in the way of progress. In others, we believe that the benefits outweigh the disadvantages, yet this is the same thinking that has spawned some of our most complex and daunting systems, from nuclear weapons to air travel and the internal combustion engine. Each of these began with the best of intentions and, in many ways, was as successful and initially beneficial as it could be. At the same time, each advanced and proliferated far more rapidly than we were prepared to accommodate. Dirty bombs are a reality we did not expect. The alluring efficiency with which we can fly from one city to another has nevertheless spawned a gnarly network of air traffic, baggage logistics, and anti-terrorism measures that are arguably more elaborate than getting an aircraft off the ground. Traffic, freeways, infrastructure, safety, and the drain on natural resources are complexities never imagined with the revolution of personal transportation. We didn’t see the entailments of success.

This is not always the case. There have often been scientists and thought leaders waving the yellow flag of caution. I have written about how, “back in 1975, scientists and researchers got together at Asilomar because they saw the handwriting on the wall. They drew up a set of resolutions to make sure that one day the promise of Bioengineering (still a glimmer in their eyes) would not get out of hand.”[2] Indeed, researchers like Jennifer Doudna continue to carry the banner. A similar conference took place some years ago to alert us to the potential dangers of nanotechnology, and earlier this year another put forth recommendations and guidelines to ensure that when machines are smarter than we are, they carry on in a beneficent role. Too often, however, it is only the scientists and visionaries who attend these conferences. [3] Noticeably absent, though not always, is corporate leadership.

Nevertheless, in this country there remains no safeguarding regulation for nanotech, bioengineering, or AI research. It is a free-for-all, and any of these fields could massively disrupt not only our lifestyles but also our culture, our behavior, and our humanness. Who is responsible?

For nearly 40 years there has been an environmental movement that has spread globally. Good stewardship is a good idea. But it wasn’t until most corporations saw a way for it to make economic sense that they began to focus on it and then promote it as their contribution to society, their responsibility, and their civic duty. As well intentioned as some may be (and many are), many more are not paying attention to the effect of their technological achievements on our human condition.

We design most technologies with a combination of perceived user need and commercial potential. In many cases, these are coupled with more altruistic motivations such as a “do no harm” commitment to the environment and fair labor practices. As we move toward the capability to change ourselves in fundamental ways, are we also giving significant thought to the behaviors that we will engender by such innovations, or the resulting implications for society, culture, and the interconnectedness of everything?

Enter Humane Technology

Ultimately we will have to demand this level of thought, beginning with ourselves. But we should not fight this alone. Corporations concerned with appearing sensitive and proactive toward the environment and social justice need to add a new pillar to their edifice as responsible global citizens: humane technology.

Humane technology considers the socio-behavioral ramifications of products and services: digital dependencies and addictions, job loss, genetic repercussions, and the human impact of nanotechnologies, AI, and the Internet of Things.

To whom do we turn when a 14-year-old becomes addicted to her smartphone or obsessed with her social media popularity? We could condemn the parents for lack of supervision, but many of them are equally distracted. Who is responsible when a drone is misused to vandalize property or fire a gun, or for the anticipated 1 billion drones flying around by 2030? [4] Who will answer for the repercussions of artificial intelligence that spouts hate speech? Where will the buck stop when genetic profiling becomes a requirement for getting insured or getting a job?

While the backlash against these types of unintended consequences or unforeseen circumstances is not yet widespread and citizens have not taken to the streets in mass protests, behavioral and social changes like these may be imminent as a result of dozens of transformational technologies currently under development in labs and R&D departments across the globe. Who is looking at the unforeseen or the unintended? Who is paying attention, and who is turning a blind eye?

It was possible to have anticipated texting and driving. It is possible to anticipate a host of horrific side effects from nanotechnology on both humans and the environment. It is possible to tag the ever-present bad actor to any number of new technologies. It is possible to identify when the race to master artificial intelligence may be coming at the expense of making it safe, or of knowing where to draw the line. In fact, it is a marketing opportunity for corporate interests to take the lead and leverage their efforts to preempt adverse side effects as a distinctive advantage.

Emphasizing humane technology is an automatic benefit for an ethical company, and for those more concerned with profit than ethics (just between you and me), it offers the opportunity for a better brand image and, at least, the appearance of social concern. Whatever the motivation, we are looking at a future where we are either prepared for what happens next, or we are caught napping.

This responsibility should start with anticipatory methodologies that examine the social, cultural, and behavioral ramifications, and unintended consequences, of what we create. Designers and those trained in design research are excellent collaborators. My brand of design fiction is intended to take us into the future in an immersive and visceral way, to provoke the necessary discussion and debate that anticipate the storm should there be one; promising utopia is rarely the tinder to fuel a provocation. Design fiction embraces the art of critical thinking and thought problems as a means of anticipating conflict and complexity before these become problems to be solved.

Ultimately, we have to depart from the idea that technology will be the magic pill to solve the ills of humanity. Design fiction and other anticipatory methodologies can help us acknowledge our humanness and our propensity to foul things up. If we do not self-regulate, regulation will inevitably follow, probably spurred on by some unspeakable tragedy. There is an opportunity now for the corporation to step up to the future with a responsible, thoughtful compassion for our humanity.

 

 

1. https://www.statista.com/statistics/330695/number-of-smartphone-users-worldwide/

2. http://theenvisionist.com/2017/08/04/now-2/

3. http://theenvisionist.com/2017/03/24/genius-panel-concerned/

4. http://www.abc.net.au/news/2017-08-31/world-of-drones-congress-brisbane-futurist-thomas-frey/8859008


How should we talk about the future?

 

Imagine that there are two camps. One camp holds high confidence that the future will be manifestly bright and promising in all aspects of human endeavor. Our health will dramatically improve as we eradicate disease and possibly even death. Artificial Intelligence will be at our beck and call to make our tough decisions, order our lives, fight our wars, watch over us, and keep us safe. Hence, it is full speed ahead. The positives outweigh the negatives. Any missteps will be but a minor hiccup, and we’ll cross those bridges when we come to them.

The second camp believes that many of these promises are achievable. But they also believe that we are beginning to see strong evidence that technology is indeed moving exponentially, and that we are at a point on the curve where what many experts have categorized as impossible or a “long way off” is now knocking at our door.

Kurzweil’s Law of Accelerating Returns is proving remarkably accurate. Sure, we adapted from the horse and buggy to the automobile, and from there to air travel, to an irritatingly resilient nuclear threat, to computers, to smartphones, and to DNA sequencing. But each of these changes has arrived more rapidly than its predecessors.

“‘As exponential growth continues to accelerate into the first half of the twenty-first century,’ [Kurzweil] writes, ‘it will appear to explode into infinity, at least from the limited and linear perspective of contemporary humans.’”1

The second camp sees this rapid-fire proliferation as alarming. Not because we will get to utopia faster, but because we will be standing in the midst of a host of disruptive technologies all coming to fruition at the same time without the benefit of meaningful oversight or the engagement of our societies.

I am in the second camp.

Last week, I talked about genetic engineering. The designer-baby question was always pushed aside as a long way off. Not anymore. That’s just one change. Our privacy is also being severely compromised in the form of “big data” harvested from seemingly innocent pastimes such as Facebook. According to security technologist Bruce Schneier,

“Facebook can predict race, personality, sexual orientation, political ideology, relationship status, and drug use on the basis of Like clicks alone. The company knows you’re engaged before you announce it, and gay before you come out—and its postings may reveal that to other people without your knowledge or permission. Depending on the country you live in, that could merely be a major personal embarrassment—or it could get you killed.”

Facebook is just one of the seemingly benign things we do every day. By now, most of us consider that using our smartphones for 75 percent of our day is also harmless, though we would also have to agree that it has changed us personally, behaviorally, and societally. And while the societal outcry against designer babies has been noticeable since last week’s stories about CRISPR-Cas9 gene splicing with human embryos, how long will it be before we accept it as the norm, feel pressure in our own families to participate to stay competitive, or maybe even just to stay insured?

The fact is that we like to think that we can adapt to anything. To some extent, we pride ourselves on this resilience. Unfortunately, that seems to suggest that we are also powerless to affect these technologies and that we have no say in when, if, or whether we should make them in the first place. Should we be proud of the fact that we are adapting to a complete lack of privacy, to the likelihood of terrorism or being replaced by an AI? These are my questions.

So I am encouraged when others also raise these questions. Recently, the tech media, which seems to be perpetually enamored of folks like Mark Zuckerberg and Elon Musk, called Zuckerberg a “bad futurist” because of his overoptimistic view of the future.

The article came from the Huffington Post’s Rebecca Searles. According to Searles,

“Elon Musk’s doomsday AI predictions aren’t “irresponsible,” but Mark Zuckerberg’s techno-optimism is.”3

According to a Zuckerberg podcast,

“…people who are arguing for slowing down the process of building AI, I just find that really questionable… If you’re arguing against AI, then you’re arguing against safer cars that aren’t going to have accidents and you’re arguing against being able to better diagnose people when they’re sick.”3

Technology hawks are always promising “safer” and “healthier” as their rationale for unimpeded acceleration. I’m sure that’s the rah-rah rationale for designer babies, too. Think of all the illnesses we will be able to breed out of the human race. Searles and I agree that negative outcomes deserve equally serious consideration, and not after they happen. As she aptly puts it,

“Tackling tech challenges with a build-it-and-see-what-happens approach (a la Zuckerberg’s former “move fast and break things” development mantra) just isn’t suitable for AI.”

The problem is that Zuckerberg is not alone, nor is last week’s Shoukhrat Mitalipov. Ultimately, this reality of two camps is the rationale behind my approach to design fiction. As you know, the objective of design fiction is to provoke. Promising utopia is rarely the tinder to fuel a provocation.

Let’s remember Charles Dickens’ story of Ebenezer Scrooge. The ghost of Christmas past takes him back in time where, for the first time, he sees the truth about his past. But this revelation does not change him. Then the ghost of Christmas present opens his eyes to everything around him that he is blind to in the present. Still, Scrooge is unaffected. And finally, the ghost of Christmas future takes him into the future, and it is here that Scrooge sees the days to come as “the way it will be” unless he changes something now.

Somehow, I think the outcome would have been different if that last ghost had said, “Don’t worry. You’ll adapt.”

Let’s not talk about the future in purely utopian terms nor total doom-and-gloom. The future will not be like one or the other any more than is the present day. But let us not be blind to our infinite capacity to foul things up, to the potential of bad actors or the inevitability of unanticipated consequences. If we have any hope of meeting our future with the altruistic image of a utopian society, let us go forward with eyes open.

 

1. http://www.businessinsider.com/ray-kurzweil-law-of-accelerating-returns-2015-5

2. “Data and Goliath: The Hidden Battles to Collect Your Data and Control Your World”

3. http://www.huffingtonpost.com/entry/mark-zuckerberg-is-a-bad-futurist_us_5979295ae4b09982b73761f0


What now?

 

If you follow this blog, you know that I like to say that the rationale behind design fiction—provocations that get us to think about the future—is to ask, “What if?” now so that we don’t have to ask “What now?”, then. This is especially important as our technologies begin to meddle with the primal forces of nature, where we naively anoint ourselves as gods and blithely march forward—because we can.

The CRISPR-Cas9 technology caught my eye almost exactly two years ago today through a WIRED article by Amy Maxmen. I wrote about it then as an awesomely powerful tool for astounding progress for the good of humanity, while at the same time one that takes us down a slippery slope. As Maxmen stated,

“It could, at last, allow genetics researchers to conjure everything anyone has ever worried they would—designer babies, invasive mutants, species-specific bioweapons, and a dozen other apocalyptic sci-fi tropes.”

The article chronicles how, back in 1975, scientists and researchers got together at Asilomar because they saw the handwriting on the wall. They drew up a set of resolutions to make sure that one day the promise of Bioengineering (still a glimmer in their eyes) would not get out of hand.

Forty years later, what was only a glimmer had become a reality. So, in 2015, some of these researchers came together again to discuss the implications of a new technique called CRISPR-Cas9. It was just a few years after Jennifer Doudna and Emmanuelle Charpentier figured out the elegant tool for genome editing. Again from Maxmen,

“On June 28, 2012, Doudna’s team published its results in Science. In the paper and an earlier corresponding patent application, they suggest their technology could be a tool for genome engineering. It was elegant and cheap. A grad student could do it.”

In 2015 it was Doudna herself who called the meeting, this time in Napa, to discuss the ethical ramifications of CRISPR. Their biggest concern was what they call germline modifications—the stuff that gets passed on from generation to generation, substantially changing humans forever. In September of 2015, Doudna gave a TED Talk asking the scientific community to pause and discuss the ethics of this new tool before rushing in. On the heels of that, the US National Academy of Sciences said it would work on a set of “recommendations” for researchers and scientists to follow. No laws, just recommendations.

Fast forward to July 26, 2017. MIT Technology Review reported:

“The first known attempt at creating genetically modified human embryos in the United States has been carried out by a team of researchers in Portland, Oregon… Although none of the embryos were allowed to develop for more than a few days—and there was never any intention of implanting them into a womb—the experiments are a milestone on what may prove to be an inevitable journey toward the birth of the first genetically modified humans.”

MIT’s article was thin on details because the actual paper that delineated the experiment was not yet published. Then, this week, it was. This time it was, indeed, a germline objective.

“…because any genetically modified child would then pass the changes on to subsequent generations via their own germ cells—the egg and sperm.”(ibid).

All this was led by fringe researcher Shoukhrat Mitalipov of Oregon Health and Science University, and WIRED was quick to provide more info, but in two different articles.

The first of these stories appeared last Friday and gave more specifics on Mitalipov than on the actual experiment.

“the same guy who first cloned embryonic stem cells in humans. And came up with three-parent in-vitro fertilization. And moved his research on replacing defective mitochondria in human eggs to China when the NIH declined to fund his work. Throughout his career, Mitalipov has gleefully played the role of mad scientist, courting controversy all along the way (sic).”

In the second article, we discover what the mad scientist was trying to do. In essence, Mitalipov demonstrated a highly efficient replacement of mutated genes like MYBPC3, which is responsible for a heart condition called “hypertrophic cardiomyopathy that affects one in 500 people—the most common cause of sudden death among young athletes.” Highly efficient means that in 42 out of 58 attempts, the problem gene was removed and replaced with a normal one. Mitalipov believes that he can get this to 100%. This means that fixing genetic mutations can be done successfully and maybe even become routine in the near future. But WIRED points out that

“would require lengthy clinical trials—something a rider in the current Congressional Appropriations Act has explicitly forbidden the Food and Drug Administration from even considering.”

Ah, but this is not a problem for our fringe mad scientist.

“Mitalipov said he’d have no problem going elsewhere to run the tests, as he did previously with his three-person IVF work.”

Do we see a pattern here? One surprising thing that the study revealed was that,

“Of the 42 successfully corrected embryos, only one of them used the supplied template to make a normal strand of DNA. When Crispr cut out the paternal copy—the mutant one—it left behind a gap, ready to be rebuilt by the cell’s repair machinery. But instead of grabbing the normal template DNA that had been injected with the sperm and Crispr protein, 41 embryos borrowed the normal maternal copy of MYBPC3 to rebuild its gene.”

In other words, the cell said, thanks for your stinking code but we’ll handle this. It appears as though cellular repair may have a mission plan of its own. That’s the mysterious part that reminds us that there is still something miraculous going on here behind the scenes. Mitalipov thinks he and his team can force these arrogant cells to follow instructions.

So what now? With this we have more evidence that guidelines and recommendations, clear heads and cautionary voices are not enough to stop scientists and researchers on the fringe, governments with dubious ethics, or whoever else might want to give things a whirl.

That puts noble efforts like Asilomar in 1975, a similar conference some years ago on nanotechnology, and one earlier this year on Artificial Intelligence as simply that: noble efforts. Why do these conferences occur in the first place? Because scientists are genuinely worried that we’re going to extinct ourselves if we aren’t careful. But technology is racing down the autobahn, folks, and we can’t expect the people who stand to become billionaires from their discoveries to be the same people policing their actions.

And this is only one of the many transformative technologies looming on the horizon. While everyone is squawking about the Paris Accords, why don’t we marshal some of our righteous indignation and pull the world together to agree on some meaningful oversight of these technologies?

We’ve gone from “What if?” to “What now?” Are we going to avoid “Oh, shit!”?

  1. https://www.wired.com/2015/07/crispr-dna-editing-2/?mbid=nl_72815

2. http://wp.me/p7yvqL-mt

3. https://www.technologyreview.com/s/608350/first-human-embryos-edited-in-us/?set=608342

4. https://www.wired.com/story/scientists-crispr-the-first-human-embryos-in-the-us-maybe/?mbid=social_twitter_onsiteshare

5. https://www.wired.com/story/first-us-crispr-edited-embryos-suggest-superbabies-wont-come-easy/?mbid=nl_8217_p9&CNDID=49614846


Watching and listening.

 

Pay no attention to Alexa, she’s an AI.

There was a flurry of reports from dozens of news sources (including CNN) last week that an Amazon Echo (Alexa) called the police during a New Mexico incident of domestic violence. The alleged call triggered a SWAT standoff, and the victim’s boyfriend was eventually arrested. Interesting story, but after a fact-check, that could not be what happened. Several sources, including the New York Times and WIRED, debunked the story with details on how Alexa calling 911 is technologically impossible, at least for now. And although the Bernalillo County, New Mexico, Sheriff’s Department swears to it, according to WIRED,

“Someone called the police that day. It just wasn’t Alexa.”

Even Amazon agrees, via a spokesperson’s email:

“The receiving end would also need to have an Echo device or the Alexa app connected to Wi-Fi or mobile data, and they would need to have Alexa calling/messaging set up,”1

So it didn’t happen, but most agree that while it may be technologically impossible today, it probably won’t be for very long. The provocative side of the WIRED article proposed this thought:

“The Bernalillo County incident almost certainly had nothing to do with Alexa. But it presents an opportunity to think about issues and abilities that will become real sooner than you might think.”

On the upside, some see benefits from the ability of Alexa to intervene in a domestic dispute that could turn lethal, but they fear something called “false positives.” Could an offhanded comment prompt Alexa to call the police? And if it did, would you feel as though Alexa had overstepped her bounds?

Others see the potential in suicide prevention. Alexa could calm you down or make suggestions for ways to move beyond the urge to die.

But as we contemplate opening this door, we need to acknowledge that we’re letting these devices listen to us 24/7 and giving them the permission to make decisions on our behalf whether we want them to or not. The WIRED article also included a comment from Evan Selinger of RIT (whom I’ve quoted before).

“Cyberservants will exhibit mission creep over time. They’ll take on more and more functions. And they’ll habituate us to become increasingly comfortable with always-on environments listening to our intimate spaces.”

These technologies start out as warm and fuzzy (see the video below), but as they become part of our lives, they can change us, and not always for the good. This idea is something I contemplated a couple of years ago with my Ubiquitous Surveillance future. In that case, the invasion was not a listening device but a camera (already part of Amazon’s Echo Look). You can check that out and do your own provocation by visiting the link.

I’m glad that there are people like Susan Liautaud (who I wrote about last week) and Evan Selinger who are thinking about the effects of technology on society, but I still fear most of us take the stance of Dan Reidenberg, who is also quoted in the WIRED piece.

“I don’t think we can avoid this. This is where it is going to go. It is really about us adapting to that,” he says.

 

Nonsense! That’s like getting in the car with a drunk driver and then doing your best to adapt. Nobody is putting a gun to your head to get into the car. There are decisions to be made here, and they don’t have to be made after the technology has created seemingly insurmountable problems or intrusions in our lives. The companies that make them should be having these discussions now, and we should be invited to share our opinions.

What do you think?

 

  1. http://wccftech.com/alexa-echo-calling-911/

Ethical tech.

Though I tinge most of my blogs with ethical questions, the last time I brought up this topic specifically was back in 2015. I guess I am ready to give it another go. Ethics is a tough topic. If we deal with it purely superficially, ethics would seem natural, like common sense, or the right thing to do. But if that’s the case, why do so many people do the wrong thing? Things get even more complicated when we move into institutionally complex issues like banking, governing, technology, genetics, health care, or national defense, just to name a few.

The last time I wrote about this, I highlighted Michael Sandel, Professor of Philosophy and Government at Harvard’s Law School, where he teaches a wildly popular course called “Justice.” Then, I was glad to see that the big questions were still being addressed in places like Harvard. Some of his questions then, which came from a FastCo article, were:

“Is it right to take from the rich and give to the poor? Is it right to legislate personal safety? Can torture ever be justified? Should we try to live forever? Buy our way to the head of the line? Create perfect children?”

These are undoubtedly important and prescient questions to ask, especially as we are beginning to confront technologies that make things which were formerly inconceivable or plain impossible, not only possible but likely.

So I was pleased to see, last month, an op-ed piece in WIRED by Susan Liautaud, founder of The Ethics Incubator. Susan is about as closely aligned with my tech concerns as anyone I have read. And she brings solid thinking to the issues.

“Technology is approaching the man-machine and man-animal boundaries. And with this, society may be leaping into humanity defining innovation without the equivalent of a constitutional convention to decide who should have the authority to decide whether, when, and how these innovations are released into society. What are the ethical ramifications? What checks and balances might be important?”

Her comments are right in line with my research and co-research into Humane Technologies. Liautaud continues:

“Increasingly, the people and companies with the technological or scientific ability to create new products or innovations are de facto making policy decisions that affect human safety and society. But these decisions are often based on the creator’s intent for the product, and they don’t always take into account its potential risks and unforeseen uses. What if gene-editing is diverted for terrorist ends? What if human-pig chimeras mate? What if citizens prefer to see birds rather than flying cars when they look out a window? (Apparently, this is a real risk. Uber plans to offer flight-hailing apps by 2020.) What if Echo Look leads to mental health issues for teenagers? Who bears responsibility for the consequences?”

For me, the answer to that last question is all of us. We should not rely on business and industry to make these decisions, nor expect our government to do it. We have to become involved in these issues at the public level.

Michael Sandel believes that the public is hungry for these issues, but we tend to shy away from them. They can be confrontational and divisive, and no one wants to make waves or be politically incorrect. That’s a mistake.

An image from the future. A student design fiction project that examined ubiquitous AR.

So while the last thing I want is a politician or CEO making these decisions, these two constituencies could do the responsible thing and create forums for these discussions so that the public can weigh in on them. To do anything less borders on arrogance.

Ultimately we will have to demand this level of thought, beginning with ourselves. This responsibility should start with anticipatory methodologies that examine the social, cultural and behavioral ramifications, and unintended consequences of what we create.

But we should not fight this alone. Corporations and governments concerned with appearing sensitive and proactive toward the environment and social justice need to add a new pillar to their edifice as responsible global citizens: humane technology.

 


An example of impending convergence.

 

The IBM Research Alliance and partners have announced this week that they have developed “…an industry-first process to build silicon nanosheet transistors that will enable 5 nanometer (nm) chips – achieving a scale of 30 billion switches on a fingernail-sized chip that will deliver significant power and performance enhancements over today’s state-of-the-art 10nm chips.”

Silicon nanosheet transistors at 5nm

Along with this new development come, of course, promises that the technology

“…can deliver 40 percent performance enhancement at fixed power, or 75 percent power savings at matched performance. This improvement enables a significant boost to meeting the future demands of artificial intelligence (AI) systems, virtual reality and mobile devices.”

That’s a lot of tech-speak, but essentially it means your computing will happen faster, and your devices will be more powerful and use less battery power.

In a previous blog, I discussed the nanometer idea.

“A nanometer is very small. Nanotech concerns itself with creations that exist in the 100nm range and below, roughly 7,500 times smaller than a human hair. In the Moore’s Law race, nanothings are the next frontier in cramming data onto a computer chip, or implanting them into our brains or living cells.”

Right now, IBM and their partners see this new development as a big plus to the future of their cognitive systems. What are cognitive systems?

IBM can answer that:

“Humans are on the cusp of augmenting their lives in extraordinary ways with AI. At IBM Research Labs around the globe, we envision and develop next-generation systems that work side-by-side with humans, accelerating our ability to create, learn, make decisions and think. We also architect the future of Watson, which has evolved from an IBM Research project to the world’s first and most-advanced AI platform.”

So it’s Watson and lots of other AI that may see the biggest benefits from this new tech. With smaller, faster, more efficient chips, AI can live a more robust life inside your phone or another device. But thinking phone is probably thinking way too big. Think of something much smaller but just as powerful.

Of course, every new technology comes with promises.

“Whether exploring new technical capabilities, collaborating on ethical practices or applying Watson technology to cancer research, financial decision-making, oil exploration or educational toys, IBM Research is shaping the future of AI.”

It’s all about AI and how we can augment “our lives in extraordinary ways.” Assuming that everyone plays nice, this is another example of technology poised for great things for humankind. Undoubtedly, micro-sized AI can be used for all sorts of nefarious purposes so let’s hope that the “ethical practices” part of their research is getting equal weight.

The question we have yet to ask is whether a faster, smaller, more powerful, all-knowing, steadily accelerating AI is something we truly need. This is a debate worth having. In the meantime, a 5 nm chip breakthrough is an excellent example of how a new, breakthrough technology awaits application by others for a myriad of purposes, advancing them all, in particular ways, by leaps and bounds. Who are these others? And what will they do next?

The right thing to do. Remember that idea?


Of autonomous machines.

 

Last week we talked about how converging technologies can sometimes yield unpredictable results. One of the most influential players in the development of new technology is DARPA and the defense industry. There is a lot of technological convergence going on in the world of defense. Let’s combine robotics, artificial intelligence, machine learning, bio-engineering, ubiquitous surveillance, social media, and predictive algorithms for starters. All of these technologies are advancing at an exponential pace. It’s difficult to take a snapshot of any one of them at a moment in time and predict where it might be tomorrow. When you start blending them, the possibilities become downright chaotic. With each step, it is prudent to ask if there is any meaningful review. What are the ramifications for error as well as success? What are the possibilities for misuse? Who is minding the store? We can hope that there are answers to these questions that go beyond platitudes like “Don’t stand in the way of progress,” “Time is of the essence,” or “We’ll cross that bridge when we come to it.”

No comment.

I bring this up after having seen some unclassified documents on Human Systems and Autonomous Defense Systems (aka autonomous weapons). (See a previous blog on this topic.) Links to these documents came from crowd-funded “investigative journalist” Nafeez Ahmed, publishing on a website called INSURGE intelligence.

One of the documents, entitled Human Systems Roadmap, is a slide presentation given at the National Defense Industrial Association (NDIA) conference last year. The list of agencies involved in that conference and in the rest of the documents cited reads like an alphabet soup of military and defense organizations, most of which we have never heard of. There are multiple components to the pitch, but one that stands out is “Autonomous Weapons Systems that can take action when needed.” Autonomous weapons are those capable of making the kill decision without human intervention. There is also, apparently, some focused inquiry into “Social Network Research on New Threats… Text Analytics for Context and Event Prediction…” and “full spectrum social media analysis.” We could get all up in arms about this last feature, but recent incidents in places such as Benghazi, Egypt, and Turkey had a social networking component that enabled extreme behavior to be quickly mobilized. In most cases, the result was a tragic loss of life. In addition to sharing photos of puppies, social media, it seems, is also good at organizing lynch mobs. We shouldn’t be surprised that governments would want to know how to predict such events in advance. The bigger question is how we should intercede and whether that decision should be made by a human being or a machine.

There are lots of other aspects and lots more documents cited in Ahmed’s lengthy, albeit activistic, report, but the idea here is that rapidly advancing technology is enabling considerations that were previously held to be science fiction or just impossible. Will we reach the point where these systems are fully operational before we reach the point where we know they are totally safe? It’s a problem when technology grows faster than policy, ethics, or meaningful review. And it seems to me that it is always a problem when the race to make something work is more important than understanding the ramifications if it does.

To be clear, I’m not one of those people who thinks that anything and everything the military can conceive of is automatically wrong. We will never know how many catastrophes our national defense services have averted by their vigilance and technological prowess. It should go without saying that the bad guys will get more sophisticated in their methods and tactics, and if we are unable to stay ahead of the game, then we will need to get used to the idea of catastrophe. When push comes to shove, I want the government to be there to protect me. That being said, I’m not convinced that the defense infrastructure (or any part of the tech sector, for that matter) is as diligent in anticipating the repercussions of its creations as it is in getting them functioning. Only individuals can insist on meaningful review.

Thoughts?

 


Artificial intelligence isn’t really intelligence—yet. I hate to say I told you so.

 

Last week, we discovered that there is a new side to AI. And I don’t mean to gloat, but I saw this potential pitfall as fairly obvious. It is interesting that the real world event that triggered all the talk occurred within days of episode 159 of The Lightstream Chronicles. In my story, Keiji-T, a synthetic police investigator virtually indistinguishable from a human, questions the conclusions of an Artificial Intelligence engine called HAPP-E. The High Accuracy Perpetrator Profiling Engine is designed to assimilate all of the minutiae surrounding a criminal act and spit out a description of the perpetrator. In today’s society, profiling is a human endeavor and is especially useful in identifying difficult-to-catch offenders. Though the procedure is relatively new in the 21st century and goes by many different names, the American Psychological Association says,

“…these tactics share a common goal: to help investigators examine evidence from crime scenes and victim and witness reports to develop an offender description. The description can include psychological variables such as personality traits, psychopathologies and behavior patterns, as well as demographic variables such as age, race or geographic location. Investigators might use profiling to narrow down a field of suspects or figure out how to interrogate a suspect already in custody.”

This type of data is perfect for feeding into an AI, which uses neural networks and predictive algorithms to draw conclusions and recommend decisions. Of course, AI can do it in seconds, whereas an FBI unit may take days, months, or even years. The way AI works, as I have reported many times before, is based on tremendous amounts of data. “With the advent of big data, the information going in only amplifies the veracity of the recommendations coming out.” In this way, machines can learn, which is the whole idea behind autonomous vehicles making split-second decisions about what to do next based on billions of possibilities and only one right answer.

In my sci-fi episode mentioned above, Detective Guren describes a perpetrator produced by the AI known as HAPP-E. Keiji-T, forever the devil’s advocate, counters with this comment: “Data is just data. Someone who knows how a probability engine works could have adopted the characteristics necessary to produce this deduction.” In other words, if you know what the engine is trying to do, theoretically you could ‘teach’ the AI using false data to produce a false deduction.
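To make that manipulation concrete, here is a minimal, hypothetical sketch in Python. It is not the fictional HAPP-E engine, and not any real profiling system; the toy “engine” simply counts how often a profile co-occurs with a piece of evidence and deduces the most frequent one, and the case records, evidence labels, and profiles are all invented for illustration.

```python
# A toy "probability engine": count how often each profile co-occurs with a
# piece of evidence, then deduce the most frequent profile for new evidence.
# Deliberately naive; a stand-in for illustration only.
from collections import Counter

def train(records):
    """Build evidence -> profile counts from (evidence, profile) pairs."""
    model = {}
    for evidence, profile in records:
        model.setdefault(evidence, Counter())[profile] += 1
    return model

def deduce(model, evidence):
    """Return the most frequently seen profile for the given evidence."""
    counts = model.get(evidence, Counter())
    return counts.most_common(1)[0][0] if counts else "unknown"

# Honest (invented) history: this evidence pattern usually points to profile A.
honest = [("late-night break-in", "profile A")] * 8 + \
         [("late-night break-in", "profile B")] * 2
print(deduce(train(honest), "late-night break-in"))    # -> profile A

# Someone who knows the engine merely counts can plant fabricated cases
# and steer the deduction toward a false conclusion.
poisoned = honest + [("late-night break-in", "profile B")] * 20
print(deduce(train(poisoned), "late-night break-in"))  # -> profile B
```

The point of the sketch is simply this: the more a system’s conclusions are driven purely by the statistics of its training data, the more those conclusions can be steered by whoever controls the data.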

Episode 159. It seems fairly obvious.

I published Episode 159 on March 18, 2016. Then an interesting thing happened in the tech world. A few days later, Microsoft launched an AI chatbot called Tay (a millennial nickname for Taylor) designed to have conversations with — millennials. The idea was that Tay would become as successful as its Chinese counterpart, XiaoIce, which has been around for four years and engages millions of young Chinese in chatbot discussions of millennial angst. Tay used three platforms: Twitter, Kik, and GroupMe.

Then something went wrong. In less than 24 hours, Tay went from tweeting that “humans are super cool” to full-blown Nazi. Soon after Tay launched, the super-sketchy enclaves of 4chan and 8chan decided to get malicious and manipulate the Tay engine by feeding it racist and sexist invective. If you feed an AI enough garbage, it will ‘learn’ that garbage is the norm and begin to repeat it. Before Tay’s first day was over, Microsoft took it down, removed the offensive tweets, and apologized.

Crazy talk.

Apparently, Microsoft had considered that such a thing was possible but decided not to use filters (conversations to avoid, or canned answers to volatile subjects). Experts in the chatbot field were quick to criticize: “‘You absolutely do NOT let an algorithm mindlessly devour a whole bunch of data that you haven’t vetted even a little bit.’ In other words, Microsoft should have known better than to let Tay loose on the raw uncensored torrent of what Twitter could direct her way.”
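For illustration only, here is a minimal, hypothetical sketch of the kind of vetting the critics describe: screen each incoming message before it is allowed to become training material. The blocklist and the looks_toxic check below are crude stand-ins I have invented for the example; a real system would lean on far more robust moderation, and none of this reflects Microsoft’s actual pipeline.

```python
# Vet incoming messages before they join the corpus a chatbot learns from.
# The toxicity check is a crude placeholder for real moderation tooling.
BLOCKLIST = {"badword1", "badword2"}  # stand-in terms, not a real list

def looks_toxic(message: str) -> bool:
    """Very rough screen: flag messages containing blocklisted terms."""
    return any(word in BLOCKLIST for word in message.lower().split())

def ingest(message: str, training_corpus: list) -> None:
    """Only vetted messages are allowed to influence future responses."""
    if not looks_toxic(message):
        training_corpus.append(message)

corpus = []
for msg in ["humans are super cool", "badword1 ruins everything"]:
    ingest(msg, corpus)

print(corpus)  # only the benign message makes it into the training data
```

Even a filter this crude changes the failure mode: garbage still arrives, but it no longer becomes the norm the system learns to repeat.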

The tech site Ars Technica also probed the question of “…why Tay turned nasty when XiaoIce didn’t?” The assessment thus far is that China’s highly restrictive measures keep social media “ideologically appropriate” and under control. The censors will close your account for bad behavior.

So, what did we learn from this? AI, at least as it exists today, has no understanding. It has no morals or ethical behavior unless you give it some. The next questions are: Who decides what is moral and ethical? Will it be the people (we saw what happened with that) or some other financial or political power? Maybe the problem is with the premise itself. What do you think?


Logical succession, the final installment.

For the past couple of weeks, I have been discussing the idea posited by Ray Kurzweil, that we will have linked our neocortex to the Cloud by 2030. That’s less than 15 years, so I have been asking how that could come to pass with so many technological obstacles in the way. When you make a prediction of that sort, I believe you need a bit more than faith in the exponential curve of “accelerating returns.”

This week I’m not going to take issue with the enormous leap forward in nanobot technology required to accomplish such a feat. Nor am I going to question the vastly complicated tasks of connecting to the neocortex, extracting anything coherent, assembling memories and consciousness, and, in turn, beaming it all to the Cloud. Instead, I’m going to pose the question: why would we want to do this in the first place?

According to Kurzweil, in a talk last year at Singularity University,

“We’re going to be funnier. We’re going to be sexier. We’re going to be better at expressing loving sentiment…” 1

Another brilliant futurist, and friend of Ray, Peter Diamandis includes these additional benefits:

• Brain to Brain Communication – aka Telepathy
• Instant Knowledge – download anything, complex math, how to fly a plane, or speak another language
• Access More Powerful Computing – through the Cloud
• Tap Into Any Virtual World – no visor, no controls. Your neocortex thinks you are there.
• And more, including an extended immune system, expandable and searchable memories, and “higher-order existence.”2

As Kurzweil explains,

“So as we evolve, we become closer to God. Evolution is a spiritual process. There is beauty and love and creativity and intelligence in the world — it all comes from the neocortex. So we’re going to expand the brain’s neocortex and become more godlike.”1

The future sounds quite remarkable. My issue lies with Koestler’s “ghost in the machine,” or what I call humankind’s uncanny ability to foul things up. Diamandis’ list could easily spin this way:

• Brain-to-Brain Hacking – reading others’ thoughts
• Instant Knowledge – to deceive, to steal, to subvert, or to hijack
• Access to More Powerful Computing – to gain the advantage in any of the above
• Tap Into Any Virtual World – experience the criminal, the evil, the debauched, and not go to jail for it

You get the idea. Diamandis concludes, “If this future becomes reality, connected humans are going to change everything. We need to discuss the implications in order to make the right decisions now so that we are prepared for the future.”

Nevertheless, we race forward. We discovered this week that “A British researcher has received permission to use a powerful new genome-editing technique on human embryos, even though researchers throughout the world are observing a voluntary moratorium on making changes to DNA that could be passed down to subsequent generations.”3 That would be CRISPR-Cas9.

It was way back in 1968 that Stewart Brand introduced The Whole Earth Catalog with, “We are as gods and might as well get good at it.”

Which lab is working on that?

 

1. http://www.huffingtonpost.com/entry/ray-kurzweil-nanobots-brain-godlike_us_560555a0e4b0af3706dbe1e2
2. http://singularityhub.com/2015/10/12/ray-kurzweils-wildest-prediction-nanobots-will-plug-our-brains-into-the-web-by-the-2030s/
3. http://www.nytimes.com/2016/02/02/health/crispr-gene-editing-human-embryos-kathy-niakan-britain.html?_r=0

Logical succession, Part 2.

Last week the topic was Ray Kurzweil’s prediction that by 2030, not only would we send nanobots into our bloodstream by way of the capillaries, but they would target the neocortex, set up shop, connect to our brains and beam our thoughts and other contents into the Cloud (somewhere). Kurzweil is no crackpot. He is a brilliant scientist, inventor and futurist with an 86 percent accuracy rate on his predictions. Nevertheless, and perhaps presumptuously, I took issue with his prediction, but only because there was an absence of a logical succession. According to Coates,

“…the single most important way in which one comes to an understanding of the future, whether that is working alone, in a team, or drawing on other people… is through plausible reasoning, that is, putting together what you know to create a path leading to one or several new states or conditions, at a distance in time” (Coates 2010, p. 1436).1

Kurzweil’s argument is based heavily on his Law of Accelerating Returns, which says (essentially), “We won’t experience 100 years of progress in the 21st century; it will be more like 20,000 years of progress (at today’s rate).” The rest, in the absence of more detail, must be based on faith. Faith, perhaps, in the fact that we are making considerable progress in architecting nanobots, or that we see promising breakthroughs in mind-to-computer communication. But what seems to be missing is the connection part. Not so much connecting to the brain, but beaming the contents somewhere. Another question, why, also comes to mind, but I’ll get to that later.

There is something about all of this technological optimism that furrows my brow. A recent article in WIRED helped me to articulate this skepticism. The rather lengthy article chronicled the story of neurologist Phil Kennedy, who, like Kurzweil, believes that the day is soon approaching when we will connect or transfer our brains to other things. I can’t help but call to mind what onetime Fed chairman Alan Greenspan called “irrational exuberance.” The WIRED article tells of how Kennedy nearly lost his mind by experimenting on himself (including rogue brain surgery in Belize) to implant a host of hardware that would transmit his thoughts. This highly invasive method, the article says, is going out of style, but the promise seems to be the same for both scientists: our brains will be infinitely more powerful than they are today.

Writing in WIRED, columnist Daniel Engber makes an astute statement. During an interview with Dr. Kennedy, they attempted to watch a DVD of Kennedy’s Belize brain surgery. The DVD player and laptop choked for some reason, and only after repeated attempts were they able to view Dr. Kennedy’s naked brain undergoing surgery. Reflecting on the mundane struggles with technology that preceded the movie, Engber notes, “It seems like technology always finds new and better ways to disappoint us, even as it grows more advanced every year.”

Dr. Kennedy’s saga was all about getting thoughts into text, or even synthetic speech. Today, the invasive method of sticking electrodes into your cerebral putty has been replaced by a kind of electrode mesh that lies on top of the cortex underneath the skull. They call this less invasive. Researchers have managed to get some results from this, albeit snippets with numerous inaccuracies. They say reliable results are still decades away, and one of them points out that even Siri still gets it wrong more than 30 years after the debut of speech recognition technology.

So it must be Kurzweil’s exponential law that still provides near-term hope for these scientists. As I often quote Tobias Revell, “Someone somewhere in a lab is playing with your future.”

There remain a few more nagging questions for me. What is so feeble about our brains that we need them to be infinitely more powerful? When is enough, enough? And, what could possibly go wrong with this scenario?

Next week.

 

1. Coates, J.F., 2010. The future of foresight—A US perspective. Technological Forecasting & Social Change 77, 1428–1437.