
Corporate Sci-Fi.

Note: Also published on LinkedIn

 

Why your company needs to play in the future.

As a professor of design and a design fiction researcher, I write academic papers and blog weekly about the future. I teach about the future of design, and I create future scenarios, sometimes with my students, that provoke us to look at what we are doing, what we are making, why we are making it, and the inevitable ramifications. Primarily I try to focus both designers and decision makers on the steps they can take today to keep from being blindsided tomorrow. Futurists seem to be all the rage these days, telling us to prepare for the Singularity, autonomous everything, or robots taking our jobs. Recently, Jennifer Doudna, co-inventor of the gene editing technique called CRISPR-Cas9, has been making the rounds and sounding the alarm that technology is moving so fast that we won't be able to keep a host of unforeseen (and foreseen) consequences inside Pandora's box. That concern, however, should extend beyond bioengineering into virtually any field where technology is racing forward, fueled by venture capital and the desperate need to stay on top of whatever space we are playing in. There is a lot at stake. Technology has already redefined privacy, behavioral wellness, personal autonomy, healthcare, labor, and maybe even our humanness, just to name a few.

Several recent articles have highlighted the changing world of design and how the pressure is on designers to make user adoption more like user addiction to ensure the success of a product or app. The world of behavioral economics is becoming a new arena in which we use algorithms to manipulate users. Some designers pass the buck for the questionable ethics of addictive products to the clients or corporations that employ them; others feel compelled to step aside and work on less lucrative projects or apply their skills to social causes. Most really care and want to help. But designers are uniquely positioned and trained to tackle these wicked problems—if we would only collaborate with them.

Beyond the companies that might be deliberately trying to manipulate us are those that unknowingly, or at least unintentionally, transform our behaviors in ways that are potentially harmful. Traditionally, we seek to hold someone responsible when a product or service is faulty: the physician for malpractice, the designer or manufacturer when a toy causes injury, a garment falls apart, or an appliance self-destructs. But as we move toward systemic designs that are less physical and more emotional, behavioral, or biological, design faults may not be so easy to identify, and their repercussions may become noticeable only after serious issues have arisen. In fact, many of the apps and operating systems we launch today ship with admitted errors and bugs; designers rely on real-life testing to identify problems and then issue patches, revisions, and versions.

In the realm of nanotechnology, while scientists and thought leaders have proposed guidelines and best practices, research and development teams in labs around the world race forward without regulation, creating molecule-sized structures, machines, and substances with no idea whether they are safe or what the long-term effects of exposure might be. In biotechnology, while folks like Jennifer Doudna appeal to researchers' morals and ethics to tread carefully in the realm of genetic engineering (especially when it comes to inheritable gene manipulation), those morals and ethics are not universally shared. Recent headlines attest to the fact that some scientists are bent on moving forward regardless of the implications.

Some technologies, such as our smartphones, have become equally invasive, yet they are now considered mundane. In just ten years since the introduction of the iPhone, we have transformed behaviors, upended our modes of communication, redefined privacy, distracted our attention, distorted reality, and manipulated a predicted 2.3 billion users as of 2017. [1] It is worth contemplating that this disruption comes not from a faulty product, but from one that can only be considered wildly successful.

There is a plethora of additional technologies poised to redefine our world yet again, including artificial intelligence, ubiquitous surveillance, human augmentation, robotics, virtual, augmented, and mixed reality, and the pervasive Internet of Things. Many of these technologies make their way into our experiences through the promise of better living, medical breakthroughs, or a safer and more secure life. But too often we ignore the potential downsides, the unintended consequences, or the systemic ripple effects that these technologies spawn. Why?

In many cases, we do not want to stand in the way of progress. In others, we believe that the benefits outweigh the disadvantages, yet this is the same thinking that has spawned some of our most complex and daunting systems, from nuclear weapons to air travel and the internal combustion engine. Each of these began with the best of intentions and, in many ways, was as successful and initially beneficial as it could be. At the same time, each advanced and proliferated far more rapidly than we were prepared to accommodate. Dirty bombs are a reality we did not expect. The alluring efficiency with which we can fly from one city to another has nevertheless spawned a gnarly network of air traffic, baggage logistics, and anti-terrorism measures that are arguably more elaborate than getting an aircraft off the ground. Traffic, freeways, infrastructure, safety, and the drain on natural resources are complexities never imagined at the dawn of personal transportation. We didn't see the entailments of success.

That is not to say no one saw it coming. There have often been scientists and thought leaders waving the yellow flag of caution. I have written about how, "back in 1975, scientists and researchers got together at Asilomar because they saw the handwriting on the wall. They drew up a set of resolutions to make sure that one day the promise of Bioengineering (still a glimmer in their eyes) would not get out of hand."[2] Indeed, researchers like Jennifer Doudna continue to carry the banner. A similar conference took place some years ago to alert us to the potential dangers of nanotechnology, and earlier this year another convened to put forth recommendations and guidelines to ensure that when machines are smarter than we are, they carry on in a beneficent role. Too often, however, it is only the scientists and visionaries who attend these conferences. [3] Noticeably absent, though not always, is corporate leadership.

Nevertheless, in this country there remains no safeguarding regulation for nanotech, bioengineering, or AI research. It is a free-for-all, and any of these could massively disrupt not only our lifestyles but also our culture, our behavior, and our humanness. Who is responsible?

For nearly 40 years there has been an environmental movement that has spread globally. Good stewardship is a good idea. But it wasn't until most corporations saw a way for it to make economic sense that they began to focus on it and then promote it as their contribution to society, their responsibility, and their civic duty. As well intentioned as some companies may be (and many are), far more are not paying attention to the effect of their technological achievements on our human condition.

We design most technologies with a combination of perceived user need and commercial potential. In many cases, these are coupled with more altruistic motivations such as a “do no harm” commitment to the environment and fair labor practices. As we move toward the capability to change ourselves in fundamental ways, are we also giving significant thought to the behaviors that we will engender by such innovations, or the resulting implications for society, culture, and the interconnectedness of everything?

Enter Humane Technology

Ultimately we will have to demand this level of thought, beginning with ourselves. But we should not fight this alone. Corporations concerned with appearing sensitive and proactive toward the environment and social justice need to add a new pillar to their edifice as responsible global citizens: humane technology.

Humane technology considers the socio-behavioral ramifications of products and services: digital dependencies and addictions, job loss, genetic repercussions, and the human impact of nanotechnologies, AI, and the Internet of Things.

To whom do we turn when a 14-year-old becomes addicted to her smartphone or obsessed with her social media popularity? We could condemn the parents for lack of supervision, but many of them are equally distracted. Who is responsible when a drone is misused to vandalize property or fire a gun, or for the anticipated 1 billion drones flying around by 2030? [4] Who will answer for the repercussions of artificial intelligence that spouts hate speech? Where will the buck stop when genetic profiling becomes a requirement for getting insured or getting a job?

While the backlash against these types of unintended consequences or unforeseen circumstances is not yet widespread, and citizens have not taken to the streets in mass protest, behavioral and social changes like these may be imminent as a result of the dozens of transformational technologies currently under development in labs and R&D departments across the globe. Who is looking at the unforeseen or the unintended? Who is paying attention, and who is turning a blind eye?

It would have been possible to anticipate texting and driving. It is possible to anticipate a host of horrific side effects from nanotechnology on both humans and the environment. It is possible to expect the ever-present bad actor to latch onto any number of new technologies. It is possible to identify when the race to master artificial intelligence may be coming at the expense of making it safe, or of knowing where to draw the line. In fact, it is a marketing opportunity for corporate interests to take the lead and leverage their efforts to preempt adverse side effects into a distinctive advantage.

Emphasizing humane technology is an automatic benefit for an ethical company, and for those more concerned with profit than ethics (just between you and me), it offers the opportunity for a better brand image and (at least) the appearance of social concern. Whatever the motivation, we are looking at a future where we are either prepared for what happens next, or we are caught napping.

This responsibility should start with anticipatory methodologies that examine the social, cultural, and behavioral ramifications, and the unintended consequences, of what we create. Designers and those trained in design research are excellent collaborators. My brand of design fiction is intended to take us into the future in an immersive and visceral way, to provoke the necessary discussion and debate that anticipate the storm, should there be one; promising utopia is rarely the tinder to fuel a provocation. Design fiction embraces the art of critical thinking and thought problems as a means of anticipating conflict and complexity before they become problems to be solved.

Ultimately we have to depart from the idea that technology will be the magic pill that solves the ills of humanity. Design fiction and other anticipatory methodologies can help us acknowledge our humanness and our propensity to foul things up. If we do not self-regulate, regulation will inevitably follow, probably spurred on by some unspeakable tragedy. There is an opportunity now for corporations to step up to the future with a responsible, thoughtful compassion for our humanity.

 

 

1. https://www.statista.com/statistics/330695/number-of-smartphone-users-worldwide/

2. http://theenvisionist.com/2017/08/04/now-2/

3. http://theenvisionist.com/2017/03/24/genius-panel-concerned/

4. http://www.abc.net.au/news/2017-08-31/world-of-drones-congress-brisbane-futurist-thomas-frey/8859008


What now?

 

If you follow this blog, you know that I like to say that the rationale behind design fiction—provocations that get us to think about the future—is to ask, “What if?” now so that we don’t have to ask “What now?”, then. This is especially important as our technologies begin to meddle with the primal forces of nature, where we naively anoint ourselves as gods and blithely march forward—because we can.

The CRISPR-Cas9 technology caught my eye almost exactly two years ago today, through a WIRED article by Amy Maxmen. I wrote about it then as an awesomely powerful tool for astounding progress for the good of humanity that could, at the same time, take us down a slippery slope. As Maxmen stated,

“It could, at last, allow genetics researchers to conjure everything anyone has ever worried they would—designer babies, invasive mutants, species-specific bioweapons, and a dozen other apocalyptic sci-fi tropes.”

The article chronicles how, back in 1975, scientists and researchers got together at Asilomar because they saw the handwriting on the wall. They drew up a set of resolutions to make sure that one day the promise of Bioengineering (still a glimmer in their eyes) would not get out of hand.

Four decades later, what was only a glimmer had become a reality. So, in 2015, some of these researchers came together again to discuss the implications of a new technique called CRISPR-Cas9. It was just a few years after Jennifer Doudna and Emmanuelle Charpentier figured out this elegant tool for genome editing. Again from Maxmen,

“On June 28, 2012, Doudna’s team published its results in Science. In the paper and an earlier corresponding patent application, they suggest their technology could be a tool for genome engineering. It was elegant and cheap. A grad student could do it.”

In 2015 it was Doudna herself who called the meeting, this time in Napa, to discuss the ethical ramifications of Crispr. Their biggest concern was what they call germline modifications—the stuff that gets passed on from generation to generation, substantially changing the human species forever. In September of 2015, Doudna gave a TED Talk asking the scientific community to pause and discuss the ethics of this new tool before rushing in. On the heels of that, the US National Academy of Sciences said it would work on a set of "recommendations" for researchers and scientists to follow. No laws, just recommendations.

Fast forward to July 26, 2017. MIT Technology Review reported:

“The first known attempt at creating genetically modified human embryos in the United States has been carried out by a team of researchers in Portland, Oregon… Although none of the embryos were allowed to develop for more than a few days—and there was never any intention of implanting them into a womb—the experiments are a milestone on what may prove to be an inevitable journey toward the birth of the first genetically modified humans.”

MIT’s article was thin on details because the actual paper that delineated the experiment was not yet published. Then, this week, it was. This time it was, indeed, a germline objective.

“…because any genetically modified child would then pass the changes on to subsequent generations via their own germ cells—the egg and sperm.”(ibid).

All this was led by fringe researcher Shoukhrat Mitalipov of Oregon Health and Science University, and WIRED was quick to provide more info, but in two different articles.

The first of these stories appeared last Friday and gave more specifics on Mitalipov than on the actual experiment:

“the same guy who first cloned embryonic stem cells in humans. And came up with three-parent in-vitro fertilization. And moved his research on replacing defective mitochondria in human eggs to China when the NIH declined to fund his work. Throughout his career, Mitalipov has gleefully played the role of mad scientist, courting controversy all along the way (sic).”

In the second article, we discover what the mad scientist was trying to do. In essence, Mitalipov demonstrated a highly efficient replacement of mutated genes like MYBPC3, which is responsible for a heart condition called "hypertrophic cardiomyopathy that affects one in 500 people—the most common cause of sudden death among young athletes." Highly efficient means that in 42 out of 58 attempts (roughly 72 percent), the problem gene was removed and replaced with a normal one. Mitalipov believes that he can get this to 100%. This means that fixing genetic mutations can be done successfully and may even become routine in the near future. But WIRED points out that this

“would require lengthy clinical trials—something a rider in the current Congressional Appropriations Act has explicitly forbidden the Food and Drug Administration from even considering.”

Ah, but this is not a problem for our fringe mad scientist.

“Mitalipov said he’d have no problem going elsewhere to run the tests, as he did previously with his three-person IVF work.”

Do we see a pattern here? One surprising thing that the study revealed was that,

“Of the 42 successfully corrected embryos, only one of them used the supplied template to make a normal strand of DNA. When Crispr cut out the paternal copy—the mutant one—it left behind a gap, ready to be rebuilt by the cell’s repair machinery. But instead of grabbing the normal template DNA that had been injected with the sperm and Crispr protein, 41 embryos borrowed the normal maternal copy of MYBPC3 to rebuild its gene.”

In other words, the cell said, thanks for your stinking code but we’ll handle this. It appears as though cellular repair may have a mission plan of its own. That’s the mysterious part that reminds us that there is still something miraculous going on here behind the scenes. Mitalipov thinks he and his team can force these arrogant cells to follow instructions.

So what now? With this we have more evidence that guidelines and recommendations, clear heads and cautionary voices are not enough to stop scientists and researchers on the fringe, governments with dubious ethics, or whoever else might want to give things a whirl.

That makes noble efforts like Asilomar in 1975, a similar conference some years ago on nanotechnology, and one earlier this year on artificial intelligence simply that: noble efforts. Why do these conferences occur in the first place? Because scientists are genuinely worried that we're going to make ourselves extinct if we aren't careful. But technology is racing down the autobahn, folks, and we can't expect the people who stand to become billionaires from their discoveries to be the same people policing their actions.

And this is only one of the many transformative technologies looming on the horizon. While everyone is squawking about the Paris Accords, why don't we marshal some of our righteous indignation and pull the world together to agree on some meaningful oversight of these technologies?

We've gone from "What if?" to "What now?" Are we going to avoid "Oh, shit!"?

1. https://www.wired.com/2015/07/crispr-dna-editing-2/?mbid=nl_72815

2. http://wp.me/p7yvqL-mt

3. https://www.technologyreview.com/s/608350/first-human-embryos-edited-in-us/?set=608342

4. https://www.wired.com/story/scientists-crispr-the-first-human-embryos-in-the-us-maybe/?mbid=social_twitter_onsiteshare

5. https://www.wired.com/story/first-us-crispr-edited-embryos-suggest-superbabies-wont-come-easy/?mbid=nl_8217_p9&CNDID=49614846


On utopia and dystopia. Part 1.

A couple of interesting articles cropped up in the past week or so coming out of the WIRED Business Conference. The first was an interview with Jennifer Doudna, a pioneer of Crispr/Cas9, the gene editing technique that makes editing DNA nearly as simple as splicing a movie together. That is, if you're a geneticist. According to the interview, most of this technology is in use in crop design, for things like longer-lasting potatoes or wheat that doesn't mildew. But Doudna knows that this is a potential Pandora's box.

“In 2015, Doudna was part of a broad coalition of leading biologists who agreed to a worldwide moratorium on gene editing to the “germ line,” which is to say, edits that get passed along to subsequent generations. But it’s legally non-binding, and scientists in China have already begun experiments that involve editing the genome of human embryos.”

Crispr May Cure All Genetic Disease—One Day

Super-babies are just one of the potential ways to misuse Crispr. I blogged a longer and more diabolical list a couple of years ago.

Meddling with the primal forces of nature.

In her recent interview, though, Doudna focused on the more positive effects on farming, things like rice and tomatoes.

You may not immediately see the connection, but there was a related story from the same conference, where WIRED interviewed Jonathan Nolan and Lisa Joy, co-creators of the HBO series Westworld. If you haven't seen Westworld, I recommend it, if only for Anthony Hopkins' performance. As far as I'm concerned, Anthony Hopkins could read the phone book and I would be spellbound.

At any rate, to quote the article:

“The first season of Westworld wasted no time in going from “hey cool, robots!” to “well, that was bleak.” Death, destruction, android torture—it’s all been there from the pilot onward.”

Which pretty much sums it up. According to Nolan,
“We’re inventing cautionary tales for ourselves…”

“And Joy sees Westworld, and sci-fi in general, as an opportunity to talk about what humanity could or should do if things start to go wrong, especially now that advancements in artificial intelligence technologies are making things like androids seem far more plausible than before. “We’re leaping into the age of the unfathomable, the time when machines [can do things we can’t],”

Joy said.

Westworld’s Creators Know Why Sci-Fi Is So Dystopian

To me, this sounds familiar. It is the essence of my particular brand of design fiction. I don't always set out to make it dystopian, but if we look at the way things seem to naturally evolve, virtually every technology once envisioned as a benefit to humankind ends up with someone misusing it. To look at any potentially transformative tech and not ask, "Transform into what?" is simply irresponsible. We love to sell our ideas on their promise of curing disease, saving lives, and ending suffering, but the technologies we are designing today have epic downsides that many technologists do not even understand. Misuse happens so often that I've begun to see us as reckless if we don't anticipate these repercussions in advance. It's the subject of a new paper that I'm working on.

In the meantime, it’s important that we pay attention and demand that others do, too.

There’s more from the science fiction world on utopias vs. dystopias, and I’ll cover that next week.

 

 
