All posts by lghtstrm

After many years as an award-winning, globally experienced creative director and designer, with mastery of integrated visual design and branding and expert-level command of 2D and 3D visualization, I have completed an advanced degree in Design Development. I am currently Assistant Professor of Design Foundations at The Ohio State University. My research focuses on the provocations of design fiction to better equip us to understand the ramifications of design and its synergistic influence on culture.

How should we talk about the future?

 

Imagine that there are two camps. One camp holds high confidence that the future will be manifestly bright and promising in all aspects of human endeavor. Our health will dramatically improve as we eradicate disease and possibly even death. Artificial Intelligence will be at our beck and call to make our tough decisions, order our lives, fight our wars, watch over us, and keep us safe. Hence, it is full speed ahead. The positives outweigh the negatives. Any missteps will be but a minor hiccup, and we’ll cross those bridges when we come to them.

The second camp believes that many of these promises are achievable. But they also believe that we are beginning to see strong evidence that technology is indeed moving exponentially, and that we are at a point on the curve where what many experts once categorized as impossible or a “long way off” is now knocking at our door.

Kurzweil’s Law of Accelerating Returns is proving remarkably accurate. Sure, we adapted from the horse and buggy to the automobile, and from there to air travel, to an irritatingly resilient nuclear threat, to computers, to smartphones, and to DNA sequencing. But each of these changes arrived more rapidly than its predecessor.

“‘As exponential growth continues to accelerate into the first half of the twenty-first century,’ [Kurzweil] writes. ‘It will appear to explode into infinity, at least from the limited and linear perspective of contemporary humans.’”1

The second camp sees this rapid-fire proliferation as alarming. Not because we will get to utopia faster, but because we will be standing in the midst of a host of disruptive technologies all coming to fruition at the same time without the benefit of meaningful oversight or the engagement of our societies.

I am in the second camp.

Last week, I talked about genetic engineering. The designer-baby question was always pushed aside as a long way off. Not anymore. And that’s just one change. Our privacy is being severely compromised through “big data” harvested from seemingly innocent pastimes such as Facebook. According to security technologist Bruce Schneier,

“Facebook can predict race, personality, sexual orientation, political ideology, relationship status, and drug use on the basis of Like clicks alone. The company knows you’re engaged before you announce it, and gay before you come out—and its postings may reveal that to other people without your knowledge or permission. Depending on the country you live in, that could merely be a major personal embarrassment—or it could get you killed.”2

Facebook is just one of the seemingly benign things we use every day. By now, most of us consider spending 75 percent of our day on our smartphones harmless, too, though we would also have to agree that it has changed us personally, behaviorally, and societally. And while the societal outcry against designer babies has been noticeable since last week’s stories about CRISPR-Cas9 gene editing in human embryos, how long will it be before we accept it as the norm and feel pressure in our own families to participate, whether to stay competitive or maybe even just to be insured?

The fact is that we like to think that we can adapt to anything. To some extent, we pride ourselves on this resilience. Unfortunately, that seems to suggest that we are also powerless to affect these technologies and that we have no say in when, if, or whether we should make them in the first place. Should we be proud of the fact that we are adapting to a complete lack of privacy, to the likelihood of terrorism or being replaced by an AI? These are my questions.

So I am encouraged when others also raise these questions. Recently, the tech media, which seems perpetually enamored of folks like Mark Zuckerberg and Elon Musk, called Zuckerberg a “bad futurist” because of his overly optimistic view of the future.

The article came from the Huffington Post’s Rebecca Searles. According to Searles,

“Elon Musk’s doomsday AI predictions aren’t ‘irresponsible,’ but Mark Zuckerberg’s techno-optimism is.”3

Zuckerberg, in a podcast, said:

“…people who are arguing for slowing down the process of building AI, I just find that really questionable… If you’re arguing against AI, then you’re arguing against safer cars that aren’t going to have accidents and you’re arguing against being able to better diagnose people when they’re sick.”3

Technology hawks are always promising “safer” and “healthier” as their rationale for unimpeded acceleration. I’m sure that’s the rah-rah rationale for designer babies, too: think of all the illnesses we will be able to breed out of the human race. Searles and I agree that negative outcomes deserve equally serious consideration, before they happen rather than after. As she aptly puts it,

“Tackling tech challenges with a build-it-and-see-what-happens approach (a la Zuckerberg’s former “move fast and break things” development mantra) just isn’t suitable for AI.”

The problem is that Zuckerberg is not alone, nor is last week’s Shoukhrat Mitalipov. Ultimately, this reality of two camps is the rationale behind my approach to design fiction. As you know, the objective of design fiction is to provoke. Promising utopia is rarely the tinder to fuel a provocation.

Let’s remember Charles Dickens’ story of Ebenezer Scrooge. The ghost of Christmas past takes him back in time where, for the first time, he sees the truth about his past. But this revelation does not change him. Then the ghost of Christmas present opens his eyes to everything around him that he is blind to in the present. Still, Scrooge is unaffected. And finally, the ghost of Christmas future takes him into the future, and it is here that Scrooge sees the days to come as “the way it will be” unless he changes something now.

Somehow, I think the outcome would have been different if that last ghost had said, “Don’t worry. You’ll adapt.”

Let’s not talk about the future in purely utopian terms, nor in total doom-and-gloom. The future will be no more purely one or the other than the present day is. But let us not be blind to our infinite capacity to foul things up, to the potential of bad actors, or to the inevitability of unanticipated consequences. If we have any hope of meeting our future with the altruistic image of a utopian society, let us go forward with eyes open.

 

1. http://www.businessinsider.com/ray-kurzweil-law-of-accelerating-returns-2015-5

2. Bruce Schneier, “Data and Goliath: The Hidden Battles to Collect Your Data and Control Your World”

3. http://www.huffingtonpost.com/entry/mark-zuckerberg-is-a-bad-futurist_us_5979295ae4b09982b73761f0


What now?

 

If you follow this blog, you know that I like to say that the rationale behind design fiction—provocations that get us to think about the future—is to ask, “What if?” now so that we don’t have to ask “What now?”, then. This is especially important as our technologies begin to meddle with the primal forces of nature, where we naively anoint ourselves as gods and blithely march forward—because we can.

The CRISPR-Cas9 technology caught my eye almost exactly two years ago, through a WIRED article by Amy Maxmen. I wrote about it then as an awesomely powerful tool for astounding progress for the good of humanity that could, at the same time, take us down a slippery slope. As Maxmen stated,

“It could, at last, allow genetics researchers to conjure everything anyone has ever worried they would—designer babies, invasive mutants, species-specific bioweapons, and a dozen other apocalyptic sci-fi tropes.”

The article chronicles how, back in 1975, scientists and researchers got together at Asilomar because they saw the handwriting on the wall. They drew up a set of resolutions to make sure that the promise of bioengineering (still just a glimmer in their eyes) would not one day get out of hand.

Forty years later, what was only a glimmer had become a reality. So, in 2015, some of these researchers came together again to discuss the implications of a new technique called CRISPR-Cas9. It was just a few years after Jennifer Doudna and Emmanuelle Charpentier figured out this elegant tool for genome editing. Again from Maxmen,

“On June 28, 2012, Doudna’s team published its results in Science. In the paper and an earlier corresponding patent application, they suggest their technology could be a tool for genome engineering. It was elegant and cheap. A grad student could do it.”

In 2015 it was Doudna herself who called the meeting, this time in Napa, to discuss the ethical ramifications of CRISPR. The group’s biggest concern was what they call germline modifications—the stuff that gets passed on from generation to generation, substantially changing the human forever. In September of 2015, Doudna gave a TED Talk asking the scientific community to pause and discuss the ethics of this new tool before rushing in. On the heels of that, the US National Academy of Sciences said it would work on a set of “recommendations” for researchers and scientists to follow. No laws, just recommendations.

Fast forward to July 26, 2017. MIT Technology Review reported:

“The first known attempt at creating genetically modified human embryos in the United States has been carried out by a team of researchers in Portland, Oregon… Although none of the embryos were allowed to develop for more than a few days—and there was never any intention of implanting them into a womb—the experiments are a milestone on what may prove to be an inevitable journey toward the birth of the first genetically modified humans.”

MIT’s article was thin on details because the actual paper that delineated the experiment was not yet published. Then, this week, it was. This time it was, indeed, a germline objective.

“…because any genetically modified child would then pass the changes on to subsequent generations via their own germ cells—the egg and sperm.”(ibid).

All this was led by fringe researcher Shoukhrat Mitalipov of Oregon Health and Science University, and WIRED was quick to provide more info, but in two different articles.

The first of these stories appeared last Friday and gave more specifics on Mitalipov than on the actual experiment.

“the same guy who first cloned embryonic stem cells in humans. And came up with three-parent in-vitro fertilization. And moved his research on replacing defective mitochondria in human eggs to China when the NIH declined to fund his work. Throughout his career, Mitalipov has gleefully played the role of mad scientist, courting controversy all along the way (sic).”

In the second article, we discover what the mad scientist was trying to do. In essence, Mitalipov demonstrated a highly efficient replacement of mutated genes like MYBPC3, which is responsible for a heart condition called “hypertrophic cardiomyopathy that affects one in 500 people—the most common cause of sudden death among young athletes.” Highly efficient means that in 42 out of 58 attempts (roughly 72 percent), the problem gene was removed and replaced with a normal one. Mitalipov believes that he can get this to 100%. This means that fixing genetic mutations can be done successfully and may even become routine in the near future. But WIRED points out that

“would require lengthy clinical trials—something a rider in the current Congressional Appropriations Act has explicitly forbidden the Food and Drug Administration from even considering.”

Ah, but this is not a problem for our fringe mad scientist.

“Mitalipov said he’d have no problem going elsewhere to run the tests, as he did previously with his three-person IVF work.”

Do we see a pattern here? One surprising thing that the study revealed was that,

“Of the 42 successfully corrected embryos, only one of them used the supplied template to make a normal strand of DNA. When Crispr cut out the paternal copy—the mutant one—it left behind a gap, ready to be rebuilt by the cell’s repair machinery. But instead of grabbing the normal template DNA that had been injected with the sperm and Crispr protein, 41 embryos borrowed the normal maternal copy of MYBPC3 to rebuild its gene.”

In other words, the cell said, thanks for your stinking code, but we’ll handle this. It appears as though cellular repair may have a mission plan of its own. That’s the mysterious part that reminds us there is still something miraculous going on behind the scenes. Mitalipov thinks he and his team can force these arrogant cells to follow instructions.

So what now? With this, we have more evidence that guidelines and recommendations, clear heads, and cautionary voices are not enough to stop scientists and researchers on the fringe, governments with dubious ethics, or whoever else might want to give things a whirl.

That makes noble efforts like Asilomar in 1975, a similar conference some years ago on nanotechnology, and one earlier this year on artificial intelligence simply that: noble efforts. Why do these conferences occur in the first place? Because scientists are genuinely worried that we’re going to extinct ourselves if we aren’t careful. But technology is racing down the autobahn, folks, and we can’t expect the people who stand to become billionaires from their discoveries to be the same people policing their actions.

And this is only one of the many transformative technologies looming on the horizon. While everyone is squawking about the Paris Accords, why don’t we marshal some of our righteous indignation and pull the world together to agree on some meaningful oversight of these technologies?

We’ve gone from “What if?” to “What now?” Are we going to avoid “Oh, shit!”?

1. https://www.wired.com/2015/07/crispr-dna-editing-2/?mbid=nl_72815

2. http://wp.me/p7yvqL-mt

3. https://www.technologyreview.com/s/608350/first-human-embryos-edited-in-us/?set=608342

4. https://www.wired.com/story/scientists-crispr-the-first-human-embryos-in-the-us-maybe/?mbid=social_twitter_onsiteshare

5. https://www.wired.com/story/first-us-crispr-edited-embryos-suggest-superbabies-wont-come-easy/?mbid=nl_8217_p9&CNDID=49614846


What did one AI say to the other AI?

I’ve asked this question before, but this time there’s an entirely new answer.

We may never know.

Based on a plethora of recent media on artificial intelligence (AI), not only are there a lot of people working on it, but many of those on the leading edge are also concerned with what they don’t understand in the midst of their ominous new creation.

Amazon, DeepMind/Google, Facebook, IBM, and Microsoft teamed up to form the Partnership on AI.(1) Their charter talks about sharing information and being responsible. It includes all of the proper buzzwords for a technosocial contract:

“…transparency, security and privacy, values and ethics, collaboration between people and AI systems, interoperability of systems, and of the trustworthiness, reliability, containment, safety, and robustness of the technology.”(2)

They are not alone in this concern, as the EU(3) is also working on AI guidelines and a set of rules on robotics.

Some of what makes them all a bit nervous is the way AI learns: the complexity of neural networks and the inability to go back and see how an AI arrived at its conclusion. In other words, how do we know that its recommendation is the right one? Adding to that list is the discovery that AIs working together can create their own languages, languages we don’t speak or understand. In one case, at Facebook, researchers saw this happening and stopped it.

For me, it’s a little disconcerting that Facebook, a social media company, is one of the corporations leading the charge in AI research. That’s a broad topic for another blog, but their underlying objective is to market to you. That’s how they make their money.

To be fair, that is at least part of the motivation for Amazon, DeepMind/Google, IBM, and Microsoft as well. The better they know you, the more stuff they can sell you. Of course, there are also enormous benefits to medical research. Such advantages are almost always what these companies talk about first: AI will save your life, cure cancer, and prevent crime.

So it is somewhat encouraging to see that these companies at the forefront of AI breakthroughs are also acutely aware of how AI could go terribly wrong. Hence we see wording from the Partnership on AI, like

“…Seek out, support, celebrate, and highlight aspirational efforts in AI for socially benevolent applications.”

The key word here is benevolent. But the clear objective is to keep the dialog positive, and

“Create and support opportunities for AI researchers and key stakeholders, including people in technology, law, policy, government, civil liberties, and the greater public, to communicate directly and openly with each other about relevant issues to AI and its influences on people and society.”(2)

I’m reading between the lines, but it seems like the issue of how AI will influence people and society is more of an obligatory statement intended to demonstrate compassionate concern. It’s coming from the people who see huge commercial benefit from the public being at ease with the coming onslaught of AI intrusion.

In their long list of goals, the “influences” on society don’t seem to be a priority. For example, should they discover that a particular AI has a detrimental effect on people, or that it makes their civil liberties less secure, would they stop? Probably not.

At the rate these companies are racing toward AI superiority, the unintended consequences for our society are not a high priority. While these groups are making sure that AI does not decide to kill us, I wonder if they are also looking at how AI will change us, and whether those changes are a good thing.

(1) https://www.fastcodesign.com/90132632/ai-is-inventing-its-own-perfect-languages-should-we-let-it

(2) https://www.partnershiponai.org/#

(3) http://www.europarl.europa.eu/sides/getDoc.do?pubRef=-//EP//NONSGML%2BCOMPARL%2BPE-582.443%2B01%2BDOC%2BPDF%2BV0//EN

Other pertinent links:

https://www.fastcompany.com/3064368/we-dont-always-know-what-ai-is-thinking-and-that-can-be-scary

https://www.fastcodesign.com/90133138/googles-next-design-project-artificial-intelligence


Watching and listening.

 

Pay no attention to Alexa, she’s an AI.

There was a flurry of reports from dozens of news sources (including CNN) last week that an Amazon Echo (Alexa) called the police during a New Mexico incident of domestic violence. The alleged call triggered a SWAT standoff, and the victim’s boyfriend was eventually arrested. Interesting story, except that, after a fact-check, it could not be what happened. Several sources, including the New York Times and WIRED, debunked the story with details on how Alexa calling 911 is technologically impossible, at least for now. And although the Bernalillo County, New Mexico, Sheriff’s Department swears to it, according to WIRED,

“Someone called the police that day. It just wasn’t Alexa.”

Even Amazon agrees. From a spokesperson’s email:

“The receiving end would also need to have an Echo device or the Alexa app connected to Wi-Fi or mobile data, and they would need to have Alexa calling/messaging set up,”1

So it didn’t happen. But most agree that while it may be technologically impossible today, it probably won’t be for very long. The provocative side of the WIRED article proposed this thought:

“The Bernalillo County incident almost certainly had nothing to do with Alexa. But it presents an opportunity to think about issues and abilities that will become real sooner than you might think.”

On the upside, some see benefits in Alexa’s ability to intervene in a domestic dispute that could turn lethal, but they fear something called “false positives.” Could an offhanded comment prompt Alexa to call the police? And if it did, would you feel as though Alexa had overstepped her bounds?

Others see the potential in suicide prevention. Alexa could calm you down or make suggestions for ways to move beyond the urge to die.

But as we contemplate opening this door, we need to acknowledge that we’re letting these devices listen to us 24/7 and giving them permission to make decisions on our behalf whether we want them to or not. The WIRED article also included a comment from Evan Selinger of RIT (whom I’ve quoted before).

“Cyberservants will exhibit mission creep over time. They’ll take on more and more functions. And they’ll habituate us to become increasingly comfortable with always-on environments listening to our intimate spaces.”

These technologies start out as warm and fuzzy, but as they become part of our lives, they can change us, and not always for the good. This is something I contemplated a couple of years ago with my Ubiquitous Surveillance future. In that case, the invasion came not as a listening device but as a camera (already part of Amazon’s Echo Look). You can check that out and do your own provocation by visiting the link.

I’m glad that there are people like Susan Liautaud (whom I wrote about last week) and Evan Selinger who are thinking about the effects of technology on society, but I fear most of us still take the stance of Dan Reidenberg, who is also quoted in the WIRED piece:

“I don’t think we can avoid this. This is where it is going to go. It is really about us adapting to that,” he says.

 

Nonsense! That’s like getting in the car with a drunk driver and then doing your best to adapt. Nobody is putting a gun to your head to get into the car. There are decisions to be made here, and they don’t have to be made after the technology has created seemingly insurmountable problems or intrusions in our lives. The companies that make them should be having these discussions now, and we should be invited to share our opinions.

What do you think?

 

1. http://wccftech.com/alexa-echo-calling-911/

Ethical tech.

Though I tinge most of my blogs with ethical questions, the last time I brought up this topic specifically on this blog was back in 2015. I guess I am ready to give it another go. Ethics is a tough topic. Treated purely superficially, ethics would seem natural, like common sense, or the right thing to do. But if that’s the case, why do so many people do the wrong thing? Things get even more complicated when we move into institutionally complex issues like banking, governing, technology, genetics, health care, or national defense, just to name a few.

The last time I wrote about this, I highlighted Michael Sandel, Professor of Government at Harvard, where he teaches a wildly popular course called “Justice.” I was glad to see then that the big questions were still being addressed in places like Harvard. Some of his questions, which came from a FastCo article, were:

“Is it right to take from the rich and give to the poor? Is it right to legislate personal safety? Can torture ever be justified? Should we try to live forever? Buy our way to the head of the line? Create perfect children?”

These are undoubtedly important and prescient questions to ask, especially as we confront technologies that make things formerly inconceivable or plainly impossible not only possible but likely.

So I was pleased to see, last month, an op-ed piece in WIRED by Susan Liautaud, founder of The Ethics Incubator. Susan is about as closely aligned with my tech concerns as anyone I have read. And she brings solid thinking to the issues.

“Technology is approaching the man-machine and man-animal boundaries. And with this, society may be leaping into humanity defining innovation without the equivalent of a constitutional convention to decide who should have the authority to decide whether, when, and how these innovations are released into society. What are the ethical ramifications? What checks and balances might be important?”

Her comments are right in line with my research and co-research into Humane Technologies. Liautaud continues:

“Increasingly, the people and companies with the technological or scientific ability to create new products or innovations are de facto making policy decisions that affect human safety and society. But these decisions are often based on the creator’s intent for the product, and they don’t always take into account its potential risks and unforeseen uses. What if gene-editing is diverted for terrorist ends? What if human-pig chimeras mate? What if citizens prefer to see birds rather than flying cars when they look out a window? (Apparently, this is a real risk. Uber plans to offer flight-hailing apps by 2020.) What if Echo Look leads to mental health issues for teenagers? Who bears responsibility for the consequences?”

For me, the answer to that last question is all of us. We should not rely on business and industry to make these decisions, nor expect our government to do it. We have to become involved in these issues at the public level.

Michael Sandel believes that the public is hungry for these issues, but we tend to shy away from them. They can be confrontational and divisive, and no one wants to make waves or be politically incorrect. That’s a mistake.

An image from the future: a student design fiction project that examined ubiquitous AR.

So while the last thing I want is a politician or CEO making these decisions, these two constituencies could do the responsible thing and create forums for these discussions so that the public can weigh in. To do anything less borders on arrogance.

Ultimately we will have to demand this level of thought, beginning with ourselves. This responsibility should start with anticipatory methodologies that examine the social, cultural and behavioral ramifications, and unintended consequences of what we create.

But we should not fight this alone. Corporations and governments concerned with appearing sensitive and proactive toward the environment and social justice need to add a new pillar to their edifice as responsible global citizens: humane technology.

 


The algorithms.

 

I am not a mathematician. Not even close. My son is a bit of a whiz when it comes to math, but not the kind of math you do in your head. His particular mathematical gift only works when he sees the equations. Still, I’d take that. Calculators give me fits. So the idea that I might decipher or write a functioning algorithm (the kind a computer could use) is tantamount to my turning water into wine.

Algorithms are all the buzz these days because they are the functioning math behind artificial intelligence (AI). How so? I will turn to Merriam-Webster online.

“: a procedure for solving a mathematical problem (as of finding the greatest common divisor) in a finite number of steps that frequently involves repetition of an operation; broadly: a step-by-step procedure for solving a problem or accomplishing some end especially by a computer: a search algorithm.”

I’ll throw away the first part of that definition because I don’t understand it. The second part is more my speed: a step-by-step procedure for solving a problem. I get that. As a designer, I do that all the time. The HowStuffWorks website is even better at explaining the purpose of algorithms. Essentially, an algorithm is a way for a computer to do something. Of course, as with most problems, there is more than one way to get from point A to point B, so computer programmers choose the best algorithm for the task.

What does an algorithm look like? Think of a flow chart or a decision tree. When you turn that into code (the language of computers), it might look like the sketch below.

Turning an algorithm into code.
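The image that originally ran here doesn’t survive in text, so here is a minimal sketch of my own in Python. It takes the dictionary’s own example, finding the greatest common divisor, and writes it as exactly the kind of step-by-step, repeat-an-operation procedure the definition describes.

```python
# A step-by-step procedure that "frequently involves repetition of an
# operation": Euclid's algorithm for the greatest common divisor.
def gcd(a, b):
    # Repeat one simple operation until there is nothing left to do.
    while b != 0:
        a, b = b, a % b  # replace (a, b) with (b, remainder of a / b)
    return a  # the last nonzero value is the greatest common divisor

print(gcd(48, 36))  # prints 12
```

Each pass through the loop is one box in the flow chart, and the while test is the decision diamond. That is all an algorithm is.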

You may already know all this, but I didn’t. Not really. I use the term algorithm all the time to describe the technology and process behind AI, but it always helps me to break these ideas down to their parts.

With all that out of the way, this week on the Futurism.com website, there was an article that discussed Ray Kurzweil’s theory that our brains contain a master algorithm inside our neocortex. It is that algorithm that enables us to handle pattern recognition and all the vastly complex nuance that our brains process every day. Referencing Kurzweil, Futurism stated that,

“… the brain’s neocortex — that part of the brain that’s responsible for intelligent behavior — consists of roughly 300 million modules that recognize patterns. These modules are self-organized into hierarchies that turn simple patterns into complex concepts. Despite neuroscience advancing by leaps and bounds over the years, we still haven’t quite figured out how the neocortex works.”

But, according to Kurzweil, these multiple modules “all have the same algorithm.”
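To make that claim concrete, here is a deliberately toy sketch of my own (an illustration of the idea only, not Kurzweil’s model, and nothing close to real neuroscience). Every module runs the same matching algorithm and differs only in the pattern it has learned; modules lower in the hierarchy feed the ones above, so simple patterns compose into a more complex concept.

```python
# Toy sketch: identical pattern-recognition modules, differing only in
# the pattern each one has learned, arranged in a simple hierarchy.
def make_module(pattern):
    # Every module runs the same algorithm; only its pattern differs.
    def recognize(inputs):
        parts = pattern.split("+")
        return pattern if all(p in inputs for p in parts) else None
    return recognize

# Low-level modules recognize strokes; a higher-level module recognizes
# the letter "T" as a combination of the strokes reported below it.
vertical = make_module("|")
crossbar = make_module("-")
letter_t = make_module("|+-")

strokes = [m(["|", "-"]) for m in (vertical, crossbar)]  # -> ["|", "-"]
print(letter_t(strokes))  # -> "|+-": simple patterns become a concept
```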

Presumably, when we figure that out, we will be able to create an AI that thinks like a human, or better than a human. Hold that thought.

On another part of the web was a story from FastCoDesign that asked, “What’s The Next Great Art Movement? Ask This Neural Network.” FastCo interviewed Ahmed Elgammal, a researcher at Rutgers University who is getting AI (using algorithms) to create masterpieces after studying all the major art movements through history and how they evolve. His objective is to have the AI come up with the next major art movement. The art is, well, not good art. How do I know? I create art, I’ve studied art, and I’ve even sold art, so I know more about art than I do about, say, math. The art that Elgammal’s AI generates is intriguing, but it lacks that certain something that tells you it’s art. I think it might be a human thing; it is still something you can recognize.

So if you are still holding on to that earlier thought about algorithms and how we are working to perfect them, we could make the leap that a better-functioning AI might fool us at some point, and we wouldn’t be able to tell human art from the AI variety. There are a lot of people working on these types of things, and there are billions of dollars going toward the research.

Now I’m going to ask a stupid question. Why do we need an AI to tell us what the next movement in art is or should be? Are humans defective in this area? Couldn’t we just wait and see, or are we too impatient? Perhaps we have grown tired of creating art. If you know, please share.

Not to take anything away from Ray Kurzweil, but I could ask the same question of AI. I assume that we could use an AI so far above our thinking that it can help us solve problems better than we could on our own. But if that AI is thinking so far beyond us, I’m not sure whether it would help us create better solutions or whether we would simply abdicate thinking to it. There’s a real danger of that, you know. Maybe thinking is overrated.

The question keeps coming up. Do we make things to help us flourish or do we make things because we can?

Ray Kurzweil: There’s a Blueprint for the Master Algorithm in Our Brains

 


An example of impending convergence.

 

The IBM Research Alliance and partners have announced this week that they have developed “…an industry-first process to build silicon nanosheet transistors that will enable 5 nanometer (nm) chips – achieving a scale of 30 billion switches on a fingernail-sized chip that will deliver significant power and performance enhancements over today’s state-of-the-art 10nm chips.”

Silicon nanosheet transistors at 5nm

Along with this new development come, of course, promises that the technology

“…can deliver 40 percent performance enhancement at fixed power, or 75 percent power savings at matched performance. This improvement enables a significant boost to meeting the future demands of artificial intelligence (AI) systems, virtual reality and mobile devices.”

That’s a lot of tech-speak, but essentially it means your computing will happen faster, your devices will be more powerful, and their batteries will last longer.

In a previous blog, I discussed the nanometer idea.

“A nanometer is very small. Nanotech concerns itself with creations that exist in the 100nm range and below, roughly 7,500 times smaller than a human hair. In the Moore’s Law race, nanothings are the next frontier in cramming data onto a computer chip, or implanting them into our brains or living cells.”
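A quick back-of-the-envelope check on that scale, using my own arithmetic and the commonly cited assumption that a human hair is roughly 75 micrometers (75,000 nm) wide:

```python
hair_nm = 75_000     # assumption: a human hair is ~75 micrometers wide
print(hair_nm / 10)  # 7500.0  -> a 10 nm feature is ~7,500 times smaller than a hair
print(hair_nm / 5)   # 15000.0 -> a 5 nm feature is ~15,000 times smaller than a hair
```

So the “7,500 times smaller” in the quote corresponds to a feature of around 10 nm; the new 5 nm chips roughly double that ratio.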

Right now, IBM and their partners see this new development as a big plus to the future of their cognitive systems. What are cognitive systems?

IBM can answer that:

“Humans are on the cusp of augmenting their lives in extraordinary ways with AI. At IBM Research Labs around the globe, we envision and develop next-generation systems that work side-by-side with humans, accelerating our ability to create, learn, make decisions and think. We also architect the future of Watson, which has evolved from an IBM Research project to the world’s first and most-advanced AI platform.”

So it’s Watson and lots of other AIs that may see the biggest benefits from this new tech. With smaller, faster, more efficient chips, AI can live a more robust life inside your phone or another device. But thinking “phone” is probably thinking way too big. Think of something much smaller but just as powerful.

Of course, every new technology comes with promises.

“Whether exploring new technical capabilities, collaborating on ethical practices or applying Watson technology to cancer research, financial decision-making, oil exploration or educational toys, IBM Research is shaping the future of AI.”

It’s all about AI and how we can augment “our lives in extraordinary ways.” Assuming that everyone plays nice, this is another example of technology poised to do great things for humankind. Undoubtedly, micro-sized AI could also be used for all sorts of nefarious purposes, so let’s hope that the “ethical practices” part of their research is getting equal weight.

The question we have yet to ask is whether a faster, smaller, more powerful, all-knowing, steadily accelerating AI is something we truly need. This is a debate worth having. In the meantime, the 5 nm chip is an excellent example of how a breakthrough technology awaits application by others for a myriad of purposes, advancing them all, in particular ways, by leaps and bounds. Who are these others? And what will they do next?

The right thing to do. Remember that idea?


On utopia and dystopia. Part 2.

From now on we paint only pretty pictures. Get it?

A couple of blarticles (blog-like articles) caught my eye this week. Interestingly, the two blarticles reference the same work. There was a big brouhaha a couple of years ago about how dystopian science fiction, and design fiction with dystopian themes, was somehow bad for us and people were getting sick of it. Based on the most recent lists of bestselling books and films, that no longer seems to be the case. Nevertheless, some science fiction writers, like Cory Doctorow (a fine author and Hugo winner), think that more utopian futures might be better at influencing public policy. As he wrote in Boing Boing earlier this month,

“Science fiction writers have a long history of intervening/meddling in policy, but historically this has been in the form of right-wing science fiction writers…”

Frankly, I have no idea what this has to do with politics, as there are certainly more left-leaning authors and filmmakers in Hollywood than their right-wing counterparts. He continues:

“But a new, progressive wing of design fiction practicioners [sic] are increasingly involved in policy questions…”

Doctorow’s article cites a long piece in Slate by the New America Foundation’s Kevin Bankston. Bankston says,

“…a stellar selection of 64 bestselling sci-fi writers and visionary filmmakers, has tasked itself with imagining realistic, possible, positive futures that we might actually want to live in—and figuring out [how] we can get from here to there.”

That’s great because, as I said, I am all about making alternative futures legible for people to consider and contemplate. In the process, however, I don’t think we should give dystopia short shrift. The problem with utopias is that they tend to be prescriptive; in other words, “This is a better future because I say so.”

The futures I conjure up are neither utopian nor dystopian, but I do try to surface real concerns so that people can decide for themselves, kind of like a democracy. History has proven that, regardless of our utopian ideals, we more often than not mess things up. I don’t want design fiction to be progressive, liberal, conservative, or right-wing, and I don’t think it should be the objective of science fiction or entertainment to help shape public policy, especially when there is an obvious political purpose. It’s one thing to make alternative futures legible, another to shove them at us.

As long as it’s fiction and entertaining, utopias are great, but let’s not kid ourselves: utopia, and to some extent dystopia, are individual perspectives. Frankly, I don’t want someone telling me that one future is better for me than another. In fact, that almost borders on dystopia in my thinking.

I’m not sure whether Bruce Sterling was answering Cory Doctorow’s piece, but Sterling’s stance on the issue is sharper and more insightful. Sterling is acutely aware that today is the focus: we look at futures, and we realize there are steps we need to take today to make tomorrow better. I recommend his post. Here are a couple of choice clips:

“*The “better future” thing is jam-tomorrow and jam-yesterday talk, so it tends to become the enemy of jam today. You’re better off reading history, and realizing that public aspirations that do seem great, and that even meet with tremendous innovative success, can change the tenor of society and easily become curses a generation later. Not because they were ever bad ideas or bad things to aspire to or do, but because that’s the nature of historical causality. Tomorrow composts today.”

“*If you like doing incredible things, because you’re of a science fictional temperament, then you should frankly admit your fondness for the way-out and the wondrous, and not disingenuously pretend that it’s somehow bound to improve the lot of the mundanes.”

Prettier pictures are not going to save us. Most of the world needs a wake-up call, not another dream.

In my humble opinion.

 

How science fiction writers’ “design fiction” is playing a greater role in policy debates

Various sci-fi projects allegedly creating a better future


On utopia and dystopia. Part 1.

A couple of interesting articles cropped up in the past week or so coming out of the WIRED Business Conference. The first was an interview with Jennifer Doudna, a pioneer of CRISPR-Cas9, the gene-editing technique that makes editing DNA nearly as simple as splicing a movie together. That is, if you’re a geneticist. According to the interview, most of this technology is in use in crop design, for things like longer-lasting potatoes or wheat that doesn’t mildew. But Doudna knows that this is a potential Pandora’s box.

“In 2015, Doudna was part of a broad coalition of leading biologists who agreed to a worldwide moratorium on gene editing to the “germ line,” which is to say, edits that get passed along to subsequent generations. But it’s legally non-binding, and scientists in China have already begun experiments that involve editing the genome of human embryos.”

Crispr May Cure All Genetic Disease—One Day

Super-babies are just one of the potential ways to misuse CRISPR. I blogged a longer and more diabolical list a couple of years ago.

Meddling with the primal forces of nature.

In her recent interview, though, Doudna focused on the more positive effects on farming: things like rice and tomatoes.

You may not immediately see the connection, but there was a related story from the same conference, where WIRED interviewed Jonathan Nolan and Lisa Joy, co-creators of the HBO series Westworld. If you haven’t seen Westworld, I recommend it, if only for Anthony Hopkins’ performance. As far as I’m concerned, Anthony Hopkins could read the phone book, and I would be spellbound.

At any rate, the article quotes:

“The first season of Westworld wasted no time in going from “hey cool, robots!” to “well, that was bleak.” Death, destruction, android torture—it’s all been there from the pilot onward.”

Which pretty much sums it up. According to Nolan, “We’re inventing cautionary tales for ourselves…”

“And Joy sees Westworld, and sci-fi in general, as an opportunity to talk about what humanity could or should do if things start to go wrong, especially now that advancements in artificial intelligence technologies are making things like androids seem far more plausible than before. “We’re leaping into the age of the unfathomable, the time when machines [can do things we can’t],”

Joy said.

Westworld’s Creators Know Why Sci-Fi Is So Dystopian

To me, this sounds familiar. It is the essence of my particular brand of design fiction. I don’t always set out to make it dystopian, but if we look at the way things naturally evolve, virtually every technology once envisioned as a benefit to humankind ends up misused by someone. To look at any potentially transformative tech and not ask, “Transform into what?” is simply irresponsible. We love to sell our ideas on their promise of curing disease, saving lives, and ending suffering, but the technologies we are designing today have epic downsides that many technologists do not even understand. Misuse happens so often that I’ve begun to see us as reckless if we don’t anticipate these repercussions in advance. It’s the subject of a new paper that I’m working on.

In the meantime, it’s important that we pay attention and demand that others do, too.

There’s more from the science fiction world on utopias vs. dystopias, and I’ll cover that next week.

 

 


An AI as President?

 

Back on May 19th, before I went on holiday, I promised to comment on an article that appeared that week advocating that we would be better off with artificial intelligence (AI) as President of the United States. Joshua Davis authored the piece, “Hear Me Out: Let’s Elect an AI as President,” for the business section of WIRED online. Let’s start out with a few quotes.

“An artificially intelligent president could be trained to maximize happiness for the most people without infringing on civil liberties.”

“Within a decade, tens of thousands of people will entrust their daily commute—and their safety—to an algorithm, and they’ll do it happily…The increase in human productivity and happiness will be enormous.”

Let’s start with the word happiness. What is that, anyway? I’ve seen it around in several discourses about the future: somehow we have to start focusing on human happiness above all things. But what makes me happy and what makes you happy may very well be different things. Then there is the frightening idea that it is the job of government to make us happy! There are a lot of folks out there who believe that the government should give us a guaranteed income, pay for our healthcare, and now, apparently, also make us happy. If you haven’t noticed from my previous blogs, I am not a progressive. If you believe that government should undertake the happy challenge, you had better hope that its idea of happiness coincides with your own. Gerd Leonhard, a futurist whose work I respect, says that there are two types of happiness: the first is hedonic (pleasure), which tends to be temporary; the other is eudaimonic happiness, which he defines as human flourishing.1 I prefer the latter, as it is likely to be more meaningful. Meaning is crucial to well-being and purpose in life. I believe that we should be responsible for our own happiness. God help us if we leave it up to a machine.

This brings me to my next issue with this insane idea. Davis suggests that simply by not driving, we will see an enormous increase in human productivity and happiness. According to the website Overflow Data,

“Of the 139,786,639 working individuals in the US, 7,000,722, or about 5.01%, use public transit to get to work according to the 2013 American Communities Survey.”

Are those 7 million working individuals who don’t drive happier and more productive? The survey should have asked, but I’m betting the answer is no. Davis also assumes that everyone will be able to afford an autonomous vehicle. Maybe providing every American with an autonomous vehicle is also the job of the government.

Where I agree with Davis is that we will probably abdicate our daily commute to an algorithm and do it happily. Maybe this is the most disturbing part of his argument. As I am fond of saying, we are sponges for technology, and we often adopt new technology without so much as a thought toward the broader ramifications of what it means to our humanity.

There are sober people out there advocating that we must start to abdicate our decision-making to algorithms because we have too many decisions to make. They are concerned that the current state of affairs is simply too painful for humankind. If you dig into the rationale that these experts are using, many of them are motivated by commerce. Already Google and Facebook and the algorithms of a dozen different apps are telling you what you should buy, where you should eat, who you should “friend” and, in some cases, what you should think. They give you news (real or fake), and they tell you this is what will make you happy. Is it working? Agendas are everywhere, but very few of them have you in the center.

As part of his rationale, Davis cites AI’s proven ability to beat the world’s Go champions over and over again and to find melanomas better than board-certified dermatologists.

“It won’t be long before an AI is sophisticated enough to implement a core set of beliefs in ways that reflect changes in the world. In other words, the time is coming when AIs will have better judgment than most politicians.”

That seems like grounds to elect one as President, right? In fact, it is just another way for us to take our eye off the ball, to subordinate our autonomy to more powerful forces in the belief that technology will save us and make us happier.

Back to my previous point, that’s what is so frightening. It is precisely the kind of argument that people buy into. What if the new AI President decides that we will all be happier if we’re sedated, and then using executive powers makes it law? Forget checks and balances, since who else in government could win an argument against an all-knowing AI? How much power will the new AI President give to other algorithms, bots, and machines?

If we are willing to give up the process of purposeful work to make a living wage in exchange for a guaranteed income, to subordinate our decision-making to have “less to think about,” to abandon reality for a “good enough” simulation, and believe that this new AI will be free of the special interests who think they control it, then get ready for the future.

1. Leonhard, Gerd. Technology vs. Humanity: The Coming Clash Between Man and Machine. United Kingdom: Fast Future, 2016. p. 112. Print.
