Tag Archives: predictive algorithms

How should we talk about the future?


Imagine that there are two camps. One camp holds high confidence that the future will be manifestly bright and promising in all aspects of human endeavor. Our health will dramatically improve as we eradicate disease and possibly even death. Artificial Intelligence will be at our beck and call to make our tough decisions, order our lives, fight our wars, watch over us, and keep us safe. Hence, it is full speed ahead. The positives outweigh the negatives. Any missteps will be but a minor hiccup, and we’ll cross those bridges when we come to them.

The second camp believes that many of these promises are achievable. But its members also see strong evidence that technology is indeed moving exponentially, and that we are at a point on the curve where what many experts once categorized as impossible or a “long way off” is now knocking at our door.

Kurzweil’s Law of Accelerating Returns is proving remarkably accurate. Sure, we adapted from the horse and buggy to the automobile, and from there to air travel, to an irritatingly resilient nuclear threat, to computers, to smartphones, and to DNA sequencing. But each of these changes arrived more rapidly than the one before it.

“As exponential growth continues to accelerate into the first half of the twenty-first century,” [Kurzweil] writes, “it will appear to explode into infinity, at least from the limited and linear perspective of contemporary humans.”1

The second camp sees this rapid-fire proliferation as alarming. Not because we will get to utopia faster, but because we will be standing in the midst of a host of disruptive technologies all coming to fruition at the same time without the benefit of meaningful oversight or the engagement of our societies.

I am in the second camp.

Last week, I talked about genetic engineering. The designer-baby question was always pushed aside as a long way off. Not anymore. And that’s just one change. Our privacy is being severely compromised through the “big data” collected from seemingly innocent pastimes such as Facebook. According to security technologist Bruce Schneier,

“Facebook can predict race, personality, sexual orientation, political ideology, relationship status, and drug use on the basis of Like clicks alone. The company knows you’re engaged before you announce it, and gay before you come out—and its postings may reveal that to other people without your knowledge or permission. Depending on the country you live in, that could merely be a major personal embarrassment—or it could get you killed.”2

Facebook is just one of the seemingly benign things we do every day. By now, most of us consider spending 75 percent of our day on our smartphones harmless, too, though we would also have to agree that it has changed us personally, behaviorally, and societally. And while the societal outcry against designer babies has been noticeable since last week’s stories about CRISPR-Cas9 gene editing of human embryos, how long will it be before we accept it as the norm, and feel pressure in our own families to participate to stay competitive, or maybe even just to stay insured?

The fact is that we like to think that we can adapt to anything. To some extent, we pride ourselves on this resilience. Unfortunately, that seems to suggest that we are also powerless to affect these technologies and that we have no say in when, if, or whether we should make them in the first place. Should we be proud of the fact that we are adapting to a complete lack of privacy, to the likelihood of terrorism or being replaced by an AI? These are my questions.

So I am encouraged when others also raise these questions. Recently, the tech media, which seems perpetually enamored of folks like Mark Zuckerberg and Elon Musk, called Zuckerberg a “bad futurist” because of his over-optimistic view of the future.

The article came from the Huffington Post’s Rebecca Searles. According to Searles,

“Elon Musk’s doomsday AI predictions aren’t “irresponsible,” but Mark Zuckerberg’s techno-optimism is.”3

Zuckerberg, in a podcast, said,

“…people who are arguing for slowing down the process of building AI, I just find that really questionable… If you’re arguing against AI, then you’re arguing against safer cars that aren’t going to have accidents and you’re arguing against being able to better diagnose people when they’re sick.”3

Technology hawks are always promising “safer” and “healthier” as their rationale for unimpeded acceleration. I’m sure that’s the rah-rah rationale for designer babies, too. Think of all the illnesses we will be able to breed out of the human race. Searles and I agree that negative outcomes deserve equally serious consideration, and not after they happen. As she aptly puts it,

“Tackling tech challenges with a build-it-and-see-what-happens approach (a la Zuckerberg’s former “move fast and break things” development mantra) just isn’t suitable for AI.”

The problem is that Zuckerberg is not alone, nor is last week’s Shoukhrat Mitalipov. Ultimately, this reality of two camps is the rationale behind my approach to design fiction. As you know, the objective of design fiction is to provoke. Promising utopia is rarely the tinder to fuel a provocation.

Let’s remember Charles Dickens’ story of Ebenezer Scrooge. The ghost of Christmas past takes him back in time where, for the first time, he sees the truth about his past. But this revelation does not change him. Then the ghost of Christmas present opens his eyes to everything around him that he is blind to in the present. Still, Scrooge is unaffected. And finally, the ghost of Christmas future takes him into the future, and it is here that Scrooge sees the days to come as “the way it will be” unless he changes something now.

Somehow, I think the outcome would have been different if that last ghost had said, “Don’t worry. You’ll adapt.”

Let’s not talk about the future in purely utopian terms, nor in total doom-and-gloom. The future will not be one or the other any more than the present day is. But let us not be blind to our infinite capacity to foul things up, to the potential of bad actors, or to the inevitability of unanticipated consequences. If we have any hope of meeting our future with the altruistic image of a utopian society, let us go forward with eyes open.


1. http://www.businessinsider.com/ray-kurzweil-law-of-accelerating-returns-2015-5

2. Bruce Schneier, “Data and Goliath: The Hidden Battles to Collect Your Data and Control Your World”

3. http://www.huffingtonpost.com/entry/mark-zuckerberg-is-a-bad-futurist_us_5979295ae4b09982b73761f0


What now?


If you follow this blog, you know that I like to say that the rationale behind design fiction—provocations that get us to think about the future—is to ask, “What if?” now so that we don’t have to ask “What now?”, then. This is especially important as our technologies begin to meddle with the primal forces of nature, where we naively anoint ourselves as gods and blithely march forward—because we can.

The CRISPR-Cas9 technology caught my eye almost exactly two years ago today, through a WIRED article by Amy Maxmen. I wrote about it then as an awesomely powerful tool for astounding progress for the good of humanity, and at the same time a slippery slope. As Maxmen stated,

“It could, at last, allow genetics researchers to conjure everything anyone has ever worried they would—designer babies, invasive mutants, species-specific bioweapons, and a dozen other apocalyptic sci-fi tropes.”

The article chronicles how, back in 1975, scientists and researchers got together at Asilomar because they saw the handwriting on the wall. They drew up a set of resolutions to make sure that one day the promise of Bioengineering (still a glimmer in their eyes) would not get out of hand.

Forty years later, what was only a glimmer had become a reality. So, in 2015, some of these researchers came together again to discuss the implications of a new technique called CRISPR-Cas9. It was just a few years after Jennifer Doudna and Emmanuelle Charpentier figured out this elegant tool for genome editing. Again from Maxmen,

“On June 28, 2012, Doudna’s team published its results in Science. In the paper and an earlier corresponding patent application, they suggest their technology could be a tool for genome engineering. It was elegant and cheap. A grad student could do it.”

In 2015, it was Doudna herself who called the meeting, this time in Napa, to discuss the ethical ramifications of CRISPR. Their biggest concern was what they call germline modifications—the stuff that gets passed on from generation to generation, changing the human species forever. In September of 2015, Doudna gave a TED Talk asking the scientific community to pause and discuss the ethics of this new tool before rushing in. On the heels of that, the US National Academy of Sciences said it would work on a set of “recommendations” for researchers and scientists to follow. No laws, just recommendations.

Fast forward to July 26, 2017. MIT Technology Review reported:

“The first known attempt at creating genetically modified human embryos in the United States has been carried out by a team of researchers in Portland, Oregon… Although none of the embryos were allowed to develop for more than a few days—and there was never any intention of implanting them into a womb—the experiments are a milestone on what may prove to be an inevitable journey toward the birth of the first genetically modified humans.”

MIT’s article was thin on details because the actual paper that delineated the experiment was not yet published. Then, this week, it was. This time it was, indeed, a germline objective.

“…because any genetically modified child would then pass the changes on to subsequent generations via their own germ cells—the egg and sperm.”(ibid).

All this was led by fringe researcher Shoukhrat Mitalipov of Oregon Health and Science University, and WIRED was quick to provide more info, but in two different articles.

The first of these stories appeared last Friday and gave more specifics on Mitalipov than the actual experiment.

“the same guy who first cloned embryonic stem cells in humans. And came up with three-parent in-vitro fertilization. And moved his research on replacing defective mitochondria in human eggs to China when the NIH declined to fund his work. Throughout his career, Mitalipov has gleefully played the role of mad scientist, courting controversy all along the way (sic).”

In the second article, we discover what the mad scientist was trying to do. In essence, Mitalipov demonstrated a highly efficient replacement of mutated genes like MYBPC3, which is responsible for a heart condition called “hypertrophic cardiomyopathy that affects one in 500 people—the most common cause of sudden death among young athletes.” Highly efficient means that in 42 out of 58 attempts, the problem gene was removed and replaced with a normal one. Mitalipov believes that he can get this to 100%. This means that fixing genetic mutations can be done successfully and maybe even become routine in the near future. But WIRED points out that

“would require lengthy clinical trials—something a rider in the current Congressional Appropriations Act has explicitly forbidden the Food and Drug Administration from even considering.”
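For what it’s worth, the “highly efficient” claim above is easy to check with a little arithmetic (the figures are the ones WIRED reported; nothing else here is from the study):

```python
# Quick arithmetic on the figures above: 42 of 58 embryos had the mutant
# MYBPC3 copy successfully removed and replaced.

corrected, attempts = 42, 58
rate = corrected / attempts
print(f"Repair rate: {rate:.1%}")  # prints "Repair rate: 72.4%"
```

A long way from the 100% Mitalipov believes he can reach, but remarkable nonetheless.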

Ah, but this is not a problem for our fringe mad scientist.

“Mitalipov said he’d have no problem going elsewhere to run the tests, as he did previously with his three-person IVF work.”

Do we see a pattern here? One surprising thing that the study revealed was that,

“Of the 42 successfully corrected embryos, only one of them used the supplied template to make a normal strand of DNA. When Crispr cut out the paternal copy—the mutant one—it left behind a gap, ready to be rebuilt by the cell’s repair machinery. But instead of grabbing the normal template DNA that had been injected with the sperm and Crispr protein, 41 embryos borrowed the normal maternal copy of MYBPC3 to rebuild its gene.”

In other words, the cell said, “Thanks for your stinking code, but we’ll handle this.” It appears as though cellular repair may have a mission plan of its own. That’s the mysterious part, the part that reminds us that there is still something miraculous going on behind the scenes. Mitalipov thinks he and his team can force these arrogant cells to follow instructions.

So what now? With this, we have more evidence that guidelines and recommendations, clear heads, and cautionary voices are not enough to stop scientists and researchers on the fringe, governments with dubious ethics, or whoever else might want to give things a whirl.

That makes noble efforts like Asilomar in 1975, a similar conference some years ago on nanotechnology, and one earlier this year on artificial intelligence simply that: noble efforts. Why do these conferences occur in the first place? Because scientists are genuinely worried that we’re going to extinct ourselves if we aren’t careful. But technology is racing down the autobahn, folks, and we can’t expect the people who stand to become billionaires from their discoveries to be the same people policing their actions.

And this is only one of the many transformative technologies looming on the horizon. While everyone is squawking about the Paris Accords, why don’t we marshal some of our righteous indignation and pull the world together to agree on some meaningful oversight of these technologies?

We’ve gone from “What if?” to “What now?” Are we going to avoid “Oh, shit!”?

1. https://www.wired.com/2015/07/crispr-dna-editing-2/?mbid=nl_72815

2. http://wp.me/p7yvqL-mt

3. https://www.technologyreview.com/s/608350/first-human-embryos-edited-in-us/?set=608342

4. https://www.wired.com/story/scientists-crispr-the-first-human-embryos-in-the-us-maybe/?mbid=social_twitter_onsiteshare

5. https://www.wired.com/story/first-us-crispr-edited-embryos-suggest-superbabies-wont-come-easy/?mbid=nl_8217_p9&CNDID=49614846


What did one AI say to the other AI?

I’ve asked this question before, but this time there is an entirely new answer.

We may never know.

Based on a plethora of recent media on artificial intelligence (AI), not only are there a lot of people working on it, but many of those on the leading edge are also concerned with what they don’t understand in the midst of their ominous new creation.

Amazon, DeepMind/Google, Facebook, IBM, and Microsoft teamed up to form the Partnership on AI.(1) Their charter talks about sharing information and being responsible. It includes all of the proper buzzwords for a technosocial contract.

“…transparency, security and privacy, values and ethics, collaboration between people and AI systems, interoperability of systems, and of the trustworthiness, reliability, containment, safety, and robustness of the technology.”(2)

They are not alone in this concern, as the EU(3) is also working on AI guidelines and a set of rules on robotics.

Some of what makes them all a bit nervous is the way AI learns: the complexity of neural networks and the inability to go back and see how the AI arrived at its conclusion. In other words, how do we know that its recommendation is the right one? Add to that list the discovery that AIs working together can create their own languages—languages we don’t speak or understand. In one case, at Facebook, researchers saw this happening and stopped it.

For me, it’s a little disconcerting that Facebook, a social media app, is one of the corporations leading the charge into AI research. That’s a broad topic for another blog, but their underlying objective is to market to you. That’s how they make their money.

To be fair, that is at least part of the motivation for Amazon, DeepMind/Google, IBM, and Microsoft, as well. The better they know you, the more stuff they can sell you. Of course, there are enormous benefits to medical research, too. Such advantages are almost always what these companies talk about first. AI will save your life, cure cancer, and prevent crime.

So, it is somewhat encouraging to see that these companies on the forefront of AI breakthroughs are also acutely aware of how AI could go terribly wrong. Hence we see wording from the Partnership on AI, like

“…Seek out, support, celebrate, and highlight aspirational efforts in AI for socially benevolent applications.”

The key word here is benevolent. But the clear objective is to keep the dialog positive, and

“Create and support opportunities for AI researchers and key stakeholders, including people in technology, law, policy, government, civil liberties, and the greater public, to communicate directly and openly with each other about relevant issues to AI and its influences on people and society.”(2)

I’m reading between the lines, but it seems like the issue of how AI will influence people and society is more of an obligatory statement intended to demonstrate compassionate concern. It’s coming from the people who see huge commercial benefit from the public being at ease with the coming onslaught of AI intrusion.

In their long list of goals, the “influences” on society don’t seem to be a priority. For example, should they discover that a particular AI has a detrimental effect on people, that it makes their civil liberties less secure, would they stop? Probably not.

At the rate that these companies are racing toward AI superiority, the unintended consequences for our society are not a high priority. While these groups are making sure that AI does not decide to kill us, I wonder if they are also looking at how AI will change us, and whether those changes are a good thing.

(1) https://www.fastcodesign.com/90132632/ai-is-inventing-its-own-perfect-languages-should-we-let-it

(2) https://www.partnershiponai.org/#

(3) http://www.europarl.europa.eu/sides/getDoc.do?pubRef=-//EP//NONSGML%2BCOMPARL%2BPE-582.443%2B01%2BDOC%2BPDF%2BV0//EN


Power sharing?

Just to keep you up to speed: in the race toward superintelligence and ubiquitous AI, everything is on schedule or ahead of schedule.

If you read this blog, or you are paying attention at any level, then you know the fundamentals of AI. But for those of you who don’t, here are the basics. Artificial intelligence comes from processing and analyzing data. Big data. Programmers feed a gazillion linked-up computers (CPUs) with algorithms that can sort this data and make predictions. This process is what is at work when the Google search engine suggests what you are about to key into the search field. These are called predictive algorithms.

If you want to look at pictures of cats, then someone has to task the CPUs with learning what a cat looks like as opposed to a hamster, then scour the Internet for pictures of cats and deliver them to your search. The process of teaching the machine what a cat looks like is called machine learning. There is also an algorithm that watches your online behavior. That’s why, after checking out sunglasses online, you start to see a plethora of ads for sunglasses on just about every page you visit. Similar algorithms can predict where you will drive today, and when you are likely to return home.

There is AI that knows your exercise habits and a ton of other physiological data about you, especially when you’re sharing your Fitbit or other wearable data with the cloud. Insurance companies are extremely interested in this data, so that they can give discounts to “healthy” people and penalize the not-so-healthy. Someday they might also monitor other “behaviors” that they deem not in your best interests (or theirs). Someday, especially if we have a “single-payer” health care system (aka government healthcare), this data may be required before you are insured.

Before we go too far into the dark side (which is vast and deep), AI can also search all the cells in your body, identify which ones are dangerous, and target them for elimination. AI can analyze a whole host of things that humans could overlook. It can put together predictions that could save your life.
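A predictive algorithm of the search-suggestion sort can be sketched in a few lines. This is a deliberately toy version (the function name, the data, and the frequency-ranking approach are all my own invention; real search engines use far richer models):

```python
# A toy predictive algorithm: rank likely completions of a search query
# by how often past queries began with the typed prefix.
# (Illustrative sketch only -- not how Google actually does it.)

from collections import Counter

def suggest(prefix, history, k=3):
    """Return the k most frequent past queries that start with `prefix`."""
    matches = Counter(q for q in history if q.startswith(prefix))
    return [query for query, _ in matches.most_common(k)]

history = [
    "pictures of cats", "pictures of cats", "pictures of hamsters",
    "pictures of cars", "picture frames",
]

# The most frequent match, "pictures of cats", ranks first.
print(suggest("pictures of", history))
```

The machine-learning step described above plays the same role at a grander scale: instead of counting query strings, the system learns statistical patterns (what a cat looks like, where you drive) and uses them to rank predictions.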

Google’s chips stacked up and ready to go. [Photo: WIRED]

Now, with all that AI background behind us: this past week, something called Google I/O went down. WIRED calls it Google’s annual State-of-the-Union address. There, Sundar Pichai unveiled something called TPU 2.0, or the Cloud TPU. This is something of a breakthrough because, in the past, the AI process that I just described—even though lightning fast and almost transparent—required all those CPUs, a ton of space (server farms), and gobs of electricity. Now, Google (and others) are packing this processing into chips. These are proprietary to Google. According to WIRED,

“This new processor is a unique creation designed to both train and execute deep neural networks—machine learning systems behind the rapid evolution of everything from image and speech recognition to automated translation to robotics…

…says Chris Nicholson, the CEO and founder of a deep learning startup called Skymind. “Google is trying to do something better than Amazon—and I hope it really is better. That will mean the whole market will start moving faster.”

Funny, I was just thinking that the market is not moving fast enough. I can hardly wait until we have a Skymind.

“Along those lines, Google has already said that it will offer free access to researchers willing to share their research with the world at large. That’s good for the world’s AI researchers. And it’s good for Google.”

Is it good for us?

This sets up another discussion (in 3 weeks) about a rather absurd opinion piece in WIRED about why we should have an AI as President. These things start out as absurd, but sometimes don’t stay that way.


Humanity is not always pretty.

The Merriam-Webster online dictionary, among several options, gives this definition for human: “[…]representative of or susceptible to the sympathies and frailties of human nature,” as in “human kindness” or “a human weakness.”

Then there is humanity, which can confer either the collective of humans, or “[…]the fact or condition of being human; human nature,” or benevolence, as in compassion and understanding. For the latter, it seems that we are eternal optimists when it comes to describing ourselves. Hence, we often refer to the humanity of man as one of our most redeeming traits. At the same time, if we query human nature we can get, “[…]ordinary human behavior, esp considered as less than perfect.” This is a diplomatic way of acknowledging that flaws are a characteristic of our nature. When we talk about our humanity, we presumptively leave out our propensity for greed, pride, and the other deadly sins. We like to think of ourselves as basically good.

If we are honest with ourselves, however, we know this is not always the case, and if we push the issue we would have to acknowledge that it is not even the case most of the time. Humanity is primarily driven by the kinds of things we don’t like to see in others but rarely see in ourselves. But this is supposed to be a blog about design and tech, isn’t it? So I should get to the point.

A recent article on Quartz, Sarah Kessler’s “Algorithms are failing Facebook. Can humanity save it?”, poses an interesting question, and one that I’ve raised in the past. We like to think that technology will resolve all of our basic human failings—somehow. Recognizing this, back in 1968 Stewart Brand introduced the first Whole Earth Catalog with,

“We are as gods and might as well get good at it.”

After almost 50 years, it seems justified to ask whether we’ve made any improvements whatsoever. The question is pertinent in light of Kessler’s article on the advent of Facebook Live. With this particular FB feature, you stream whatever video you want, and it goes out to the whole world instantly. Of course, we need this, right? And we need this now, right? Of course we do.

Like most of these whiz-bang technologies, it is designed to attract millennials with, “Wow! Cool.” But it is not a simple task. How would a company like Facebook police the potentially billions of feeds coming into the system? The answer is (as is increasingly the case) AI. Artificial intelligence. Algorithms will recognize and determine what is and is not acceptable to stream out to the world. And apparently, Zuck and company were pretty confident that they could pull this off.

Let’s get this thing online. [Photo: http://wersm.com]

Maybe not. Kessler notes that,

“According to a Wall Street Journal tally, more than 50 acts of violence, including murders, suicides, and sexual assault, have been broadcast over Facebook Live since the feature launched 13 months ago.”

Both articles tell how Facebook’s Mark Zuckerberg put a team on “lockdown” to rush the feature to market. What was the hurry, one might ask? And Kessler does ask.

“Let’s make sure there’s a humanitarian angle. Millennials like that.” [Photo: http://variety.com]

After these 13 months of spurious events, the tipping point came with a particularly heinous act that ended up circulating on FB Live for nearly 24 hours. It involved a 20-year-old Thai man named Wuttisan Wongtalay, who filmed himself flinging his 11-month-old daughter off the side of a building with a noose around her neck. Then, off-camera, he killed himself.

“In a status update on his personal Facebook profile, CEO Mark Zuckerberg, himself the father of a young girl, pledged that the company would, among other things, add 3,000 people to the team that reviews Facebook content for violations of the company’s policies.”

Note that the answer is not to remove the feature until things could be sorted out or to admit that the algorithms are not ready for prime time. The somewhat surprising answer is more humans.

Kessler, quoting the Wall Street Journal article states,

“Facebook, in a civic mindset, could have put a plan in place for monitoring Facebook Live for violence, or waited to launch Facebook Live until the company was confident it could quickly respond to abuse. It could have hired the additional 3,000 human content reviewers in advance.

But Facebook ‘didn’t grasp the gravity of the medium,’ an anonymous source familiar with Facebook’s Live’s development told the Wall Street Journal.”

Algorithms are code that helps machines learn. They look at a lot of data—say, pictures of guns—and then they learn to identify what a gun is. They are not particularly good at context. They don’t know, for example, whether your video is “Hey, look at my new gun!” or “Die, scumbag.”
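The context problem is easy to demonstrate with a deliberately naive sketch. Everything here is hypothetical (the blocklist, the function, the captions); real moderation systems are far more sophisticated, yet the underlying blindness to intent is the same:

```python
# A context-blind content filter: it can spot the word "gun," but it
# cannot tell a proud gun owner's caption from a threat.
# (A deliberately naive, hypothetical sketch -- not any real system.)

BLOCKLIST = {"gun", "weapon"}

def flags(caption):
    """Return the blocklisted words a caption contains, ignoring intent."""
    words = {w.strip(".,!?").lower() for w in caption.split()}
    return words & BLOCKLIST

benign = flags("Hey, look at my new gun!")
threat = flags("Die, scumbag, this gun is for you.")
print(benign == threat)  # True: the algorithm sees the two as identical
```

Keyword matching, and even statistical classifiers trained on far more data, score the surface of the content; judging intent is exactly the part that still needs humans.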

So in addition to algorithms, Zuck has decided that he will put 3,000 humans on the case. Nevertheless, writes Kessler,

“[…]they can’t solve Facebook’s problems on their own. Facebook’s active users comprise about a quarter of the world’s population and outnumber the combined populations of the US and China. Adding another 3,000 workers to the mix to monitor content simply isn’t going to make a meaningful difference. As Zuckerberg put it during a phone call with investors, “No matter how many people we have on the team, we’ll never be able to look at everything.”[Emphasis mine.]

So, I go back to my original question: We need this, right?

There are two things going on here. First is the matter of Facebook not grasping the gravity of the medium (which I see as inexcusable), and second is how the whole thing came around full circle. Algorithms are supposed to replace humans. Instead, we added 3,000 more jobs. Unfortunately, that wasn’t the plan. But it could have been.

Algorithms are undoubtedly here to stay, but not necessarily for every application, and humans are still better at interpreting human intent than machines are. All of this underscores my position from previous blogs: most companies, when the issue is whether they get to the top or stay there, will not police themselves. They’ll wait until it breaks and then fix it, or try to. The problem is that as algorithms get increasingly complicated, fixing them gets just as tricky.

People are working on this so that designers can see what went wrong, but the technology is not there yet.

And it is not just so that we can determine the difference between porn and breastfeeding. Algorithms are starting to make a lot of high-stakes decisions, in autonomous vehicles, autonomous drones, or autonomous (fill in the blank). Until the people who are literally racing each other to be first step back and ask the tougher questions, these types of unanticipated consequences will be commonplace, especially when prudent actions like stopping to assess are rarely considered. No one wants to stop and assess.

Kessler says it well,

“The combination may be fleeting—the technology will catch up eventually—but it’s also quite fitting given that so many of the problems Facebook is now confronting, from revenge porn to fake news to Facebook Live murders, are themselves the result of humanity mixing with algorithms.” [Emphasis mine.]

We can’t get past that humanity thing.



But nobody knows what better is.

South by Southwest, otherwise known as SXSW, calls itself a film and music festival and interactive media conference. It’s held every spring in Austin, Texas. Other than maybe the Las Vegas Consumer Electronics Show or San Diego’s Comic-Con, I can’t think of many conferences that generate as much buzz as SXSW. This year is no different. I will have blog fodder for weeks. Though I can’t speak to the film or music side, I’m sure they were scintillating. Under the category of interactive, most of the buzz is about technology in general, as tech gurus and futurists are always in attendance, along with celebs who align themselves with the future.

Once again at SXSW, Ray Kurzweil was on stage. Kurzweil is probably the one guy I quote most throughout this blog. So here we go again. Two tech sites caught my eye this week, reporting on Kurzweil’s latest prediction, which moves up the date of the Singularity from 2045 to 2029; that’s 12 years away. Since we are enmeshed in a world of exponentially accelerating technology, I have encouraged my students to start wrapping their heads around the idea of exponential growth. In our most recent project, it was a struggle just to embrace the idea of how, in only seven years, we could see transformational change. If Kurzweil is right about his latest prognostication, then 12 years could be a real stunner. In case you are visiting this blog for the first time, the Singularity to which Kurzweil refers is acknowledged as the point at which computer intelligence exceeds that of human intelligence: it will know more, anticipate more, and analyze more than any human is capable of. Nick Bostrom calls it the last invention we will ever need to make. We’ve already seen this to some extent with IBM’s Watson beating the pants off a couple of Jeopardy masters, and with Google’s DeepMind handily beating a Go genius at a game that most thought too complex for a computer to handle. Some refer to this “computer” as a superintelligence, and warn that we had better be designing the braking mechanism in tandem with the engine, or this smarter-than-us computer may outsmart us in unfortunate ways.

In an article in Scientific American, Northwestern University psychology professor Paul Reber says we are bombarded each day with about 2.5 exabytes of data, while the human brain can only store an estimated 2.5 petabytes (a petabyte is a million gigabytes). Of course, the bombardment will continue to increase. Another voice that emerges in this discussion is Rob High, IBM’s vice president and chief technology officer. According to the tech blog Futurism, High was part of a panel discussion at the American Institute of Aeronautics and Astronautics (AIAA) SciTech Conference 2017. High said,

“…we have a very desperate need for cognitive computing…The information being produced is far surpassing our ability to consume and make use of…”

On the surface, this seems like a compelling argument for faster, more pervasive computing. But since it is my mission to question otherwise compelling arguments, I want to ask whether we actually need to process 2.5 exabytes of information. It would appear that our existing technology has already turned on the firehose of data (did we give it permission?) and now it’s up to us to find a way to drink from it. To me, it sounds like we need a regulator, not a bigger gullet. I have observed that the traditional argument in favor of more, better, faster often comes wrapped in the package of help for humankind.
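For a sense of the scale mismatch, here is a quick back-of-the-envelope check. The figures are the estimates quoted above, not measurements of mine:

```python
# Back-of-the-envelope check of the figures cited above. Both numbers are
# estimates quoted in the article, not measurements.
PETABYTE = 10 ** 15  # bytes
EXABYTE = 10 ** 18   # bytes

daily_data = 2.5 * EXABYTE        # data produced per day (estimated)
brain_capacity = 2.5 * PETABYTE   # human memory capacity (estimated)

# One day's firehose equals a thousand times the brain's total storage.
ratio = daily_data / brain_capacity
print(f"Daily data is {ratio:,.0f} times the brain's estimated capacity")
```

By that arithmetic, every single day produces a thousand brains’ worth of data, which is the regulator argument in a nutshell.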

Rob High, again from the Futurism article, says,

“‘If you’re a doctor and you’re trying to figure out the best way to treat your patient, you don’t have the time to go read the latest literature and apply that knowledge to that decision,’ High explained. ‘In any scenario, we can’t possibly find and remember everything.’ This is all good news, according to High. We need AI systems that can assist us in what we do, particularly in processing all the information we are exposed to on a regular basis — data that’s bound to even grow exponentially in the next couple of years.”

From another Futurism article, Kurzweil uses similar logic:

“We’re going to be able to meet the physical needs of all humans. We’re going to expand our minds and exemplify these artistic qualities that we value.”

The other rationale that almost always becomes coupled with expanding our minds is that we will be “better.” No one, however, defines what “better” is. You could be a better jerk. You could be a better rapist or terrorist or megalomaniac. What are we missing, exactly, that we have to be smarter, or that Bach and Mozart are suddenly inferior? Is our quality of life that impoverished? And for those who are impoverished, how does this help them? And what about making us smarter? Smarter at what?

But not all is lost. On a more positive note, Futurism, in a third article (they were busy this week), reports,

“The K&L Gates Endowment for Ethics and Computational Technologies seeks to introduce the thoughtful discussion on the use of AI in society. It is being established through funding worth $10 million from K&L Gates, one of the United States’ largest law firms, and the money will be used to hire new faculty chairs as well as support three new doctoral students.”

Though I’m not sure we can consider this a regulator; it’s more like something to lessen the pain of swallowing.

Finally (for this week), back to Rob High,

“Smartphones are just the tip of the iceberg,” High said. “Human intelligence has its limitations and artificial intelligence is going to evolve in a lot of ways that won’t be similar to human intelligence. But, I think they will work best in the presence of humans.”

So, I’m more concerned with when artificial intelligence is not working at its best.


Disruption. Part 2.


Last week I discussed the idea of technological disruption. Essentially, disruptions are innovations that make fundamental changes in the way we work or live; in turn, these changes affect culture and behavior. Issues of design and culture are the stuff that interests me and my research: how easily and quickly our practices change as a result of the way we enfold technology. The advent of the railroad, mass-produced automobiles, radio, then television, the Internet, and the smartphone all qualify as disruptions.

Today, technology advances more quickly. Technological development was never linear, but because most of the tech advances of the last century sat at the bottom of the exponential curve, we didn’t notice them. New technologies under development right now are going to be realized more quickly (especially the ones with big funding), and because of convergence (the intermixing of unrelated technologies), their consequences will be less predictable.
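The bottom-of-the-curve effect can be sketched with toy numbers (purely illustrative; they model no particular technology): a capability that doubles every year from a small base looks sub-linear at first, then overwhelms steady progress.

```python
# Illustrative comparison: steady linear progress vs. annual doubling from a
# small base. Units are arbitrary; the shape of the curves is the point.
rows = []
for year in range(11):
    linear = 1.0 * year            # one unit of progress per year
    exponential = 0.1 * 2 ** year  # doubles every year, starting from 0.1
    rows.append((year, linear, exponential))
    print(f"year {year:2d}: linear={linear:5.1f}  exponential={exponential:7.1f}")

# Through year 5 the exponential still trails the straight line and looks
# unimpressive; by year 10 it is roughly ten times ahead.
```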

One of my favorite futurists is Amy Webb whom I have written about before. In her most recent newsletter, Amy reminds us that the Internet was clunky and vague long before it was disruptive. She states,

“However, our modern Internet was being built without the benefit of some vital voices: journalists, ethicists, economists, philosophers, social scientists. These outside voices would have undoubtedly warned of the probable rise of botnets, Internet trolls and Twitter diplomacy––would the architects of our modern internet have done anything differently if they’d confronted those scenarios?”

Amy inadvertently left out the design profession, though I’m sure she will reconsider after we chat. Indeed, the design profession is a key contributor to transformative tech, and design thinkers, along with ethicists and economists, can help to visualize and reframe future visions.

Amy thinks that the next transformation will be our voice,

“From here forward, you can be expected to talk to machines for the rest of your life.”

Amy is referring to technologies like Alexa, Siri, Google, Cortana, and something coming soon called Bixby. The voices of these technologies are, of course, only the window dressing for artificial intelligence. But she astutely points out that,

“…we also know from our existing research that humans have a few bad habits. We continue to encode bias into our algorithms. And we like to talk smack to our machines. These machines are being trained not just to listen to us, but to learn from what we’re telling them.”

Such a merger might just be the mix of any technology (name one) with human nature or the human condition: AI meets Mike, who lives across the hall. AI becoming acquainted with Mike may have been inevitable, but the fact that Mike happens to be a jerk was less predictable, and so the outcome is less predictable, too. The most significant disruptions of the future are going to come from the convergence of seemingly unrelated technologies. Sometimes innovation depends on convergence, like building an artificial human that will have to master a lot of different functions. Other times, convergence is accidental or at least unplanned. The engineers over at Boston Dynamics who are building those intimidating walking robots are focused on a narrower set of criteria than someone creating an artificial human; perhaps power and agility are their primary concerns. Then, in another lab, there are technologists working on voice stress analysis, and in another setting, researchers are looking to create an AI that can choose your wardrobe. Somewhere else, we are working on facial recognition, Augmented Reality, Virtual Reality, bio-engineering, medical procedures, autonomous vehicles, or autonomous weapons. So it’s a lot like Harry meets Sally: you’re not sure what you’re going to get or how it’s going to work.

Digital visionary Kevin Kelly thinks that AI will be at the core of the next industrial revolution. Place the prefix “smart” in front of anything, and you have a new application for AI: a smart car, a smart house, a smart pump. These seem like universally useful additions, so far. But now let’s add the same prefix to the jobs you and I do, like a doctor, lawyer, judge, designer, teacher, or policeman. (Here’s a possible use for that ominous walking robot.) And what happens when AI writes better code than coders and decides to rewrite itself?

Hopefully, you’re getting the picture. All of this underscores Amy Webb’s earlier concerns. The ‘journalists, ethicists, economists, philosophers, social scientists’ and designers are rarely in the labs where the future is taking place. Should we be doing something fundamentally different in our plans for innovative futures?

Side note: Convergence can happen in a lot of ways. The parent corporation of Boston Dynamics is X. I’ll use Wikipedia’s definition of X: “X, an American semi-secret research-and-development facility founded by Google in January 2010 as Google X, operates as a subsidiary of Alphabet Inc.”


Of autonomous machines.


Last week we talked about how converging technologies can sometimes yield unpredictable results. Among the most influential players in the development of new technology are DARPA and the defense industry. There is a lot of technological convergence going on in the world of defense. Let’s combine robotics, artificial intelligence, machine learning, bio-engineering, ubiquitous surveillance, social media, and predictive algorithms for starters. All of these technologies are advancing at an exponential pace. It’s difficult to take a snapshot of any one of them at a moment in time and predict where it might be tomorrow. When you start blending them, the possibilities become downright chaotic. With each step, it is prudent to ask whether there is any meaningful review. What are the ramifications of error as well as success? What are the possibilities for misuse? Who is minding the store? We can hope that there are answers to these questions that go beyond platitudes like “Don’t stand in the way of progress,” “Time is of the essence,” or “We’ll cross that bridge when we come to it.”

No comment.

I bring this up after having seen some unclassified documents on Human Systems and Autonomous Defense Systems (AKA autonomous weapons). (See a previous blog on this topic.) Links to these documents came from crowd-funded “investigative journalist” Nafeez Ahmed, publishing on a website called INSURGE intelligence.

One of the documents, entitled Human Systems Roadmap, is a slide presentation given to the National Defense Industrial Association (NDIA) conference last year. The list of agencies involved in that conference and in the rest of the documents cited reads like an alphabet soup of military and defense organizations that most of us have never heard of. There are multiple components to the pitch, but one that stands out is “Autonomous Weapons Systems that can take action when needed.” Autonomous weapons are those capable of making the kill decision without human intervention. There is also, apparently, some focused inquiry into “Social Network Research on New Threats… Text Analytics for Context and Event Prediction…” and “full spectrum social media analysis.” We could get all up in arms about this last feature, but recent incidents in places such as Benghazi, Egypt, and Turkey had a social networking component that enabled extreme behavior to be quickly mobilized. In most cases, the result was a tragic loss of life. In addition to sharing photos of puppies, social media, it seems, is also good at organizing lynch mobs. We shouldn’t be surprised that governments would want to know how to predict such events in advance. The bigger question is how we should intercede and whether that decision should be made by a human being or a machine.

There are lots of other aspects and lots more documents cited in Ahmed’s lengthy, albeit activistic, report, but the idea here is that rapidly advancing technology is enabling considerations that were previously held to be science fiction or just impossible. Will we reach the point where these systems are fully operational before we reach the point where we know they are totally safe? It’s a problem when technology grows faster than policy, ethics, or meaningful review. And it seems to me that it is always a problem when the race to make something work is more important than understanding the ramifications if it does.

To be clear, I’m not one of those people who think that anything and everything the military can conceive of is automatically wrong. We will never know how many catastrophes our national defense services have averted by their vigilance and technological prowess. It should go without saying that the bad guys will get more sophisticated in their methods and tactics, and if we are unable to stay ahead of the game, then we will need to get used to the idea of catastrophe. When push comes to shove, I want the government to be there to protect me. That being said, I’m not convinced that the defense infrastructure (or any part of the tech sector, for that matter) is as diligent in anticipating the repercussions of its creations as it is in getting them functioning. Only individuals can insist on meaningful review.




Paying attention.

I want to make a T-shirt. On the front, it will say, “7 years is a long time.” On the back, it will say, “Pay attention!”

What am I talking about? I’ll start with some background. This semester, I am teaching a collaborative studio with designers from visual communications, interior design, and industrial design. Our topic is Humane Technologies, and we are examining the effects of an Augmented Reality (AR) system that could be ubiquitous in 7 years. The process began with an immersive scan of the available information and emerging advances in AR, VR, IoT, human augmentation (HA) and, of course, AI. In my opinion, these are just a few of the most transformative technologies currently attracting the heaviest investment across the globe. And where the money goes, there goes the most rapid advancement.

A conversation starter.

One of the biggest challenges for the collaborative studio class (myself included) is to think seven years out. Although we read Kurzweil’s Law of Accelerating Returns, our natural tendency is to think linearly, not exponentially. One of my favorite Kurzweil illustrations is this:

“Exponentials are quite seductive because they start out sub-linear. We sequenced one ten-thousandth of the human genome in 1990 and two ten-thousandths in 1991. Halfway through the genome project, 7 ½ years into it, we had sequenced 1 percent. People said, “This is a failure. Seven years, 1 percent. It’s going to take 700 years, just like we said.” Seven years later it was done, because 1 percent is only seven doublings from 100 percent — and it had been doubling every year. We don’t think in these exponential terms. And that exponential growth has continued since the end of the genome project. These technologies are now thousands of times more powerful than they were 13 years ago, when the genome project was completed.”1
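Kurzweil’s arithmetic is easy to verify. This toy calculation (it models only the doubling claim, not sequencing itself) confirms that seven annual doublings carry 1 percent past 100 percent:

```python
from math import ceil, log2

# Start at 1 percent complete and double every year:
# 1 -> 2 -> 4 -> 8 -> 16 -> 32 -> 64 -> 128.
percent, doublings = 1.0, 0
while percent < 100:
    percent *= 2
    doublings += 1

print(doublings)  # 7 doublings carry 1 percent past 100 percent

# The closed form agrees: ceil(log2(100 / 1)) doublings are needed.
assert doublings == ceil(log2(100 / 1))
```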

So when I hear a policymaker say, “We’re a long way from that,” I cringe. We’re not a long way away from that. The iPhone was introduced on June 29, 2007, not quite ten years ago. The ripple effects from that little technological marvel are hard to catalog. With the smartphone, we have transformed everything from social and behavioral issues to privacy and safety. As my students examine the next possible phase of our thirst for the latest and greatest, AR (and its potential for smartphone-like ubiquity), I want them to ask questions about the supporting systems, along with the social and ethical repercussions of these transformations. At the end of it all, I hope that they will walk away with an appreciation for paying attention to what we make and why. For example, why would we make a machine that would take away our job? Why would we build a superintelligence? More often than not, I fear the answer is because we can.

Our focus on the technologies mentioned above is just a start. There are more than these, and we shouldn’t forget things like precise genetic engineering techniques such as CRISPR/Cas9 Gene Editing, neuromorphic technologies such as microprocessors configured like brains, the digital genome that could be the key to disease eradication, machine learning, and robotics.

Though they may sound innocuous by themselves, they each have gigantic implications for disruptions to society. The wild card in all of these is how they converge with each other and the results that no one anticipated. One such mutation would be when autonomous weapons systems (AI + robotics + machine learning) converge with an aggregation of social media activity to predict, isolate and eliminate a viral uprising.

From recent articles and research by the Department of Defense, this is no longer theoretical; we are actively pursuing it. I’ll talk more about that next week. Until then, pay attention.


1. http://www.bizjournals.com/sanjose/news/2016/09/06/exclusivegoogle-singularity-visionary-ray.htm

Now I know that Kurzweil is right.


In a previous blog entitled “Why Kurzweil is probably right,” I made this statement,

“Convergence is the way technology leaps forward. Supporting technologies enable formerly impossible things to become suddenly possible.”

That blog was talking about how we are developing AI systems at a rapid pace. I quoted a WIRED magazine article by David Pierce that was previewing consumer AIs already in the marketplace and some of the advancements on the way. Pierce said that a personal agent is,

“…only fully useful when it’s everywhere, when it can get to know you in multiple contexts—learning your habits, your likes and dislikes, your routine and schedule. The way to get there is to have your AI colonize as many apps and devices as possible.”

Then, I made my usual cautionary comment about how such technologies will change us. And they will. So, if you follow this blog, you know that I throw cold water onto technological promises as a matter of course. I do this because I believe that someone has to.

Right now I’m preparing my collaborative design studio course. We’re going to be focusing on AR and VR, but since convergence is an undeniable influence on our techno-social future, we will have to keep AI, human augmentation, the Internet of Things, and a host of other emerging technologies on the desktop as well. In researching the background for this class, I read three articles from Peter Diamandis on the Singularity Hub website. I’ve written about Peter before, as well. He’s brilliant. He’s also a cheerleader for the Singularity. That being said, these articles, one on the Internet of Everything (IoE/IoT), one on Artificial Intelligence (AI), and another on Augmented and Virtual Reality (AR/VR), are full of promises. Most of what we thought of as science fiction even a couple of years ago is now happening with such speed that Diamandis and his cohorts believe it is imminent in only three years. And by that I mean commonplace.

If that isn’t enough for us to sit up and take notice, then I am reminded of an article from the Silicon Valley Business Journal, another interview with Ray Kurzweil. Kurzweil, of course, has pretty much convinced us all by now that the Law of Accelerating Returns is no longer hyperbole. If anyone thought it was only hype, sheer observation should have brought them to their senses. In this article, Kurzweil gives an excellent illustration of how exponential growth actually plays out, no longer as theory but as demonstrable practice.

“Exponentials are quite seductive because they start out sub-linear. We sequenced one ten-thousandth of the human genome in 1990 and two ten-thousandths in 1991. Halfway through the genome project, 7 ½ years into it, we had sequenced 1 percent. People said, “This is a failure. Seven years, 1 percent. It’s going to take 700 years, just like we said.” Seven years later it was done because 1 percent is only seven doublings from 100 percent — and it had been doubling every year. We don’t think in these exponential terms. And that exponential growth has continued since the end of the genome project. These technologies are now thousands of times more powerful than they were 13 years ago when the genome project was completed.”

When you combine that with the nearly exponential chaos of hundreds of other converging technologies, the changes to our world and behavior are indeed coming at us like a bullet train. Ask any Indy car driver: when things are happening that fast, you have to be paying attention. But when the input is like a firehose and the motivations are unknown, how on earth do we do that?

Personally, I see this as a calling for design thinkers worldwide. Those in the profession, schooled in the ways of design thinking, have been espousing our essential worth to the realm of wicked problems for some time now. Well, problems don’t get more wicked than this.

Maybe we can design an AI that could keep us from doing stupid things with technologies that we can make but cannot yet comprehend the impact of.
