Tag Archives: Mark Zuckerberg

How should we talk about the future?

 

Imagine that there are two camps. One camp holds high confidence that the future will be manifestly bright and promising in all aspects of human endeavor. Our health will dramatically improve as we eradicate disease and possibly even death. Artificial Intelligence will be at our beck and call to make our tough decisions, order our lives, fight our wars, watch over us, and keep us safe. Hence, it is full speed ahead. The positives outweigh the negatives. Any missteps will be but a minor hiccup, and we’ll cross those bridges when we come to them.

The second camp believes that many of these promises are achievable. But its members also see strong evidence that technology is indeed moving exponentially, and that we are at a point on the curve where what many experts once categorized as impossible or a “long way off” is now knocking at our door.

Kurzweil’s Law of Accelerating Returns is proving remarkably accurate. Sure, we adapted from the horse and buggy to the automobile, and from there to air travel, to an irritatingly resilient nuclear threat, to computers, smartphones, and DNA sequencing. But each of these changes has arrived more rapidly than its predecessor.

“‘As exponential growth continues to accelerate into the first half of the twenty-first century,’ [Kurzweil] writes, ‘it will appear to explode into infinity, at least from the limited and linear perspective of contemporary humans.’”1
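Kurzweil’s point about the “limited and linear perspective” can be sketched numerically: a forecaster who extrapolates today’s rate of change in a straight line badly underestimates steady exponential growth. The doubling period below is purely illustrative, not a claim about any particular technology.

```python
# A minimal sketch of linear forecasting vs. exponential reality.
# The "doubling every 2 years" rate is illustrative only.

def exponential(years, doubling_period=2.0):
    """Capability after `years`, doubling every `doubling_period` years."""
    return 2 ** (years / doubling_period)

def linear_forecast(years, slope_now):
    """What a straight-line extrapolation from today's slope predicts."""
    return 1 + slope_now * years

# Approximate "today's" slope with a one-year finite difference.
slope_now = exponential(1) - exponential(0)

for horizon in (5, 10, 20, 30):
    print(f"{horizon:>2} yrs: linear {linear_forecast(horizon, slope_now):7.1f}, "
          f"exponential {exponential(horizon):10.1f}")
```

At a 30-year horizon the linear guess is off by three orders of magnitude, which is exactly why steady exponential change “appears to explode.”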

The second camp sees this rapid-fire proliferation as alarming. Not because we will get to utopia faster, but because we will be standing in the midst of a host of disruptive technologies all coming to fruition at the same time without the benefit of meaningful oversight or the engagement of our societies.

I am in the second camp.

Last week, I talked about genetic engineering. The designer-baby question was always pushed aside as a long way off. Not anymore. And that’s just one change. Our privacy is being severely compromised by the “big data” harvested from seemingly innocent pastimes such as Facebook. According to security technologist Bruce Schneier,

“Facebook can predict race, personality, sexual orientation, political ideology, relationship status, and drug use on the basis of Like clicks alone. The company knows you’re engaged before you announce it, and gay before you come out—and its postings may reveal that to other people without your knowledge or permission. Depending on the country you live in, that could merely be a major personal embarrassment—or it could get you killed.”2

Facebook is just one of the seemingly benign things we do every day. By now, most of us consider that using our smartphones for 75 percent of our day is also harmless, though we would have to agree that it has changed us personally, behaviorally, and societally. And while the societal outcry against designer babies has been noticeable since last week’s stories about CRISPR-Cas9 gene splicing with human embryos, how long will it be before we accept it as the norm, and feel pressure in our own families to participate to stay competitive, or maybe even just to be insured?

The fact is that we like to think that we can adapt to anything. To some extent, we pride ourselves on this resilience. Unfortunately, that seems to suggest that we are also powerless to affect these technologies and that we have no say in when, if, or whether we should make them in the first place. Should we be proud of the fact that we are adapting to a complete lack of privacy, to the likelihood of terrorism or being replaced by an AI? These are my questions.

So I am encouraged when others raise these questions, too. Recently, the tech media, which seems perpetually enamored of folks like Mark Zuckerberg and Elon Musk, called Zuckerberg a “bad futurist” because of his overly optimistic view of the future.

The article came from the Huffington Post’s Rebecca Searles. According to Searles,

“Elon Musk’s doomsday AI predictions aren’t ‘irresponsible,’ but Mark Zuckerberg’s techno-optimism is.”3

According to a Zuckerberg podcast,

“…people who are arguing for slowing down the process of building AI, I just find that really questionable… If you’re arguing against AI, then you’re arguing against safer cars that aren’t going to have accidents and you’re arguing against being able to better diagnose people when they’re sick.”3

Technology hawks always promise “safer” and “healthier” as their rationale for unimpeded acceleration. I’m sure that’s the rah-rah rationale for designer babies, too: think of all the illnesses we will be able to breed out of the human race. Searles and I agree that negative outcomes deserve equally serious consideration, and not after they happen. As she aptly puts it,

“Tackling tech challenges with a build-it-and-see-what-happens approach (a la Zuckerberg’s former “move fast and break things” development mantra) just isn’t suitable for AI.”

The problem is that Zuckerberg is not alone, nor is last week’s Shoukhrat Mitalipov. Ultimately, this reality of two camps is the rationale behind my approach to design fiction. As you know, the objective of design fiction is to provoke. Promising utopia is rarely the tinder to fuel a provocation.

Let’s remember Charles Dickens’ story of Ebenezer Scrooge. The ghost of Christmas past takes him back in time where, for the first time, he sees the truth about his past. But this revelation does not change him. Then the ghost of Christmas present opens his eyes to everything around him that he is blind to in the present. Still, Scrooge is unaffected. And finally, the ghost of Christmas future takes him into the future, and it is here that Scrooge sees the days to come as “the way it will be” unless he changes something now.

Somehow, I think the outcome would have been different if that last ghost had said, “Don’t worry. You’ll adapt.”

Let’s not talk about the future in purely utopian terms, nor in total doom and gloom. The future will be no more one or the other than the present day is. But let us not be blind to our infinite capacity to foul things up, to the potential of bad actors, or to the inevitability of unanticipated consequences. If we have any hope of meeting the future with the altruistic image of a utopian society, let us go forward with eyes open.

 

1. http://www.businessinsider.com/ray-kurzweil-law-of-accelerating-returns-2015-5

2. Bruce Schneier, “Data and Goliath: The Hidden Battles to Collect Your Data and Control Your World”

3. http://www.huffingtonpost.com/entry/mark-zuckerberg-is-a-bad-futurist_us_5979295ae4b09982b73761f0


Ethical tech.

Though I tinge most of my blogs with ethical questions, the last time I brought up this topic specifically on this blog was back in 2015. I guess I am ready to give it another go. Ethics is a tough topic. Treated superficially, ethics seems natural, like common sense, or the right thing to do. But if that’s the case, why do so many people do the wrong thing? Things get even more complicated when we move into institutionally complex issues like banking, governing, technology, genetics, health care, or national defense, just to name a few.

The last time I wrote about this, I highlighted Michael Sandel, Professor of Philosophy and Government at Harvard Law School, where he teaches a wildly popular course called “Justice.” Then, I was glad to see that the big questions were still being addressed in places like Harvard. Some of his questions, which came from a FastCo article, were:

“Is it right to take from the rich and give to the poor? Is it right to legislate personal safety? Can torture ever be justified? Should we try to live forever? Buy our way to the head of the line? Create perfect children?”

These are undoubtedly important and prescient questions to ask, especially as we begin to confront technologies that make things which were formerly inconceivable, or plainly impossible, not only possible but likely.

So I was pleased to see, last month, an op-ed piece in WIRED by Susan Liautaud, founder of The Ethics Incubator. Liautaud is about as closely aligned with my tech concerns as anyone I have read. And she brings solid thinking to the issues.

“Technology is approaching the man-machine and man-animal boundaries. And with this, society may be leaping into humanity-defining innovation without the equivalent of a constitutional convention to decide who should have the authority to decide whether, when, and how these innovations are released into society. What are the ethical ramifications? What checks and balances might be important?”

Her comments are right in line with my research and co-research into Humane Technologies. Liautaud continues:

“Increasingly, the people and companies with the technological or scientific ability to create new products or innovations are de facto making policy decisions that affect human safety and society. But these decisions are often based on the creator’s intent for the product, and they don’t always take into account its potential risks and unforeseen uses. What if gene-editing is diverted for terrorist ends? What if human-pig chimeras mate? What if citizens prefer to see birds rather than flying cars when they look out a window? (Apparently, this is a real risk. Uber plans to offer flight-hailing apps by 2020.) What if Echo Look leads to mental health issues for teenagers? Who bears responsibility for the consequences?”

For me, the answer to that last question is all of us. We should not rely on business and industry to make these decisions, nor expect our government to do it. We have to become involved in these issues at the public level.

Michael Sandel believes that the public is hungry for these issues, but we tend to shy away from them. They can be confrontational and divisive, and no one wants to make waves or be politically incorrect. That’s a mistake.

An image from the future. A student design fiction project that examined ubiquitous AR.

So while the last thing I want is a politician or CEO making these decisions, these two constituencies could do the responsible thing and create forums for these discussions so that the public can weigh in on them. To do anything less borders on arrogance.

Ultimately we will have to demand this level of thought, beginning with ourselves. This responsibility should start with anticipatory methodologies that examine the social, cultural and behavioral ramifications, and unintended consequences of what we create.

But we should not fight this alone. Corporations and governments concerned with appearing sensitive and proactive toward the environment and social justice need to add a new pillar to their edifice as responsible global citizens: humane technology.

 


Humanity is not always pretty.

Among several options, the Merriam-Webster online dictionary gives this definition for human: “[…] representative of or susceptible to the sympathies and frailties of human nature ⟨human kindness⟩ ⟨a human weakness⟩.”

Then there is humanity, which can refer either to the collective of humans, or to “[…] the fact or condition of being human; human nature,” or to benevolence, as in compassion and understanding. For the latter, it seems that we are eternal optimists when it comes to describing ourselves. Hence, we often refer to the humanity of man as one of our most redeeming traits. At the same time, if we query human nature, we can get “[…] ordinary human behavior, esp considered as less than perfect.” This is a diplomatic way of acknowledging that flaws are a characteristic of our nature. When we talk about our humanity, we presumptively leave out our propensity for greed, pride, and the other deadly sins. We like to think of ourselves as basically good.

If we are honest with ourselves, however, we know this is not always the case, and if we push the issue, we would have to acknowledge that it is not even the case most of the time. Humanity is primarily driven by the kinds of things we don’t like to see in others but rarely see in ourselves. But this is supposed to be a blog about design and tech, isn’t it? So I should get to the point.

A recent article by Sarah Kessler on the blog site Quartz, “Algorithms are failing Facebook. Can humanity save it?”, poses an interesting question, and one that I’ve raised in the past. We like to think that technology will resolve all of our basic human failings—somehow. Recognizing this, back in 1968 Stewart Brand introduced the first Whole Earth Catalog with,

“We are as gods and might as well get good at it.”

After almost 50 years, it seems justified to ask whether we’ve made any improvements whatsoever. The question is pertinent in light of Kessler’s article on the advent of Facebook Live. With this feature, you stream whatever video you want, and it goes out to the whole world instantly. Of course, we need this, right? And we need it now, right? Of course we do.

Like most of these whiz-bang technologies, it is designed to attract millennials with “Wow! Cool.” But it is not a simple task. How would a company like Facebook police the potentially billions of feeds coming into the system? The answer, as is becoming ever more the case, is AI: Artificial Intelligence. Algorithms will recognize and determine what is and is not acceptable to stream out to the world. And apparently, Zuck and company were pretty confident that they could pull this off.

Let’s get this thing online. [Photo: http://wersm.com]

Maybe not. Kessler notes that,

“According to a Wall Street Journal tally, more than 50 acts of violence, including murders, suicides, and sexual assault, have been broadcast over Facebook Live since the feature launched 13 months ago.”

Both articles tell how Facebook’s Mark Zuckerberg put a team on “lockdown” to rush the feature to market. What was the hurry, one might ask? And Kessler does ask.

“Let’s make sure there’s a humanitarian angle. Millennials like that.” [Photo: http://variety.com]

After 13 months of these sporadic horrors, the tipping point came with a particularly heinous act that circulated on FB Live for nearly 24 hours. It involved a 20-year-old Thai man named Wuttisan Wongtalay, who filmed himself flinging his 11-month-old daughter off the side of a building with a noose around her neck. Then, off-camera, he killed himself.

“In a status update on his personal Facebook profile, CEO Mark Zuckerberg, himself the father of a young girl, pledged that the company would, among other things, add 3,000 people to the team that reviews Facebook content for violations of the company’s policies.”

Note that the answer is not to remove the feature until things could be sorted out or to admit that the algorithms are not ready for prime time. The somewhat surprising answer is more humans.

Kessler, quoting the Wall Street Journal article, states,

“Facebook, in a civic mindset, could have put a plan in place for monitoring Facebook Live for violence, or waited to launch Facebook Live until the company was confident it could quickly respond to abuse. It could have hired the additional 3,000 human content reviewers in advance.

But Facebook ‘didn’t grasp the gravity of the medium,’ an anonymous source familiar with Facebook’s Live’s development told the Wall Street Journal.”

Algorithms are the code that helps machines learn. They look at a lot of data, say, pictures of guns, and then they learn to identify what a gun is. They are not particularly good at context. They don’t know, for example, whether your video is “Hey, look at my new gun!” or “Die, scumbag.”
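The context problem is easy to demonstrate with a toy. The sketch below is emphatically not Facebook’s system; it is a bare-bones, bag-of-words filter of my own invention that shows why a model which sees only features, not intent, gets both of my example videos wrong.

```python
# A minimal, hypothetical sketch of a context-blind content filter.

def extract_features(text):
    """Bag-of-words: which words appear, ignoring order and context."""
    return set(text.lower().replace("!", "").replace(",", "").split())

# Hypothetical training outcome: the word "gun" appeared in flagged content.
FLAGGED_WORDS = {"gun"}

def flag(text):
    """Flag any text containing a flagged word -- no notion of intent."""
    return bool(extract_features(text) & FLAGGED_WORDS)

print(flag("Hey, look at my new gun!"))  # flags the harmless video
print(flag("Die, scumbag."))             # misses the threatening one
```

The harmless clip gets flagged and the threat sails through, because the feature the model learned is a word, not a meaning. Real systems are vastly more sophisticated, but the underlying gap between features and intent is the same one Kessler describes.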

So in addition to algorithms, Zuck has decided that he will put 3,000 humans on the case. Nevertheless, writes Kessler,

“[…]they can’t solve Facebook’s problems on their own. Facebook’s active users comprise about a quarter of the world’s population and outnumber the combined populations of the US and China. Adding another 3,000 workers to the mix to monitor content simply isn’t going to make a meaningful difference. As Zuckerberg put it during a phone call with investors, “No matter how many people we have on the team, we’ll never be able to look at everything.”[Emphasis mine.]
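Zuckerberg’s “we’ll never be able to look at everything” is easy to confirm with rough arithmetic. Every figure below is my own illustrative assumption, not a number reported by Facebook or the Journal:

```python
# Back-of-envelope only; all figures are illustrative assumptions.
users = 2_000_000_000            # roughly a quarter of the world
posts_per_user_per_day = 1       # assume one reviewable post each
reviewers = 3_000                # the newly added team

posts_per_reviewer = users * posts_per_user_per_day / reviewers
seconds_per_shift = 8 * 60 * 60  # one 8-hour shift
seconds_per_post = seconds_per_shift / posts_per_reviewer

print(f"{posts_per_reviewer:,.0f} posts per reviewer per day")
print(f"{seconds_per_post * 1000:.0f} ms of attention per post")
```

Under even these generous assumptions, each reviewer gets a few tens of milliseconds per post. Change the inputs however you like; the conclusion that 3,000 humans cannot watch a quarter of the planet survives.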

So, I go back to my original question: We need this, right?

There are two things going on here. First is the matter of Facebook not grasping the gravity of the medium (which I see as inexcusable), and second is how the whole thing came around full circle. Algorithms are supposed to replace humans. Instead, we added 3,000 more jobs. Unfortunately, that wasn’t the plan. But it could have been.

Algorithms are undoubtedly here to stay, but not necessarily for every application, and humans are still better at interpreting human intent than machines are. All of this underscores my position from previous blogs: most companies, when the issue is getting to or staying on top, will not police themselves. They’ll wait until it breaks and then fix it, or try to. The problem is that as algorithms become more complicated, fixing them becomes just as tricky.

People are working on tools to let designers see what went wrong inside an algorithm, but the technology is not there yet.

And it is not just so that we can tell the difference between porn and breastfeeding. Algorithms are starting to make a lot of high-stakes decisions, in autonomous vehicles, autonomous drones, or autonomous (fill in the blank). Until the people who are literally racing each other to be first step back and ask the tougher questions, these kinds of unanticipated consequences will be commonplace, especially because prudent actions like stopping to assess are rarely considered. No one wants to stop and assess.

Kessler says it well,

“The combination may be fleeting—the technology will catch up eventually—but it’s also quite fitting given that so many of the problems Facebook is now confronting, from revenge porn to fake news to Facebook Live murders, are themselves the result of humanity mixing with algorithms.” [Emphasis mine.]

We can’t get past that humanity thing.

 


Are you listening to the Internet of Things? Someone is.

As usual, it is a toss-up what I should write about this week. Is it WIRED’s article on the artificial womb, FastCo’s article on design thinking, the design-fiction world of the movie The Circle, or WIRED’s warning about apps using your phone’s microphone to listen for ultrasonic marketing “beacons” that you can’t hear? Tough call, but I decided on a different WIRED post, one about the vision of Zuckerberg’s future presented at F8. Actually, the F8 future is a bit like The Circle anyway, so I might be killing two birds with one stone.

At first, I thought the article, titled “Look to Zuck’s F8, Not Trump’s 100 Days, to See the Shape of the Future,” would be just another Trump-bashing opportunity (which I sometimes think WIRED prefers to writing about tech), but not so. It was about tech, mostly.

The article, written by Zachary Karabell, starts out with this quote,

“While the fate of the Trump administration certainly matters, it may shape the world much less decisively in the long-term than the tectonic changes rapidly altering the digital landscape.”

I believe this statement is dead-on, but I would include the entire “technological” landscape. The stage is becoming increasingly “set,” as the article continues,

“At the end of March, both the Senate and the House voted to roll back broadband privacy regulations that had been passed by the Federal Communications Commission in 2016. Those would have required internet service providers to seek customers’ explicit permission before selling or sharing their browsing history.”

Combine that with,

“Facebook[’s] vision of 24/7 augmented reality with sensors, camera, and chips embedded in clothing, everyday objects, and eventually the human body…”

and the looming possibility of ending net neutrality, we could be setting ourselves up for the real Circle future.

“A world where data and experiences are concentrated in a handful of companies with what will soon be trillion dollar capitalizations risks being one where freedom gives way to control.”

To add kindling to this thicket, there is the Quantified Self movement (QS). According to their website,

“Our mission is to support new discoveries about ourselves and our communities that are grounded in accurate observation and enlivened by a spirit of friendship.”

Huh? Ok. But they want to do this using “self-tracking tools.” This means sensors. They could be in wearables, implantables, or ingestibles. Essentially, they track you. Presumably, this is all so that we become more self-aware and more knowledgeable about ourselves and our behaviors. Health, wellness, anxiety, depression, concentration; the list goes on. Like many emerging movements linked to technologies, we open the door through health care or longevity, because it is an easy argument that being healthy and fit is better than being sick and out of shape. But that is all too simple. QS says that we gain “self knowledge through numbers,” and in the digital age that means data. In a climate that is increasingly less regulatory about what data can be shared and with whom, this could be the beginnings of the perfect storm.

As usual, I hope I’m wrong.

 

 

 


Augmented evidence. It’s a logical trajectory.

A few weeks ago, I gushed about how my students killed it at a recent guerrilla future enactment of a ubiquitous Augmented Reality (AR) future. Shortly after that, Mark Zuckerberg announced the Facebook AR platform. The platform uses the camera on your smartphone and, according to a recent WIRED article, transforms your smartphone into an AR engine.

Unfortunately, as we all know (and so does Zuck), the smartphone isn’t currently much of an engine. AR requires a lot of processing, and so does the AI that allows it to recognize the real world and layer additional information on top of it. That’s why Facebook (and others) are building their own neural-network chips, so that the platform doesn’t have to run to the Cloud to access the processing required for Artificial Intelligence (AI). That will inevitably happen, and it will make the smartphone experience more seamless, but it’s just part of the challenge for Facebook.

If you add to that the idea that we become even more dependent on looking at our phones while we are walking or, worse, driving (think Pokémon GO), then this latest announcement is, at best, foreshadowing.

As the WIRED article continues, tech writer Brian Barrett talked to Blair MacIntyre of Georgia Tech, who says,

“‘The phone has generally sucked for AR because holding it up and looking through it is tiring, awkward, inconvenient, and socially unacceptable,’ says MacIntyre. Adding more of it doesn’t solve those issues. It exacerbates them. (The exception might be the social acceptability part; as MacIntyre notes, selfies were awkward until they weren’t.)”

That last part is an especially interesting point. I’ll have to come back to that in another post.

My students did considerable research on exactly this kind of early infancy that technologies undergo on their road to ubiquity. In another WIRED article, even Zuckerberg admitted,

“We all know where we want this to get eventually,” said Zuckerberg in his keynote. “We want glasses, or eventually contact lenses, that look and feel normal, but that let us overlay all kinds of information and digital objects on top of the real world.”

So there you have it. Glasses are the endgame; but as my students agreed, contact lenses, not so much. Think about it: if you didn’t have to stick a contact lens in your eyeball, you wouldn’t. Even if you solved the problem of computing inside a wafer-thin lens, along with the myriad problems of heat and in-eye time, ubiquitous contact lenses are much farther off, if they ever arrive.

Student design team from Ohio State’s Collaborative Studio.

This is why I find my students’ solution so much more elegant, and a far more logical trajectory. According to Barrett,

“The optimistic timeline for that sort of tech, though, stretches out to five or 10 years. In the meantime, then, an imperfect solution takes the stage.”

My students locked it down to seven years.

Finally, Zuckerberg made this statement:

“Augmented reality is going to help us mix the digital and physical in all new ways,” said Zuckerberg at F8. “And that’s going to make our physical reality better.”

Except that Zuck’s version of better and mine or yours may not be the same. Exactly what is wrong with reality anyway?

If you want to see the full-blown presentation of what my students produced, you can view it at aughumana.net.

Note: Currently, the AugHumana experience is superior on Google Chrome. If you are a Safari or Firefox purist, you may have to wait for the page to load (up to 2 minutes). We’re working on this, so just use Chrome this time. We hope to have it fixed soon.


When the stakes are low, mistakes are beneficial. In more weighty pursuits, not so much.

 

I’m from the old school. I suppose that sentence alone makes me seem like a codger. Let’s call it the eighties. Part of the art of problem solving was to work toward a solution and get it as tight as we possibly could before committing to implementation. It was called the design process; today it’s called “design thinking.” So it was heresy to me when I found myself, some years ago now, in a high-tech corporation where this doctrine was ignored. I recall a top-secret new-product meeting in which the owner and chief technology officer said, “We’re going to make some mistakes on this, so let’s hurry up and make them.” He was not speaking about iterative design, which is part and parcel of the design process; he was talking about going to market with the product and letting the users illuminate what we should fix. Of course, the product was safe and met all the legal standards, but it was far from polished. The idea was that mass consumer trial-by-fire would provide us with an exponentially higher data return than if we tested all the possible permutations in a lab at headquarters. He was, apparently, ahead of his time.

In a recent FastCo article on Facebook’s race to be the leader in AI, author Daniel Terdiman cites some of Mark Zuckerberg’s mantras: “‘Move fast and break things,’ or ‘Done is better than perfect.’” We can debate this philosophically or maybe even ethically, but it is clearly today’s standard procedure for new technologies, new science and the incessant race to be first. Here is a quote from that article:

“Artificial intelligence has become a vital part of scaling Facebook. It’s already being used to recognize the faces of your friends in photographs, and curate your newsfeed. DeepText, an engine for reading text that was unveiled last week, can understand “with near-human accuracy” the content in thousands of posts per second, in more than 20 different languages. Soon, the text will be translated into a dozen different languages, automatically. Facebook is working on recognizing your voice and identifying people inside of videos so that you can fast forward to the moment when your friend walks into view.”

The story goes on to say that Facebook, though it is pouring tons of money into AI, is behind the curve, having begun only three or so years ago. Aside from the fact that FB’s accomplishments seem fairly impressive (at least to me), companies like Google and Microsoft are apparently way ahead. In the case of Microsoft, the effort began more than twenty years ago.

Today, the hurry-up is accelerated by open sourcing. Wikipedia explains the benefits of open sourcing as:

“The open-source model, or collaborative development from multiple independent sources, generates an increasingly more diverse scope of design perspective than any one company is capable of developing and sustaining long term.”

The idea behind open sourcing is that the mistakes will happen even faster, along with the advancements. It is becoming the de facto approach to breakthrough technologies. If fast is the primary, maybe even the only, goal, it is a smart strategy. Or is it a touch short-sighted? As we know, not everyone who can play with the code a company has given them has that company’s best interests in mind. As for the best interests of society, I’m not sure those are even on the list.

To examine our motivations, and the ripples that emanate from them, is of course my mission with design fiction and speculative futures. Whether we like it or not, a by-product of technological development—aside from utopia—is human behavior. There are repercussions from the things we make and the systems that evolve from them. When your mantra is “Move fast and break things,” that’s what you’ll get. But there is certainly no time in the move-fast loop to consider the repercussions of your actions or the unexpected consequences. Consequences will appear all by themselves.

The technologists tell us that when we reach the holy grail of AI (whatever that is), we will be better people and solve the world’s most challenging problems. But in reality, it’s not that simple. With the nuances of AI, there are potential problems, or mistakes, that could be difficult to fix; new predicaments that humans might not be able to solve and AI may not be inclined to resolve on our behalf.

In the rush to make mistakes, how grave will they be? And, who is responsible?
