Power sharing?

Just to keep you up to speed: in the race toward superintelligence, or at least ubiquitous AI, everything is on schedule or ahead of schedule.

If you read this blog, or you are paying attention at any level, then you know the fundamentals of AI. But for those of you who don’t, here are the basics. Artificial intelligence comes from processing and analyzing data. Big data. Programmers feed a gazillion linked-up computers (CPUs) with algorithms that can sort this data and make predictions. This process is what is at work when the Google search engine suggests what you are about to key into the search field. These are called predictive algorithms. If you want to look at pictures of cats, then someone has to task the CPUs with learning what a cat looks like as opposed to a hamster, then scour the Internet for pictures of cats and deliver them to your search. The process of teaching the machine what a cat looks like is called machine learning.

There is also an algorithm that watches your online behavior. That’s why, after checking out sunglasses online, you start to see a plethora of ads for sunglasses on just about every page you visit. Similar algorithms can predict where you will drive today, and when you are likely to return home. There is AI that knows your exercise habits and a ton of other physiological data about you, especially when you’re sharing your Fitbit or other wearable data with the Cloud. Insurance companies are extremely interested in this data, so that they can give discounts to “healthy” people and penalize the not-so-healthy. Someday they might also monitor other “behaviors” that they deem not in your best interests (or theirs). Someday, especially if we have a “single-payer” health care system (aka government healthcare), this data may be required before you are insured.

Before we go too far into the dark side (which is vast and deep), AI can also search all the cells in your body, identify which ones are dangerous, and target them for elimination. AI can analyze a whole host of things that humans could overlook. It can put together predictions that could save your life.
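Since we are on machine learning, here is a toy sketch of the cat-versus-hamster idea in code. Everything in it is my own invention for illustration (the two made-up “features,” the numbers, the four training examples); a real system learns from millions of images, not two measurements.

```python
# Toy illustration of machine learning: learn from labeled examples,
# then predict labels for new data. The features are invented for this
# sketch: (ear_pointiness, body_length_cm).

training_data = [
    ((0.9, 46), "cat"),
    ((0.8, 50), "cat"),
    ((0.2, 10), "hamster"),
    ((0.3, 12), "hamster"),
]

def predict(features):
    """1-nearest-neighbor: label a new animal like its closest example."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    nearest = min(training_data, key=lambda ex: distance(ex[0], features))
    return nearest[1]

print(predict((0.85, 48)))  # "cat" — closest to the cat examples
print(predict((0.25, 11)))  # "hamster"
```

The point is only this: the machine never “knows” what a cat is. It just finds the closest labeled example and repeats the label.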

Google’s chips stacked up and ready to go. Photo from WIRED.

Now, with all that AI background behind us: this past week, something called Google I/O went down. WIRED calls it Google’s annual State-of-the-Union address. There, Sundar Pichai unveiled something called TPU 2.0, or Cloud TPU. This is something of a breakthrough because, in the past, the AI process I just described, even though lightning fast and almost transparent, required all those CPUs, a ton of space (server farms), and gobs of electricity. Now Google (and others) are packing this processing into chips, and these are proprietary to Google. According to WIRED,

“This new processor is a unique creation designed to both train and execute deep neural networks—machine learning systems behind the rapid evolution of everything from image and speech recognition to automated translation to robotics…

…says Chris Nicholson, the CEO and founder of a deep learning startup called Skymind. “Google is trying to do something better than Amazon—and I hope it really is better. That will mean the whole market will start moving faster.”

Funny, I was just thinking that the market is not moving fast enough. I can hardly wait until we have a Skymind.

“Along those lines, Google has already said that it will offer free access to researchers willing to share their research with the world at large. That’s good for the world’s AI researchers. And it’s good for Google.”

Is it good for us?

Note:
This sets up another discussion (in 3 weeks) about a rather absurd opinion piece in WIRED about why we should have an AI as President. These things start out as absurd, but sometimes don’t stay that way.


Humanity is not always pretty.

The Merriam-Webster online dictionary, among several options, gives this definition for human: “[…] representative of or susceptible to the sympathies and frailties of human nature,” as in “human kindness” or “a human weakness.”

Then there is humanity, which can denote either the collective of humans, or “[…] the fact or condition of being human; human nature,” or benevolence, as in compassion and understanding. For the latter, it seems that we are eternal optimists when it comes to describing ourselves. Hence, we often refer to the humanity of man as one of our most redeeming traits. At the same time, if we query human nature we can get, “[…] ordinary human behavior, esp considered as less than perfect.” This is a diplomatic way of acknowledging that flaws are a characteristic of our nature. When we talk about our humanity, we presumptively leave out our propensity for greed, pride, and the other deadly sins. We like to think of ourselves as basically good.

If we are honest with ourselves, however, we know this is not always the case, and if we push the issue, we would have to acknowledge that it is not even the case most of the time. Humanity is primarily driven by the kinds of things we don’t like to see in others but rarely see in ourselves. But this is supposed to be a blog about design and tech, isn’t it? So I should get to the point.

A recent article on the blog site QUARTZ, Sarah Kessler’s “Algorithms are failing Facebook. Can humanity save it?”, poses an interesting question, and one that I’ve raised in the past. We like to think that technology will somehow resolve all of our basic human failings. Recognizing this, back in 1968 Stewart Brand introduced the first Whole Earth Catalog with,

“We are as gods and might as well get good at it.”

After almost 50 years it seems justified to ask whether we’ve made any improvements whatsoever. The question is pertinent in light of Kessler’s article on the advent of Facebook Live. In this particular FB feature, you stream whatever video you want, and it goes out to the whole world instantly. Of course, we need this, right? And we need this now, right? Of course we do.

Like most of these whiz-bang technologies, it is designed to attract millennials with a “Wow! Cool.” But it is not a simple task. How would a company like Facebook police the potentially billions of feeds coming into the system? The answer (as is becoming more the case) is AI. Artificial intelligence. Algorithms will recognize and determine what is and is not acceptable to go streaming out to the world. And apparently, Zuck and company were pretty confident that they could pull this off.

Let’s get this thing online. [Photo: http://wersm.com]

Maybe not. Kessler notes that,

“According to a Wall Street Journal tally, more than 50 acts of violence, including murders, suicides, and sexual assault, have been broadcast over Facebook Live since the feature launched 13 months ago.”

Both articles tell how Facebook’s Mark Zuckerberg put a team on “lockdown” to rush the feature to market. What was the hurry, one might ask? And Kessler does ask.

“Let’s make sure there’s a humanitarian angle. Millennials like that.” [Photo: http://variety.com]

After 13 months of such events, the tipping point came with a particularly heinous act that ended up circulating on FB Live for nearly 24 hours. It involved a 20-year-old Thai man named Wuttisan Wongtalay, who filmed himself flinging his 11-month-old daughter off the side of a building with a noose around her neck. Then, off-camera, he killed himself.

“In a status update on his personal Facebook profile, CEO Mark Zuckerberg, himself the father of a young girl, pledged that the company would, among other things, add 3,000 people to the team that reviews Facebook content for violations of the company’s policies.”

Note that the answer is not to remove the feature until things can be sorted out, or to admit that the algorithms are not ready for prime time. The somewhat surprising answer is more humans.

Kessler, quoting the Wall Street Journal article, states,

“Facebook, in a civic mindset, could have put a plan in place for monitoring Facebook Live for violence, or waited to launch Facebook Live until the company was confident it could quickly respond to abuse. It could have hired the additional 3,000 human content reviewers in advance.

But Facebook ‘didn’t grasp the gravity of the medium,’ an anonymous source familiar with Facebook’s Live’s development told the Wall Street Journal.”

Algorithms are code that helps machines learn. They look at a lot of data, say pictures of guns, and then they learn to identify what a gun is. They are not particularly good at context. They don’t know, for example, whether your video is “Hey, look at my new gun!” or “Die, scumbag.”
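To make that limitation concrete, here is a minimal sketch, assuming nothing about Facebook’s actual system. The stand-in detector below is hypothetical, but the logic is the point: the model scores objects, not intent.

```python
# Hypothetical sketch: object detection without context.
# detect_objects() stands in for a trained deep neural network that maps
# video frames to label confidences; it is not a real API.

def detect_objects(frame_description):
    """Stand-in classifier: returns a dict of label -> confidence."""
    labels = {}
    if "gun" in frame_description:
        labels["firearm"] = 0.97  # very good at spotting the object
    return labels

def should_flag(labels, threshold=0.9):
    return labels.get("firearm", 0.0) >= threshold

# Two videos with radically different intent, identical output:
print(should_flag(detect_objects("a man showing off his new gun")))   # True
print(should_flag(detect_objects("a man pointing a gun at someone")))  # True
# The model sees the same object either way; intent never enters the math.
```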

So in addition to algorithms, Zuck has decided that he will put 3,000 humans on the case. Nevertheless, writes Kessler,

“[…] they can’t solve Facebook’s problems on their own. Facebook’s active users comprise about a quarter of the world’s population and outnumber the combined populations of the US and China. Adding another 3,000 workers to the mix to monitor content simply isn’t going to make a meaningful difference. As Zuckerberg put it during a phone call with investors, ‘No matter how many people we have on the team, we’ll never be able to look at everything.’” [Emphasis mine.]
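A quick back-of-the-envelope check of that claim, using round 2017 figures of my own (approximations, not Facebook’s reported numbers):

```python
# Back-of-the-envelope: why 3,000 more reviewers can't watch everything.
# Round 2017 figures; my approximations, not Facebook's reported numbers.
active_users = 1_900_000_000   # roughly a quarter of the world's population
new_reviewers = 3_000

print(f"{active_users / new_reviewers:,.0f} users per new reviewer")
# -> 633,333 users per new reviewer
```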

So, I go back to my original question: We need this, right?

There are two things going on here. The first is the matter of Facebook not grasping the gravity of the medium (which I see as inexcusable), and the second is how the whole thing came around full circle. Algorithms are supposed to replace humans. Instead, we added 3,000 more jobs. Unfortunately, that wasn’t the plan. But it could have been.

Algorithms are undoubtedly here to stay, but not necessarily for every application, and humans are still better at interpreting human intent than machines are. All of this underscores my position from previous blogs: most companies, when the issue is whether they get to the top or stay there, will not police themselves. They’ll wait until it breaks and then fix it, or try to. The problem is that as algorithms get increasingly complicated, fixing them gets just as tricky.

People are working on ways to let designers see what went wrong inside an algorithm, but the technology is not there yet.

And it is not just so that we can tell the difference between porn and breastfeeding. Algorithms are starting to make a lot of high-stakes decisions, as in autonomous vehicles, autonomous drones, or autonomous (fill in the blank). Until the people who are literally racing each other to be first step back and ask the tougher questions, these unanticipated consequences will be commonplace. Prudent actions like stopping to assess are rarely considered; no one wants to stop and assess.

Kessler says it well,

“The combination may be fleeting—the technology will catch up eventually—but it’s also quite fitting given that so many of the problems Facebook is now confronting, from revenge porn to fake news to Facebook Live murders, are themselves the result of humanity mixing with algorithms.” [Emphasis mine.]

We can’t get past that humanity thing.


Are you listening to the Internet of Things? Someone is.

As usual, it is a toss-up for what I should write about this week. Is it WIRED’s article on the artificial womb, FastCo’s article on design thinking, the design-fiction world of the movie The Circle, or WIRED’s warning about apps using your phone’s microphone to listen for ultrasonic marketing “beacons” that you can’t hear? Tough call, but I decided on a different WIRED post, one about the vision of Zuckerberg’s future at F8. Actually, the F8 future is a bit like The Circle anyway, so I might be killing two birds with one stone.

At first, I thought the article, titled “Look to Zuck’s F8, Not Trump’s 100 Days, to See the Shape of the Future,” would be just another Trump-bashing opportunity (which I sometimes think WIRED prefers to writing about tech), but not so. It was about tech, mostly.

The article, written by Zachary Karabell, starts out with this quote,

“While the fate of the Trump administration certainly matters, it may shape the world much less decisively in the long-term than the tectonic changes rapidly altering the digital landscape.”

I believe this statement is dead-on, but I would include the entire “technological” landscape. The stage is becoming increasingly “set,” as the article continues,

“At the end of March, both the Senate and the House voted to roll back broadband privacy regulations that had been passed by the Federal Communications Commission in 2016. Those would have required internet service providers to seek customers’ explicit permission before selling or sharing their browsing history.”

Combine that with,

“Facebook[’s] vision of 24/7 augmented reality with sensors, camera, and chips embedded in clothing, everyday objects, and eventually the human body…”

and the looming possibility of ending net neutrality, and we could be setting ourselves up for the real Circle future.

“A world where data and experiences are concentrated in a handful of companies with what will soon be trillion dollar capitalizations risks being one where freedom gives way to control.”

To add kindling to this thicket, there is the Quantified Self movement (QS). According to their website,

“Our mission is to support new discoveries about ourselves and our communities that are grounded in accurate observation and enlivened by a spirit of friendship.”

Huh? OK. But they want to do this using “self-tracking tools.” This means sensors. They could be in wearables, implantables, or ingestibles. Essentially, they track you. Presumably, this is all so that we become more self-aware and more knowledgeable about ourselves and our behaviors. Health, wellness, anxiety, depression, concentration; the list goes on. Like many emerging movements that are linked to technologies, we open the door through health care or longevity, because it is an easy argument that being healthy or fit is better than being sick and out of shape. But that is all too simple. QS says that we gain “self knowledge through numbers,” and in the digital age that means data. In a climate that is increasingly less regulatory about what data can be shared and with whom, this could be the beginnings of the perfect storm.
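“Self knowledge through numbers” looks innocuous in isolation. Here is a hypothetical sketch of the kind of daily record a tracker might share with the Cloud; the field names are my invention, not any vendor’s actual schema:

```python
# Hypothetical illustration only: invented field names, not a real schema.
daily_record = {
    "user_id": "a1b2c3",
    "date": "2017-05-20",
    "resting_heart_rate_bpm": 72,
    "steps": 4312,
    "sleep_hours": 5.4,
    "gps_fixes": 1440,  # one location fix per minute, all day
}

# Innocuous on its own. Combine a few weeks of these and a third party
# can infer sleep problems, sedentary habits, where you live and work,
# and when you are away from home.
if daily_record["steps"] < 5000 and daily_record["sleep_hours"] < 6:
    print("Flag for 'wellness outreach' (or a premium adjustment).")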

As usual, I hope I’m wrong.
