Tag Archives: TensorFlow

Paying attention.

I want to make a T-shirt. On the front, it will say, “7 years is a long time.” On the back, it will say, “Pay attention!”

What am I talking about? I’ll start with some background. This semester, I am teaching a collaborative studio with designers from visual communications, interior design, and industrial design. Our topic is Humane Technologies, and we are examining the effects of an Augmented Reality (AR) system that could be ubiquitous in 7 years. The process began with an immersive scan of the available information and emerging advances in AR, VR, IoT, human augmentation (HA) and, of course, AI. In my opinion, these are just a few of the most transformative technologies currently attracting the heaviest investment across the globe. And where the money goes, the most rapid advancement follows.

A conversation starter.

One of the biggest challenges for the collaborative studio class (myself included) is to think seven years out. Although we read Kurzweil’s Law of Accelerating Returns, our natural tendency is to think linearly, not exponentially. One of my favorite Kurzweil illustrations is this:

“Exponentials are quite seductive because they start out sub-linear. We sequenced one ten-thousandth of the human genome in 1990 and two ten-thousandths in 1991. Halfway through the genome project, 7 ½ years into it, we had sequenced 1 percent. People said, “This is a failure. Seven years, 1 percent. It’s going to take 700 years, just like we said.” Seven years later it was done, because 1 percent is only seven doublings from 100 percent — and it had been doubling every year. We don’t think in these exponential terms. And that exponential growth has continued since the end of the genome project. These technologies are now thousands of times more powerful than they were 13 years ago, when the genome project was completed.”1
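Kurzweil’s arithmetic is easy to check for yourself. Here is a quick sketch in Python; the numbers come straight from the quote, nothing new is assumed:

```python
# Start at 1% sequenced and double every year, per Kurzweil's example.
pct, doublings = 1.0, 0
while pct < 100.0:
    pct *= 2
    doublings += 1
print(doublings)  # prints 7: 1% is only seven doublings from 100%
```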

So when I hear a policymaker say, “We’re a long way from that,” I cringe. We’re not a long way away from that. The iPhone was introduced on June 29, 2007, not quite ten years ago. The ripple effects from that little technological marvel are hard to catalog. With the smartphone, we have transformed everything from social and behavioral issues to privacy and safety. As my students examine the next possible phase of our thirst for the latest and greatest, AR (and its potential for smartphone-like ubiquity), I want them to ask questions about the supporting systems, along with the social and ethical repercussions of these transformations. At the end of it all, I hope that they will walk away with an appreciation for paying attention to what we make and why. For example, why would we make a machine that would take away our job? Why would we build a superintelligence? More often than not, I fear the answer is because we can.

Our focus on the technologies mentioned above is just a start. There are more, and we shouldn’t forget things like precise genetic-engineering techniques such as CRISPR/Cas9 gene editing, neuromorphic technologies such as microprocessors configured like brains, the digital genome that could be the key to disease eradication, machine learning, and robotics.

Though they may sound innocuous by themselves, each has gigantic implications for disruptions to society. The wild card is how they converge with each other, with results that no one anticipated. One such mutation would be autonomous weapons systems (AI + robotics + machine learning) converging with aggregated social media activity to predict, isolate, and eliminate a viral uprising.

Judging from recent articles and research by the Department of Defense, this is no longer theoretical; we are actively pursuing it. I’ll talk more about that next week. Until then, pay attention.


1. http://www.bizjournals.com/sanjose/news/2016/09/06/exclusivegoogle-singularity-visionary-ray.htm

Big-Data Algorithms. Don’t worry. Be happy.


It’s easier for us to let the data decide for us. At least that is the thinking at Huge, a global digital design agency. Aaron Shapiro is the CEO. He says, “The next big breakthrough in design and technology will be the creation of products, services, and experiences that eliminate the needless choices from our lives and make ones on our behalf, freeing us up for the ones we really care about: Anticipatory design.”

Buckminster Fuller wrote about Anticipatory Design Science, but this is not that. Trust me. Shapiro’s version is about allowing big data, by way of artificial intelligence and neural networks, to become so familiar with us and our preferences that it anticipates what we need to do next. In this vision, I don’t have to decide what to wear or eat, how to get to work, when to buy groceries or gasoline, which color trousers go with my shoes, or when it’s time to buy new shoes. No decisions will be necessary. Interestingly, Shapiro sees this as a good thing. The idea comes from a flurry of activity around something called decision fatigue. What is that? In a nutshell, the theory holds that our decision-making capacity is a reservoir that gradually gets depleted the more decisions we make, possibly as a result of body chemistry. After a long string of decisions, we are more likely to make a bad decision or none at all. Things like willpower disintegrate along with our decision-making.

Among the many articles on this topic in the last few months was one from FastCompany, which wrote that,

“Anticipatory design is fundamentally different: decisions are made and executed on behalf of the user. The goal is not to help the user make a decision, but to create an ecosystem where a decision is never made—it happens automatically and without user input. The design goal becomes one where we eliminate as many steps as possible and find ways to use data, prior behaviors and business logic to have things happen automatically, or as close to automatic as we can get.”

Supposedly this frees “us up for the ones we really care about.” My questions are: who decides which decisions are important? And once we are freed from making decisions, will we even know that we have missed one that we really care about?
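To make the concept concrete, here is a toy sketch of an anticipatory rule, my own illustration rather than any agency’s actual product logic: if past behavior is consistent enough, the system decides; otherwise it hands the choice back to the user.

```python
from collections import Counter

def anticipate(history, threshold=0.8):
    """Toy anticipatory design: auto-decide when past behavior is
    consistent enough; otherwise return None and ask the user."""
    if not history:
        return None
    choice, count = Counter(history).most_common(1)[0]
    return choice if count / len(history) >= threshold else None

# Four of five past lunches were salad, so lunch is decided for you.
print(anticipate(["salad", "salad", "burger", "salad", "salad"]))  # salad
```

Notice that the threshold is exactly the question raised above: someone other than the user decides how much consistency it takes before a choice quietly disappears.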

Google Now is a digital assistant that not only responds to a user’s requests and questions but also predicts wants and needs based on search history: pulling flight information from emails and meeting times from calendars, and providing recommendations of where to eat and what to do based on past preferences and current location. The user simply has to open the app for the information to compile.

It’s easy to forget that AI as we currently know it goes under the name of Facebook or Google or Apple or Amazon. We tend to think of AI as some ghostly future figure, a bank of servers, or an autonomous robot. It reminds me a bit of my previous post about Nick Bostrom and the development of superintelligence. Perhaps it is a bit like an episode of Person of Interest. As we think about designing systems that think for us and decide what is best for us, it might be a good idea to think about what it might be like to no longer think—as long as we still can.


Superintelligence. Is it the last invention we will ever need to make?

I believe it is crucial that we move beyond preparing to adapt or react to the future and actively engage in shaping it.

An excellent example of this kind of thinking is Nick Bostrom’s TED talk from 2015.

Bostrom is concerned about the day when machine intelligence exceeds human intelligence (the guess is somewhere between twenty and thirty years from now). He points out that, “Once there is super-intelligence, the fate of humanity may depend on what the super-intelligence does. Think about it: Machine intelligence is the last invention that humanity will ever need to make. Machines will then be better at inventing [designing] than we are, and they’ll be doing so on digital timescales.”

His concern is legitimate. How do we control something that is smarter than we are? Anticipating AI will require more strenuous design thinking than that which produces the next viral game, app, or service. But these applications are where the lion’s share of the money is going. When it comes to keeping us from being at best irrelevant or at worst an impediment to AI, Bostrom is guardedly optimistic about how we can approach it. He thinks we could, “[…]create an A.I. that uses its intelligence to learn what we value, and its motivation system is constructed in such a way that it is motivated to pursue our values or to perform actions that it predicts we would approve of.”

At the crux of his argument and mine: “Here is the worry: Making superintelligent A.I. is a really hard challenge. Making superintelligent A.I. that is safe involves some additional challenge on top of that. The risk is that if somebody figures out how to crack the first challenge without also having cracked the additional challenge of ensuring perfect safety.”

Beyond machine learning (which has many facets), there is a wide-ranging set of technologies, from genetic engineering to drone surveillance, to next-generation robotics, and even VR, that could be racing forward without anyone thinking about this “additional challenge.”

This could be an excellent opportunity for designers. But, to do that, we will have to broaden our scope to engage with science, engineering, and politics. More on that in future blogs.


Nine years from now.


Today I’m on my soapbox again, as an advocate of design thinking, with design fiction as part of its toolbox.

In 2014, the Pew Research Center published a report on Digital Life in 2025. In its words, “The report covers experts’ views about advances in artificial intelligence (AI) and robotics, and their impact on jobs and employment.” Its nutshell conclusion was that:

“Experts envision automation and intelligent digital agents permeating vast areas of our work and personal lives by 2025 [nine years from now], but they are divided on whether these advances will displace more jobs than they create.”

On the upside, some of the “experts” believe that we will, as the brilliant humans that we are, invent new kinds of uniquely human work that we can’t replace with AI—a return to an artisanal society with more time for leisure and our loved ones. Some think we will be freed from the tedium of work and find ways to grow in some other “socially beneficial” pastime. Perhaps we will just be chillin’ with our robo-buddy.

On the downside, there are those who believe that not only blue-collar, robotic jobs will vanish but also white-collar, thinking jobs, and that will leave a lot of people out of work, since there are only so many jobs as clerks at McDonald’s or greeters at Wal-Mart. They think some of this is the fault of education for not better preparing us for the future.

A few weeks ago I blogged about people who are thinking about addressing these concerns with something called Universal Basic Income (UBI), a $12,000-a-year gift to everyone in the world, since everyone will be out of work. I’m guessing (though it wasn’t explicitly stated) that this money would come from all the corporations raking in the bucks by employing the AIs, the robots, and the digital agents, but who no longer have anyone on the payroll. The advocates of this idea did not address whether the executives at these companies, presumably still employed, will make more than $12,000, nor whether the advocates themselves were on the 12K list. I guess not. They also did not specify who would buy the services these corporations were offering if we are all out of work. But I don’t want to repeat that rant here.

I’m not as optimistic about the unique capability of humankind to find new, uniquely human jobs in some new, utopian artisanal society. Music, art, and blogs, by the way, are already being produced by AI. I do agree, however, that we are not educating our future decision-makers to adjust adequately to whatever comes along. The process of innovative design thinking is a huge hedge against technological surprise, but few schools have ever entertained the notion, and some have never even heard of it. In some cases, it has been adopted, but as a bastardized hybrid serving business-as-usual competitive one-upmanship.

I do believe that design, in its newest and most innovative realizations, is the place for these anticipatory discussions about the future. What we need is thinking that encompasses a vast array of cross-disciplinary input, including philosophy and religion, because these issues are more than black and white; they are ethically and morally charged, and they are inseparable from our culture—the scaffolding that we as a society use to answer our most existential questions. There is a lot of work to do to survive ourselves.


Sitting around with my robo-muse and collecting a check.


Writing a weekly blog can be a daunting task, especially amid teaching, research and, of course, the ongoing graphic novel. I can only imagine the challenge for those who do it daily. Thank goodness for friends who send me articles. This week the piece comes from The New York Times tech writer Farhad Manjoo. The article is entitled, “A Plan in Case Robots Take the Jobs: Give Everyone a Paycheck.” The topic follows nicely on the heels of last week’s blog about the inevitability of robot-companions. Unfortunately, both the author and the people behind this idea appear to be woefully out of touch with reality.

Here is the premise: After robots and AI have become ubiquitous and mundane, what will we do with ourselves? “How will society function after humanity has been made redundant? Technologists and economists have been grappling with this fear for decades, but in the last few years, one idea has gained widespread interest — including from some of the very technologists who are now building the bot-ruled future,” asks Manjoo.

The answer, strangely enough, seems to be coming from venture capitalists and millionaires like Albert Wenger, who is writing a book on the idea of U.B.I. — universal basic income — and Sam Altman, president of the tech incubator Y Combinator. Apparently, they think that $1,000 a month would be about right, “…about enough to cover housing, food, health care and other basic needs for many Americans.”

This equation, $12,000 per year, possibly works for the desperately poor in rural Mississippi. Perhaps it is intended for some 28-year-old citizen with no family or social life. Of course, there would be no money for that iPhone or cable service. Such a mythical person has a $300 rent-controlled apartment (utilities included), benefits covered by the government, doesn’t own a car, buy gas, or pay insurance, and then maybe doesn’t eat either. Though these millionaires clearly have no clue what it costs the average American to eke out a living, they have identified some other fundamental questions:

“When you give everyone free money, what do people do with their time? Do they goof off, or do they try to pursue more meaningful pursuits? Do they become more entrepreneurial? How would U.B.I. affect economic inequality? How would it alter people’s psychology and mood? Do we, as a species, need to be employed to feel fulfilled, or is that merely a legacy of postindustrial capitalism?”

The Times article continues with, “Proponents say these questions will be answered by research, which in turn will prompt political change. For now, they argue the proposal is affordable if we alter tax and welfare policies to pay for it, and if we account for the ways technological progress in health care and energy will reduce the amount necessary to provide a basic cost of living.”
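For a sense of how far $1,000 a month stretches, here is a back-of-the-envelope check. The cost figures are my illustrative assumptions, not data from the article:

```python
ubi = 1000  # proposed monthly U.B.I.
# Assumed sample monthly costs for a single adult renter (illustrative only).
costs = {"rent": 900, "food": 300, "health care": 350,
         "utilities": 150, "transportation": 150}
shortfall = sum(costs.values()) - ubi
print(shortfall)  # 850 -- short by $850 before a phone or cable enters the picture
```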

Often, the people who float ideas like this paint them as utopia, but I have a couple of additional questions. Why are venture capitalists interested in this notion? Will they also reduce their income to $1,000 per month? Seriously, that never happens. Instead, we see progressives in government and finance using an equation like this: “One thousand for you. One hundred thousand for me. One thousand for you. One hundred thousand for me…”

Fortunately, it is an unlikely scenario, because it would not move us toward equality but toward a permanent underclass forever dependent on those who have. Scary.


Your robot buddy is almost ready.


There is an impressive video out this week showing a rather adept robot going through the paces, so to speak, getting smacked down and then getting back up again. The year is 2016, and the technology is striking. The company is Boston Dynamics. One thing you know from reading this blog is that I subscribe to the Law of Accelerating Returns. So, as we watch this video, if we want to be hypercritical, we can see that the bot still needs some shepherding, handles the bumpy terrain tentatively, and is slow to get up after a violent shove to the floor. On the other hand, if you are at all impressed with the achievement, then you know that the people working on this will only make it better, more agile, more svelte, and probably faster on the comeback. Let’s call this ingredient one.


Last year I devoted several blogs to the advancement of AI, about the corporations rushing to be the go-to source for the advanced version of predictive behavior and predictive decision-making. I have also discussed systems like Amazon Echo that use the advanced Alexa Voice Service. It’s something like Siri on steroids. Here’s part of Amazon’s pitch:

“• Hears you from across the room with far-field voice recognition, even while music is playing
• Answers questions, reads audiobooks and the news, reports traffic and weather, gives info on local businesses, provides sports scores and schedules, and more with Alexa, a cloud-based voice service
• Controls lights, switches, and thermostats with compatible WeMo, Philips Hue, Samsung SmartThings, Wink, Insteon, and ecobee smart home devices
• Always getting smarter and adding new features and skills…”

You’re supposed to place it in a central position in the home, where it is proficient at picking up your voice from the other room. Just don’t go too far. The problem with Echo is that it’s stationary. Call Echo ingredient two. What Echo needs is ingredient one.
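For a rough sense of what a service like Echo is doing, here is a heavily simplified, hypothetical sketch of the listen-wake-respond loop. None of the function names are Amazon’s; in the real product, wake-word detection runs on the device while the heavy lifting (speech recognition, intent matching) happens in the cloud.

```python
def detect_wake_word(audio):
    # Stand-in for on-device, far-field wake-word detection.
    return b"alexa" in audio

def transcribe(audio):
    # Stand-in for cloud speech-to-text.
    return "what is the weather"

def handle_intent(utterance):
    # Stand-in for cloud intent matching against available "skills."
    if "weather" in utterance:
        return "Partly cloudy, with a high of 60."
    return "Sorry, I don't know that one."

def assistant_loop(audio_frames):
    for frame in audio_frames:
        if detect_wake_word(frame):
            print(handle_intent(transcribe(frame)))

assistant_loop([b"hey alexa what is the weather", b"background chatter"])
```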

Some of the biggest players in the AI race right now are Google, Amazon, Apple, Facebook, IBM, Elon Musk, and the military, though the government probably won’t give you a lot of details on that. All of these billion-dollar companies have a vested interest in machine learning and predictive algorithms, not least Google. Google already uses Voice Search (“OK, Google”) to enable type-free searching, and its machine-learning software, TensorFlow, has been open-sourced. The better a machine can learn, the more reliable the Google autonomous vehicle will be. Any one of these folks could be ingredient three.
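Since TensorFlow is the tag on this archive, a minimal example of what the open-sourced library does may help. This is a version of the canonical getting-started exercise of the TF 1.x era, fitting the line y = 2x + 1 by gradient descent; it sketches the style of the API, not anything Google ships in a product.

```python
import numpy as np
import tensorflow as tf  # TensorFlow 1.x-style API

# Synthetic training data on the line y = 2x + 1.
x_data = np.random.rand(100).astype(np.float32)
y_data = 2.0 * x_data + 1.0

# Model parameters the optimizer will learn.
W = tf.Variable(tf.random_uniform([1], -1.0, 1.0))
b = tf.Variable(tf.zeros([1]))
y = W * x_data + b

loss = tf.reduce_mean(tf.square(y - y_data))
train = tf.train.GradientDescentOptimizer(0.5).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(200):
        sess.run(train)
    print(sess.run([W, b]))  # approaches [2.0] and [1.0]
```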

I’m going to try to close the loop on this, at least for today. Guess who bought Boston Dynamics last year? That would be corporation X, formerly known as Google X.


A facebook of a different color.

The tech site Ars Technica recently ran an article on the proliferation of a little-known app called Facewatch. According to the article’s writer, Sebastian Anthony, “Facewatch is a system that lets retailers, publicans, and restaurateurs easily share private CCTV footage with the police and other Facewatch users. In theory, Facewatch lets you easily report shoplifters to the police, and to share the faces of generally unpleasant clients, drunks, etc. with other Facewatch users.” The idea is that retailers or officials can look out for these folks and either keep an eye on them or just ask them to leave. The system, in use in the UK, appears to have a high rate of success.


The story continues. Of course, all technologies eventually converge, so now you don’t have to “keep an eye out” for ne’er-do-wells; your CCTV can do it for you. NeoFace from NEC works with the Facewatch list to do the scouting for you. According to NEC’s website: “NEC’s NeoFace Watch solution is specifically designed to integrate with existing surveillance systems by extracting faces in real time… and matching against a watch list of individuals.” In this case, it would be the Facewatch database. Ars’ Anthony makes this connection: “In the film Minority Report, people are rounded up by the Precrime police agency before they actually commit the crime…with Facewatch, and you pretty much have the same thing: a system that automatically tars people with a criminal brush, irrespective of dozens of important variables.”

Anthony points out that,

“Facewatch lets you share ‘subjects of interest’ with other Facewatch users even if they haven’t been convicted. If you look at the shop owner in a funny way, or ask for the service charge to be removed from your bill, you might find yourself added to the ‘subject of interest’ list.”

The odds of an innocent being added to the watch list are quite good. Malicious behavior aside, you could be logged as you wander past a government protest, forget your PIN too many times at the ATM, or simply look too creepy in your Ray-Bans and hoodie.
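Under the hood, systems like this typically reduce each face to a numeric “embedding” and compare it against the watch list with a similarity threshold. Here is a toy sketch of that idea, my own illustration rather than NEC’s algorithm; note how a generous threshold flags lookalikes who have done nothing at all.

```python
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_watch_list(face, watch_list, threshold=0.9):
    """Return the names whose stored embeddings resemble this face.
    Lowering the threshold catches more 'subjects of interest' --
    and more innocents with similar features."""
    return [name for name, emb in watch_list.items()
            if cosine(face, emb) >= threshold]

watch_list = {"subject_42": np.array([0.9, 0.1, 0.3]),
              "subject_77": np.array([0.2, 0.8, 0.5])}
passerby = np.array([0.85, 0.15, 0.35])  # an innocent with a similar face
print(match_watch_list(passerby, watch_list))  # ['subject_42']
```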

The story underscores a couple of my past rants. First, we don’t make laws to protect against things that are impossible, so when the impossible happens, we shouldn’t be surprised that there isn’t a law to protect against it.1 It is another red flag that technology is moving too fast, and as it converges with other technologies, it becomes radically unpredictable. Second, technology moves faster than politics, faster than policy, and often faster than ethics.2

There is a host of personal apps, many available for our iPhones or Androids, that sit on the precarious line between legal and illegal, curious and invasive. And there are more to come.


1. Quoting Selinger from Wood, David. “The Naked Future — A World That Anticipates Your Every Move.” YouTube, 15 Dec. 2013. Web. 13 Mar. 2014.
2. Quoting Richards from Farivar, Cyrus. “DOJ Calls for Drone Privacy Policy 7 Years after FBI’s First Drone Launched.” Ars Technica. September 27, 2013. Accessed March 13, 2014. http://arstechnica.com/tech-policy/2013/09/doj-calls-for-drone-privacy-policy-7-years-after-fbis-first-drone-launched/.

The foreseeable future.

From my perspective, the two most disruptive technologies of the next ten years will be a couple of acronyms: VR and AI. Virtual Reality will transform the way people learn and the way they divert themselves. It will play an increasing role in entertainment and gaming, to the extent that many will experience some confusion and conflict between virtual and actual reality. Make sure you see last week’s blog for more on this. So much is happening in VR and AI that these two could easily crowd out a host of other topics on this site next year. Today, I’ll begin the discussion with AI, but both technologies fall under my broader topic of the foreseeable future.

One of my favorite quotes of 2014 (seems like ancient history now) came from an article in Ars Technica by Cyrus Farivar.1 It was a story about the FBI’s quiet proliferation of drones, to the tune of $5 million spent gradually over ten years, almost unnoticed. Farivar cites a striking quote from Neil Richards, a law professor at Washington University in St. Louis: “We don’t write laws to protect against impossible things, so when the impossible becomes possible, we shouldn’t be surprised that the law doesn’t protect against it…” I love that quote because we are continually surprised that we did not anticipate one thing or the other. Much of this surprise, I believe, comes from experts who tell us that this or that won’t happen in the foreseeable future. One of these experts, Miles Brundage, a Ph.D. student at Arizona State, was quoted recently in an article in WIRED. About AI that could surpass human intelligence, Brundage said,

“At the point where we are today, no AI system is at all capable of taking over the world—and won’t be for the foreseeable future.”

Two things strike me about these kinds of statements. First is the obvious fact that no one can see the future in the first place; second, the clear implication that it will happen, just not yet. It also suggests that we shouldn’t be concerned; it’s too far away.

The article was about Elon Musk open-sourcing something called OpenAI. According to Nathaniel Wood, reporting for WIRED, OpenAI is deep-learning code that Musk and his investors want to share with the world, for free. This news comes on the heels of Google’s open-sourcing of its AI code, TensorFlow, immediately followed by a Facebook announcement that it would be sharing its Big Sur server hardware. As the article points out, this is not all magnanimous altruism. By opening the door to formerly proprietary software or hardware, folks like Musk and companies like Google and Facebook stand to gain. They gain by recruiting talent and by exponentially increasing development through free outsourcing. A thousand people working with your code are much better than the hundreds inside your building. Here are two very important factors that folks like Brundage don’t take into consideration. First, these people are in a race, and through open-sourcing their stuff they are enlisting others to help them run it. Second, there is that term, exponential. I use it most often when I refer to Kurzweil’s Law of Accelerating Returns. It is exactly these kinds of developments that make his prediction so believable. So maybe the foreseeable future is not that far away after all.

All this being said, the future is not foreseeable, and the exponential growth in areas like VR and AI will continue. The WIRED article continues with this commentary on AI (which we all know):

“Deep learning relies on what are called neural networks, vast networks of software and hardware that approximate the web of neurons in the human brain. Feed enough photos of a cat into a neural net, and it can learn to recognize a cat. Feed it enough human dialogue, and it can learn to carry on a conversation. Feed it enough data on what cars encounter while driving down the road and how drivers react, and it can learn to drive.”
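The “feed it enough examples” loop in that quote can be shown at toy scale. Below is a single artificial “neuron” in plain Python and numpy, nowhere near a deep network, trained to separate made-up “cat” examples from “not cat” ones; the data and labels are invented purely for illustration.

```python
import numpy as np

# Made-up two-feature examples: label 1 = "cat," 0 = "not cat."
X = np.array([[0.9, 0.8], [0.8, 0.9], [0.1, 0.2], [0.2, 0.1]])
y = np.array([1.0, 1.0, 0.0, 0.0])

w, b = np.zeros(2), 0.0
for _ in range(1000):                      # "feed it enough photos..."
    p = 1 / (1 + np.exp(-(X @ w + b)))     # current guesses
    err = p - y                            # how wrong each guess is
    w -= 0.5 * (X.T @ err) / len(y)        # nudge the weights
    b -= 0.5 * err.mean()

print(np.round(1 / (1 + np.exp(-(X @ w + b))), 2))  # near [1, 1, 0, 0]
```

Swap the four rows for millions of images and the two weights for billions, and you have the scale at which Google, Facebook, and Musk are playing.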

Benevolence aside, this is why Musk and Facebook and Google are in the race. Musk is quick to add that while his motives have an air of transparency to them, it is also true that the more people who have access to deep-learning software, the less likely it is that one guy will have a monopoly on it.

Musk is a smart guy. He knows that AI could be a blessing or a curse. Open sourcing is his hedge. It could be a good thing… for the foreseeable future.


1. Farivar, Cyrus. “DOJ Calls for Drone Privacy Policy 7 Years after FBI’s First Drone Launched.” Ars Technica. September 27, 2013. Accessed March 13, 2014. http://arstechnica.com/tech-policy/2013/09/doj-calls-for-drone-privacy-policy-7-years-after-fbis-first-drone-launched/.