Tag Archives: artificial intelligence

The genius panel has some serious concerns.

Occasionally in preparing this blog, there are troughs in the technology newsfeed. But not now, and maybe never again. So it is with technology that accelerates exponentially. This, by the way, is an idea of which I will no longer try to convince my readers. I’m going to stop arguing that Kurzweil’s claim that technology advances exponentially is no longer just a theory and simply move forward with the assumption that you know it is. If you don’t agree, then scout backwards through probably six months of previous blogs and you’ll be on the same page. From here on, technology advances exponentially! With that being said, we are also no longer at the base of the exponential curve. We are beginning a steep climb.

Last week I highlighted Kurzweil’s upgraded prediction on the Singularity (12 years). I agree, though now I think he may be underselling things. It could easily arrive before that.

Today’s blog comes from a hot tip from one of my students. At the beginning of each semester, I always turn my students on to the idea of Google Alerts. It works like this: you tell Google to send you anything and everything on whatever topic interests you. Then, anytime there is news online that fits your topic, you get an email with a list of links from Google. The emails can be inundating, so choose your search wisely. At any rate, the student who drank the Google Alerts Kool-Aid sent me a link to a panel discussion that took place in January of 2017. The panel convened at something called Beneficial AI 2017 in Asilomar, California. And what a panel it was. Get this: Bart Selman (Cornell), David Chalmers (NYU), Elon Musk (Tesla, SpaceX), Jaan Tallinn (CSER/FLI), Nick Bostrom (FHI), Ray Kurzweil (Google), Stuart Russell (Berkeley), Sam Harris, and Demis Hassabis (DeepMind). Harris is a philosopher, author, neuroscientist, and noted secularist. I’ve cited nearly all of these characters before in blogs or research papers, so to see them all on one panel was, for me, amazing.

L to R: Elon Musk, Stuart Russell, Bart Selman, Ray Kurzweil, David Chalmers, Nick Bostrom, Demis Hassabis, Sam Harris, Jaan Tallinn.

 

Why were they there? The Future of Life Institute (FLI) organized the BAI 2017 event:

“In our sequel to the 2015 Puerto Rico AI conference, we brought together an amazing group of AI researchers from academia and industry, and thought leaders in economics, law, ethics, and philosophy for five days dedicated to beneficial AI.”

FLI works together with CSER (the Centre for the Study of Existential Risk). I confess that I was not aware of either organization, but this is encouraging. For example, CSER’s mission is stated as

“[…]within the University of Cambridge dedicated to the study and mitigation of human extinction-level risks that may emerge from technological advances and human activity.”

FLI describes themselves thus:

“We are a charity and outreach organization working to ensure that tomorrow’s most powerful technologies are beneficial for humanity […] We are currently focusing on keeping artificial intelligence beneficial and we are also exploring ways of reducing risks from nuclear weapons and biotechnology.”

Both organizations are loaded with scientists and technologists, including Stephen Hawking, Bostrom, and Musk.

The panel of geniuses got off to a rocky start because there weren’t enough microphones to go around. Duh. But then things got interesting. The topic of safe AI, or what these fellows refer to as AGI (Artificial General Intelligence), is a deep well fraught with both promise and doom. The encouraging thing is that these organizations realize the potential for either; the discomforting thing is that they’re genuinely concerned.

As I have discussed before, this race to a superintelligence, which Kurzweil moved up to 2029 a few weeks ago, is moving full speed ahead, and it is climbing a steep exponential incline. It is likely that we will be able to build one long before we have figured out how to keep it from destroying us. I’m on record as saying that even the notion of a superintelligence is an error in judgment. If what you want to do is cure disease, end aging, and save the planet, why not stop short of full-tilt superintelligence? Surely you could get a very, very, very intelligent AI to give you what you want and go no further. After hearing the panel discussion, however, I see this as naive. As Kurzweil stated in the discussion,

“…there really isn’t a foolproof technical solution to this… If you have an AI that is more intelligent than you and is out for your destruction, it’s out for the world’s destruction, and there is no other AI that is superior to it, that’s a bad situation. So that’s the specter […] Imagine that we’ve done our job perfectly, and we’ve created the most safe, beneficial AI possible, but we’ve let the political system become totalitarian and evil, either an evil world government or just a portion of the globe, that is that way, it’s not going to work out well. So part of the struggle is in the area of politics and policy to have the world reflect the values we want to achieve. Human AI is by definition at human levels and therefore is human. So the issue is, ‘How do we make humans ethical?’ is the same issue as, ‘How we make AIs that are at human level, ethical?’”

So there we have the problem of human nature, again. If we can’t fix ourselves, if we can’t even agree on what’s broken, how can we build a benevolent god? Fortunately, brilliant minds are honestly concerned about this, but that doesn’t mean they’re going to put on the brakes. The panel stated in full agreement: a superintelligence is inevitable. If we don’t build it, someone else will.

It is also safe to assume that our super ethical AI won’t have the same ethics as someone else’s AI. Hence, Kurzweil’s specter. I could turn this into an essay, but I’ll stop here for now. What do you think?


But nobody knows what better is.

South by Southwest, otherwise known as SXSW, calls itself a film and music festival and interactive media conference. It’s held every spring in Austin, Texas. Other than maybe the Las Vegas Consumer Electronics Show or San Diego’s Comic-Con, I can’t think of many conferences that generate as much buzz as SXSW. This year is no different. I will have blog fodder for weeks. Though I can’t speak to the film or music side, I’m sure they were scintillating. Under the category of interactive, most of the buzz is about technology in general, as tech gurus and futurists are always in attendance, along with celebs who align themselves with the future.

Once again at SXSW, Ray Kurzweil was on stage. Kurzweil is probably the one guy I quote the most throughout this blog. So here we go again. Two tech sites caught my eye this week, reporting on Kurzweil’s latest prediction, which moves up the date of the Singularity from 2045 to 2029; that’s 12 years away. Since we are enmeshed in the world of exponentially accelerating technology, I have encouraged my students to start wrapping their heads around the idea of exponential growth. In our most recent project, it was a struggle just to embrace the idea of how, in only seven years, we could see transformational change. If Kurzweil is right about his latest prognostication, then 12 years could be a real stunner. In case you are visiting this blog for the first time, the Singularity to which Kurzweil refers is acknowledged as the point at which computer intelligence exceeds that of human intelligence: it will know more, anticipate more, and analyze more than any human is capable of. Nick Bostrom calls it the last invention we will ever need to make. We’ve already seen this to some extent with IBM’s Watson beating the pants off a couple of Jeopardy masters and Google’s DeepMind handily beating a Go genius at a game that most thought too complex for a computer to handle. Some refer to this “computer” as a superintelligence and warn that we had better be designing the braking mechanism in tandem with the engine, or this smarter-than-us computer may outsmart us in unfortunate ways.

In an article in Scientific American, Northwestern University psychology professor Paul Reber says we are bombarded each day with about 2.5 exabytes of data, while the human brain can only store an estimated 2.5 petabytes (a petabyte is a million gigabytes). Of course, the bombardment will continue to increase. Another voice that emerges in this discussion is Rob High, IBM’s vice president and chief technology officer. According to the Futurism tech blog, High was part of a panel discussion at the American Institute of Aeronautics and Astronautics (AIAA) SciTech Conference 2017. High said,

“…we have a very desperate need for cognitive computing…The information being produced is far surpassing our ability to consume and make use of…”

On the surface, this seems like a compelling argument for faster, more pervasive computing. But since it is my mission to question otherwise compelling arguments, I want to ask whether we actually need to process 2.5 exabytes of information. It would appear that our existing technology has already turned on the firehose of data (did we give it permission?) and now it’s up to us to find a way to drink from it. To me, it sounds like we need a regulator, not a bigger gullet. I have observed that the traditional argument in favor of more, better, faster often comes wrapped in the package of help for humankind.
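To put the firehose in perspective, here is a back-of-envelope sketch in Python, taking the article’s two figures at face value and using decimal units (one exabyte is 1,000 petabytes):

```python
# Scale check on the figures above: ~2.5 exabytes of new data per day
# versus an estimated ~2.5 petabytes of storage in one human brain.
# (Decimal units: 1 exabyte = 1,000 petabytes.)
daily_data_pb = 2.5 * 1000   # daily data production, in petabytes
brain_capacity_pb = 2.5      # estimated capacity of one brain, in petabytes

# How many "brains-full" of data arrive every single day?
brains_per_day = daily_data_pb / brain_capacity_pb
print(brains_per_day)  # 1000.0
```

In other words, even by this rough estimate, each day’s output would fill a thousand human memories. No amount of bigger gullet closes that gap.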

Rob High, again from the Futurism article, says,

“‘If you’re a doctor and you’re trying to figure out the best way to treat your patient, you don’t have the time to go read the latest literature and apply that knowledge to that decision,’ High explained. ‘In any scenario, we can’t possibly find and remember everything.’ This is all good news, according to High. We need AI systems that can assist us in what we do, particularly in processing all the information we are exposed to on a regular basis — data that’s bound to even grow exponentially in the next couple of years.”

From another Futurism article, Kurzweil uses a similar logic:

“We’re going to be able to meet the physical needs of all humans. We’re going to expand our minds and exemplify these artistic qualities that we value.”

The other rationale that almost always comes coupled with expanding our minds is that we will be “better.” No one, however, defines what better is. You could be a better jerk. You could be a better rapist or terrorist or megalomaniac. What are we missing, exactly, that we have to be smarter, or that Bach or Mozart are suddenly inferior? Is our quality of life that impoverished? And for those who are impoverished, how does this help them? And what about making us smarter? Smarter at what?

But not all is lost. On a more positive note, Futurism, in a third article (they were busy this week), reports,

“The K&L Gates Endowment for Ethics and Computational Technologies seeks to introduce the thoughtful discussion on the use of AI in society. It is being established through funding worth $10 million from K&L Gates, one of the United States’ largest law firms, and the money will be used to hire new faculty chairs as well as support three new doctoral students.”

Though I’m not sure we can consider this a regulator; it’s more like something to lessen the pain of swallowing.

Finally (for this week), back to Rob High,

“Smartphones are just the tip of the iceberg,” High said. “Human intelligence has its limitations and artificial intelligence is going to evolve in a lot of ways that won’t be similar to human intelligence. But, I think they will work best in the presence of humans.”

So, I’m more concerned with when artificial intelligence is not working at its best.


Disruption. Part 2.

 

Last week I discussed the idea of technological disruption. Essentially, these are innovations that make fundamental changes in the way we work or live. In turn, these changes affect culture and behavior. Issues of design and culture are the stuff that interests me and my research: how easily and quickly our practices change as a result of the way we enfold technology. The advent of the railroad, mass-produced automobiles, radio, then television, the Internet, and the smartphone all qualify as disruptions.

Today, technology advances more quickly. Technological development was never linear, but because most of the tech advances of the last century sat at the bottom of the exponential curve, we didn’t notice them. New technologies under development right now are going to be realized more quickly (especially the ones with big funding), and because of convergence (the intermixing of unrelated technologies), their consequences will be less predictable.

One of my favorite futurists is Amy Webb whom I have written about before. In her most recent newsletter, Amy reminds us that the Internet was clunky and vague long before it was disruptive. She states,

“However, our modern Internet was being built without the benefit of some vital voices: journalists, ethicists, economists, philosophers, social scientists. These outside voices would have undoubtedly warned of the probable rise of botnets, Internet trolls and Twitter diplomacy––would the architects of our modern internet have done anything differently if they’d confronted those scenarios?”

Amy inadvertently left out the design profession, though I’m sure she will reconsider after we chat. Indeed, the design profession is a key contributor to transformative tech, and design thinkers, along with the ethicists and economists, can help to visualize and reframe future visions.

Amy thinks that the next transformation will be our voice,

“From here forward, you can be expected to talk to machines for the rest of your life.”

Amy is referring to technologies like Alexa, Siri, Google, Cortana, and something coming soon called Bixby. The voices of these technologies are, of course, only the window dressing for artificial intelligence. But she astutely points out that,

“…we also know from our existing research that humans have a few bad habits. We continue to encode bias into our algorithms. And we like to talk smack to our machines. These machines are being trained not just to listen to us, but to learn from what we’re telling them.”

Such a merger might just be the mix of any technology (name one) with human nature or the human condition: AI meets Mike, who lives across the hall. AI becoming acquainted with Mike may have been inevitable, but the fact that Mike happens to be a jerk was less predictable, and so the outcome is, too. The most significant disruptions of the future are going to come from the convergence of seemingly unrelated technologies. Sometimes innovation depends on convergence, like building an artificial human that will have to master a lot of different functions. Other times, convergence is accidental, or at least unplanned. The engineers over at Boston Dynamics who are building those intimidating walking robots are focused on a narrower set of criteria than someone creating an artificial human; perhaps power and agility are their primary concerns. Then, in another lab, there are technologists working on voice stress analysis, and in another setting, researchers are looking to create an AI that can choose your wardrobe. Somewhere else, we are working on facial recognition or Augmented Reality or Virtual Reality or bio-engineering, medical procedures, autonomous vehicles, or autonomous weapons. So it’s a lot like Harry meets Sally: you’re not sure what you’re going to get or how it’s going to work.

Digital visionary Kevin Kelly thinks that AI will be at the core of the next industrial revolution. Place the prefix “smart” in front of anything, and you have a new application for AI: a smart car, a smart house, a smart pump. These seem like universally useful additions, so far. But now let’s add the same prefix to the jobs you and I do, like a doctor, lawyer, judge, designer, teacher, or policeman. (Here’s a possible use for that ominous walking robot.) And what happens when AI writes better code than coders and decides to rewrite itself?

Hopefully, you’re getting the picture. All of this underscores Amy Webb’s earlier concerns. The ‘journalists, ethicists, economists, philosophers, social scientists’ and designers are rarely in the labs where the future is taking place. Should we be doing something fundamentally different in our plans for innovative futures?

Side note: Convergence can happen in a lot of ways. The parent corporation of Boston Dynamics is X. I’ll use Wikipedia’s definition of X: “X, an American semi-secret research-and-development facility founded by Google in January 2010 as Google X, operates as a subsidiary of Alphabet Inc.”


Disruption. Part 1.

 

We often associate the term disruption with a snag in our phone, internet, or other infrastructure service, but there is a larger sense of the expression. Technological disruption refers to the phenomenon that occurs when innovation “…significantly alters the way that businesses operate. A disruptive technology may force companies to alter the way that they approach their business, risk losing market share or risk becoming irrelevant.”1

Some track the idea as far back as Karl Marx, who influenced economist Joseph Schumpeter to coin the term “creative destruction” in 1942.2 Schumpeter described it as the “process of industrial mutation that incessantly revolutionizes the economic structure from within, incessantly destroying the old one, incessantly creating a new one.” But it was Clayton M. Christensen, a Harvard Business School professor, who described its current framework: “…a disruptive technology is a new emerging technology that unexpectedly displaces an established one.”3

OK, so much for the history lesson. How does this affect us? Historical examples of technological disruption go back to the railroads and the mass-produced automobile, technologies that changed the world. Today we can point to the Internet as possibly this century’s most transformative technology to date. However, we can’t ignore the smartphone, barely ten years old, which has brought together a host of converging technologies, substantially eliminating the need for the calculator, the dictaphone, landlines, the GPS box that you used to put on your dashboard, still and video cameras, and possibly your privacy. With the proliferation of apps within the smartphone platform, there are hundreds if not thousands of other “services” that now do work that we previously did by other means.

But hold on to your hat. Technological disruption is just getting started. For the next round, we will see an increasingly pervasive Internet of Things (IoT), advanced robotics, exponential growth in Artificial Intelligence (AI) and machine learning, ubiquitous Augmented Reality (AR) and Virtual Reality (VR), blockchain systems, precise genetic engineering, and advanced renewable energy systems. Some of these, such as blockchain systems, could have cataclysmic effects on business: widespread adoption of blockchains that enable digital money could eliminate the need for banks, credit card companies, and currency of all forms. How’s that for disruptive? Other innovations will just continue to transform us and our behaviors. Over the next few weeks, I will discuss some of these potential disruptions and their unique characteristics.
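For readers wondering what makes blockchain systems so hard to tamper with, the core idea fits in a few lines of Python. This is a deliberately minimal sketch with made-up records, nothing like a production system: each block commits to the hash of the block before it, so rewriting any past record invalidates everything after it.

```python
# Minimal hash-chain sketch of the blockchain idea: each block's hash
# depends on its predecessor's hash, so history cannot be quietly edited.
import hashlib

def block_hash(prev_hash: str, data: str) -> str:
    """Hash a block's contents together with the previous block's hash."""
    return hashlib.sha256((prev_hash + data).encode()).hexdigest()

chain = []
prev = "0" * 64  # genesis placeholder
for record in ["alice pays bob 5", "bob pays carol 2"]:
    prev = block_hash(prev, record)
    chain.append((record, prev))

# Tampering with the first record produces a different hash, which would
# break the link to every later block.
tampered = block_hash("0" * 64, "alice pays bob 500")
print(tampered != chain[0][1])  # True
```

This linkage, not digital money per se, is what gives blockchains their disruptive property: trust comes from the structure rather than from a bank sitting in the middle.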

Do you have any you would like to add?

1 http://www.investopedia.com/terms/d/disruptive-technology.asp#ixzz4ZKwSDIbm

2 http://www.investopedia.com/terms/c/creativedestruction.asp

3 http://www.intelligenthq.com/technology/12-disruptive-technologies/

See also: Disruptive technologies: Catching the wave, Journal of Product Innovation Management, Volume 13, Issue 1, 1996, Pages 75-76, ISSN 0737-6782, http://dx.doi.org/10.1016/0737-6782(96)81091-5.
(http://www.sciencedirect.com/science/article/pii/0737678296810915)


Of autonomous machines.

 

Last week we talked about how converging technologies can sometimes yield unpredictable results. One of the most influential players in the development of new technology is DARPA and the defense industry. There is a lot of technological convergence going on in the world of defense. Let’s combine robotics, artificial intelligence, machine learning, bio-engineering, ubiquitous surveillance, social media, and predictive algorithms, for starters. All of these technologies are advancing at an exponential pace. It’s difficult to take a snapshot of any one of them at a moment in time and predict where it might be tomorrow. When you start blending them, the possibilities become downright chaotic. With each step, it is prudent to ask whether there is any meaningful review. What are the ramifications of error as well as success? What are the possibilities for misuse? Who is minding the store? We can hope that there are answers to these questions that go beyond platitudes like “Don’t stand in the way of progress,” “Time is of the essence,” or “We’ll cross that bridge when we come to it.”

No comment.

I bring this up after having seen some unclassified documents on Human Systems, and Autonomous Defense Systems (AKA autonomous weapons). (See a previous blog on this topic.) Links to these documents came from a crowd-funded “investigative journalist” Nafeez Ahmed, publishing on a website called INSURGE intelligence.

One of the documents, entitled Human Systems Roadmap, is a slide presentation given at the National Defense Industrial Association (NDIA) conference last year. The list of agencies involved in that conference and the rest of the documents cited reads like an alphabet soup of military and defense organizations that most of us have never heard of. There are multiple components to the pitch, but one that stands out is “Autonomous Weapons Systems that can take action when needed.” Autonomous weapons are those that are capable of making the kill decision without human intervention. There is also, apparently, some focused inquiry into “Social Network Research on New Threats… Text Analytics for Context and Event Prediction…” and “full spectrum social media analysis.” We could get all up in arms about this last feature, but recent incidents in places such as Benghazi, Egypt, and Turkey had a social networking component that enabled extreme behavior to be quickly mobilized. In most cases, the result was a tragic loss of life. In addition to sharing photos of puppies, social media, it seems, is also good at organizing lynch mobs. We shouldn’t be surprised that governments would want to know how to predict such events in advance. The bigger question is how we should intercede, and whether that decision should be made by a human being or a machine.

There are lots of other aspects and lots more documents cited in Ahmed’s lengthy, albeit activistic, report, but the idea here is that rapidly advancing technology is enabling considerations that were previously held to be science fiction, or just impossible. Will we reach the point where these systems are fully operational before we reach the point where we know they are totally safe? It’s a problem when technology grows faster than policy, ethics, or meaningful review. And it seems to me that it is always a problem when the race to make something work is more important than understanding the ramifications if it does.

To be clear, I’m not one of those people who think that anything and everything the military can conceive of is automatically wrong. We will never know how many catastrophes our national defense services have averted through their vigilance and technological prowess. It should go without saying that the bad guys will get more sophisticated in their methods and tactics, and if we are unable to stay ahead of the game, then we will need to get used to the idea of catastrophe. When push comes to shove, I want the government to be there to protect me. That being said, I’m not convinced that the defense infrastructure (or any part of the tech sector, for that matter) is as diligent about anticipating the repercussions of its creations as it is about getting them functioning. Only individuals can insist on meaningful review.

Thoughts?

 


Paying attention.

I want to make a T-shirt. On the front, it will say, “7 years is a long time.” On the back, it will say, “Pay attention!”

What am I talking about? I’ll start with some background. This semester, I am teaching a collaborative studio with designers from visual communications, interior design, and industrial design. Our topic is Humane Technologies, and we are examining the effects of an Augmented Reality (AR) system that could be ubiquitous in 7 years. The process began with an immersive scan of the available information and emerging advances in AR, VR, IoT, human augmentation (HA), and, of course, AI. In my opinion, these are just a few of the most transformative technologies currently attracting the heaviest investment across the globe. And where the money goes, there goes the most rapid advancement.

A conversation starter.

One of the biggest challenges for the collaborative studio class (myself included) is to think seven years out. Although we read Kurzweil’s Law of Accelerating Returns, our natural tendency is to think linearly, not exponentially. One of my favorite Kurzweil illustrations is this:

“Exponentials are quite seductive because they start out sub-linear. We sequenced one ten-thousandth of the human genome in 1990 and two ten-thousandths in 1991. Halfway through the genome project, 7 ½ years into it, we had sequenced 1 percent. People said, “This is a failure. Seven years, 1 percent. It’s going to take 700 years, just like we said.” Seven years later it was done, because 1 percent is only seven doublings from 100 percent — and it had been doubling every year. We don’t think in these exponential terms. And that exponential growth has continued since the end of the genome project. These technologies are now thousands of times more powerful than they were 13 years ago, when the genome project was completed.”1
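Kurzweil’s arithmetic is easy to verify. A toy Python model, a deliberate simplification that assumes exactly one doubling per year starting from one ten-thousandth of the genome in 1990, reproduces the shape of his story:

```python
# Toy model of the genome-project arithmetic quoted above: start at
# 1/10,000 of the genome sequenced in 1990 and double the cumulative
# fraction every year.
fraction, year = 0.0001, 1990
progress = {}
while fraction < 1.0:
    progress[year] = fraction   # fraction sequenced at the start of `year`
    fraction *= 2
    year += 1

# Seven years in, the project has only just crossed 1 percent...
year_one_percent = min(y for y, f in progress.items() if f >= 0.01)
print(year_one_percent)  # 1997
# ...but 1 percent is only seven doublings from 100 percent.
print(year)              # 2004 in this toy model
```

Halfway through, the project looks like a failure at 1 percent, yet in the model it finishes just seven doublings later, which is exactly why linear intuition misreads exponentials.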

So when I hear a policymaker say, “We’re a long way from that,” I cringe. We’re not a long way away from that. The iPhone was introduced on June 29, 2007, not quite ten years ago. The ripple effects from that little technological marvel are hard to catalog. With the smartphone, we have transformed everything from social and behavioral issues to privacy and safety. As my students examine the next possible phase of our thirst for the latest and greatest, AR (and its potential for smartphone-like ubiquity), I want them to ask questions about supporting systems, along with the social and ethical repercussions of these transformations. At the end of it all, I hope that they will walk away with an appreciation for paying attention to what we make and why. For example, why would we make a machine that would take away our jobs? Why would we build a superintelligence? More often than not, I fear the answer is because we can.

Our focus on the technologies mentioned above is just a start. There are more than these, and we shouldn’t forget things like precise genetic engineering techniques such as CRISPR/Cas9 Gene Editing, neuromorphic technologies such as microprocessors configured like brains, the digital genome that could be the key to disease eradication, machine learning, and robotics.

Though they may sound innocuous by themselves, they each have gigantic implications for disruptions to society. The wild card in all of these is how they converge with each other, with results that no one anticipated. One such mutation would be autonomous weapons systems (AI + robotics + machine learning) converging with an aggregation of social media activity to predict, isolate, and eliminate a viral uprising.

From recent articles and research by the Department of Defense, this is no longer theoretical; we are actively pursuing it. I’ll talk more about that next week. Until then, pay attention.

 

1. http://www.bizjournals.com/sanjose/news/2016/09/06/exclusivegoogle-singularity-visionary-ray.htm

Now I know that Kurzweil is right.

 

In a previous blog entitled “Why Kurzweil is probably right,” I made this statement,

“Convergence is the way technology leaps forward. Supporting technologies enable formerly impossible things to become suddenly possible.”

That blog was talking about how we are developing AI systems at a rapid pace. I quoted a WIRED magazine article by David Pierce that was previewing consumer AIs already in the marketplace and some of the advancements on the way. Pierce said that a personal agent is,

“…only fully useful when it’s everywhere, when it can get to know you in multiple contexts—learning your habits, your likes and dislikes, your routine and schedule. The way to get there is to have your AI colonize as many apps and devices as possible.”

Then, I made my usual cautionary comment about how such technologies will change us. And they will. So, if you follow this blog, you know that I throw cold water onto technological promises as a matter of course. I do this because I believe that someone has to.

Right now I’m preparing my collaborative design studio course. We’re going to be focusing on AR and VR, but since convergence is an undeniable influence on our techno-social future, we will have to keep AI, human augmentation, the Internet of Things, and a host of other emerging technologies on the desktop as well. In researching the background for this class, I read three articles by Peter Diamandis for the Singularity Hub website. I’ve written about Peter before, as well. He’s brilliant. He’s also a cheerleader for the Singularity. That being said, these articles, one on the Internet of Everything (IoE/IoT), one on Artificial Intelligence (AI), and another on Augmented and Virtual Reality (AR/VR), are full of promises. Most of what we thought of as science fiction even a couple of years ago is now happening with such speed that Diamandis and his cohorts believe it is imminent in only three years. And by that I mean commonplace.

If that isn’t enough for us to sit up and take notice, then I am reminded of an article from the Silicon Valley Business Journal, another interview with Ray Kurzweil. Kurzweil, of course, has pretty much convinced us all by now that the Law of Accelerating Returns is no longer hyperbole. If anyone thought it was only hype, sheer observation should have brought them to their senses. In this article, Kurzweil gives an excellent illustration of how exponential growth actually plays out, no longer as a theory but as demonstrable practice.

“Exponentials are quite seductive because they start out sub-linear. We sequenced one ten-thousandth of the human genome in 1990 and two ten-thousandths in 1991. Halfway through the genome project, 7 ½ years into it, we had sequenced 1 percent. People said, “This is a failure. Seven years, 1 percent. It’s going to take 700 years, just like we said.” Seven years later it was done because 1 percent is only seven doublings from 100 percent — and it had been doubling every year. We don’t think in these exponential terms. And that exponential growth has continued since the end of the genome project. These technologies are now thousands of times more powerful than they were 13 years ago when the genome project was completed.”

When you combine that with the nearly exponential chaos of hundreds of other converging technologies, the changes to our world and behavior are indeed coming at us like a bullet train. Ask any Indy car driver: when things are happening that fast, you have to be paying attention. But when the input is like a firehose and the motivations are unknown, how on earth do we do that?

Personally, I see this as a calling for design thinkers worldwide. Those in the profession, schooled in the ways of design thinking, have been espousing our essential worth to the realm of wicked problems for some time now. Well, problems don’t get more wicked than this.

Maybe we can design an AI that keeps us from doing stupid things with technologies we can make but whose impact we cannot yet comprehend.


Big-Data Algorithms. Don’t worry. Be happy.


It’s easier for us to let the data decide for us. At least that is the idea behind global digital design agency Huge. Aaron Shapiro is the CEO. He says, “The next big breakthrough in design and technology will be the creation of products, services, and experiences that eliminate the needless choices from our lives and make ones on our behalf, freeing us up for the ones we really care about: Anticipatory design.”

Buckminster Fuller wrote about Anticipatory Design Science, but this is not that. Trust me. Shapiro’s version is about allowing big data, by way of artificial intelligence and neural networks, to become so familiar with us and our preferences that it anticipates what we need to do next. In this vision, I don’t have to decide what to wear or eat, how to get to work, when to buy groceries or gasoline, what color trousers go with my shoes, or when it’s time to buy new shoes. No decisions will be necessary. Interestingly, Shapiro sees this as a good thing. The idea comes from a flurry of activity around something called decision fatigue. What is that? In a nutshell, the theory says that our decision-making capacity is a reservoir that gradually gets depleted the more decisions we make, possibly as a result of body chemistry. After a long string of decisions, we are more likely to make a bad decision or none at all. Things like willpower disintegrate along with our decision-making.

Among the many articles on this topic in the last few months was one from Fast Company, which wrote:

“Anticipatory design is fundamentally different: decisions are made and executed on behalf of the user. The goal is not to help the user make a decision, but to create an ecosystem where a decision is never made—it happens automatically and without user input. The design goal becomes one where we eliminate as many steps as possible and find ways to use data, prior behaviors and business logic to have things happen automatically, or as close to automatic as we can get.”
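To make that logic concrete, here is a toy sketch of an automatic decision of the kind the quote describes. It is my own illustration, not drawn from Shapiro or Fast Company, and every name and data point in it is hypothetical: the system watches how often you buy a staple and reorders on your behalf once the usual interval has passed.

```python
# Hypothetical sketch of anticipatory design: reorder a staple automatically
# based on the average interval between a user's past purchases.
from datetime import date

def average_interval(purchase_dates):
    """Mean number of days between consecutive purchases."""
    gaps = [(b - a).days for a, b in zip(purchase_dates, purchase_dates[1:])]
    return sum(gaps) / len(gaps)

def should_reorder(purchase_dates, today):
    """Decide on the user's behalf: reorder once the usual interval has elapsed."""
    return (today - purchase_dates[-1]).days >= average_interval(purchase_dates)

# Illustrative purchase history: coffee bought roughly every 14 days.
history = [date(2016, 10, 1), date(2016, 10, 15), date(2016, 10, 29)]
print(should_reorder(history, date(2016, 11, 14)))  # True: 16 days since last buy
```

The user never sees a question; the decision “happens automatically and without user input,” exactly as the quote prescribes.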

Supposedly this frees “us up for the ones we really care about.”
My questions are: who decides which decisions are important? And once we are freed from making decisions, will we even know that we have missed one we really care about?

“Google Now is a digital assistant that not only responds to a user’s requests and questions, but predicts wants and needs based on search history. Pulling flight information from emails, meeting times from calendars and providing recommendations of where to eat and what to do based on past preferences and current location, the user simply has to open the app for their information to compile.”

It’s easy to forget that AI as we currently know it goes by the name of Facebook or Google or Apple or Amazon. We tend to think of AI as some ghostly future figure, a bank of servers, or an autonomous robot. It reminds me a bit of my previous post about Nick Bostrom and the development of superintelligence. Perhaps it is a bit like an episode of Person of Interest. As we think about designing systems that think for us and decide what is best for us, it might be a good idea to consider what it might be like to no longer think, as long as we still can.


Election lessons. Beware who you ignore.

It was election week here in America, but unless you’ve been living under a rock for the last eight months, you already know that. Not unlike the Brexit vote earlier this year, a lot of people were genuinely surprised by the outcome. Perhaps most surprising to me is that the people who seem most surprised are the ones who claimed to know, for certain, that the outcome would be otherwise. Why do you suppose that is? There is a lot of finger-pointing and head-scratching going on, but from what I’ve seen so far, none of these so-called experts has a clue why they were wrong.

Most of them are blaming polls for their miscalculations. And it’s starting to look like their error came not in whom they polled but in whom they thought irrelevant and ignored. Many in the media are in denial that their efforts to shape the election may even have fueled the fire for the underdog. What has become of American journalism is shameful. WikiLeaks shows that ninety percent of the media was kissing up to the left, with pre-approved interviews, stories, and marching orders to “shape the narrative.” I don’t care who you were voting for; that kind of collusion is a disgrace for democracy. Call it Pravda. I don’t want to turn this blog into political commentary, but it was amusing to watch them all wearing the stupid hat on Wednesday morning. What I do want to talk about, however, is how we look at data to reach a conclusion.

In a morning-after article on LinkedIn, futurist Peter Diamandis posted the topic, “Here’s what election campaign marketing will look like in 2020.” It was less about the election and more about future tech, with occasional references to the election and campaign processes. He makes five predictions. First is the news flash that “Social media will have continued to explode. [and that] The single most important factor influencing your voting decision is your social network.” Diamandis says that “162 million people log onto Facebook at least once a month.” I agree with the first part of his statement, but what about the other half of the population, and those who don’t share their opinions on politics? A lot of pollsters are looking at the huge disparity between projections and actuals in the 2016 election. They are acknowledging that many people simply weren’t forthcoming in pre-election polling. Those planning to vote for Trump, for example, knew that Trump was a polarizing figure, and they weren’t going to get into it with their friends on social media or even with a stranger taking a poll. And I’m willing to bet that many of the voters who put the election over the top are in the half that isn’t on social media. Just look at the demographics for social media.

Peter Diamandis is a brilliant guy, and I’m not here to pick on him. Many of his predictions are quite conceivable. Mostly he’s talking about an increase in data mining, and AI getting better at learning from it, with a laser focus on the individual. If you add this to programmable avatars, facial recognition improvements, and the Internet of Things, the future means that we are all going to be tracked with increasing levels of detail. And though our face is probably not something we can keep secret, if it all creeps you out, remember that much of this is based on what we choose to share. Fortunately, it will take longer than 2020 for all of these new technologies to read our minds, so until then we still hold the cards. As long as you don’t share your most private thoughts on social media or with pollsters, you’ll keep them guessing.


Yes, you too can be replaced.

Over the past weeks, I have begun to look at the design profession and design education in new ways. It is hard to argue with the idea that all design is future-based. Everything we design is destined for some point beyond now where the thing or space, the communication, or the service will exist. If it already existed, we wouldn’t need to design it. So design is all about the future. For most of the 20th century and the first 16 years of this one, the lion’s share of our work as designers has focused primarily on very near-term, very narrow solutions: a better tool, a more efficient space, a more useful user interface, a more satisfying experience. In fact, the tighter the constraints, the narrower the problem statement, and the greater the opportunity to apply design thinking to resolve it in an elegant and, hopefully, aesthetically or emotionally pleasing way. Such challenges are especially gratifying for seasoned professionals, who have developed an almost intuitive eye for framing these dilemmas in ways that yield novel and efficient solutions. Hence, over the course of years or even decades, the designer amasses a sort of micro-scale, big-data assemblage of prior experiences that helps him or her reframe problems and construct, alone or with a team, satisfactory methodologies and practices to solve them.

Coincidentally, this process of gaining experience is exactly the idea behind machine learning and artificial intelligence. But since computers can amass knowledge by analyzing millions of experiences and judgments, it is theoretically possible that an artificial intelligence could gain this “intuitive eye” to a degree far surpassing the capacity of any individual designer.

That is the idea behind a brash (and annoyingly self-conscious) article from the American Institute of Graphic Arts (AIGA) entitled “Automation Threatens To Make Graphic Designers Obsolete.” Titles like this are a hook, of course. Designers, deep down, assume that they can never be replaced. They believe this because, so far, artificial intelligence lacks understanding, empathy, and emotional verve at its core. We saw this earlier in 2016 when an AI chatbot went Nazi because a bunch of social media hooligans realized that Tay (the name of the Microsoft chatbot) was in learn mode. If you told “her” Nazis were cool, she believed you. It was proof, again, that junk in is junk out.

The AIGA author Rob Peart pointed to Autodesk’s Dreamcatcher software, which is capable of rapidly generating surprisingly creative, albeit roughly detailed, prototypes. Peart features a quote from an executive creative director at techno-ad-agency SapientNitro: “A designer’s role will evolve to that of directing, selecting, and fine tuning, rather than making. The craft will be in having vision and skill in selecting initial machine-made concepts and pushing them further, rather than making from scratch. Designers will become conductors, rather than musicians.”

I like the way we always position new technology in the best possible light: “You’re not going to lose your job. Your job is just going to change.” But tell that to the people who used to write commercial music, for example. The Internet has become a vast clearinghouse for every possible genre of music, all available for a pittance of what it would have cost a musician to write, arrange, and produce a custom piece. It’s called stock. There are stock photographs, stock logos, stock book templates, stock music, stock house plans, and the list goes on. All of these have caused a significant disruption to old methods of commerce, and some would say that these stock versions of everything lack the kind of polish and ingenuity that used to distinguish artistic endeavors. The artists whose jobs they have obliterated refer to the work with a four-letter word.

Now, I confess I have used stock photography and stock music, but I have also used a lot of custom photography and custom music as well. Still, I can’t imagine crossing the line to a stock logo or stock publication design. Perish the thought! Why? Because they look like four-letter words: homogenized, templated, and the world does not need more blah. It’s likely that we also introduced these new forms of stock commerce in the best possible light, as great democratizing innovations that would enable everyone to afford music, or art, or design, and let anyone make, create, or borrow the things that professionals used to do.

As artificial intelligence becomes better at composing music, writing blogs and creating iterative designs (which it already does and will continue to improve), we should perhaps prepare for the day when we are no longer musicians or composers but rather simply listeners and watchers.

But let’s put that in the best possible light: Think of how much time we’ll have to think deep thoughts.

 
