Category Archives: Design topics

Commentary on the world of design. Broad topic.

On utopia and dystopia. Part 2.

From now on we paint only pretty pictures. Get it?

A couple of blarticles (blog-like articles) caught my eye this week. Interestingly, the two blarticles reference the same work. There was a big brouhaha a couple of years ago about how dystopian science fiction and design fiction with dystopian themes were somehow bad for us and how people were getting sick of them. Based on the most recent lists of bestselling books and films, that no longer seems to be the case. Nevertheless, some science fiction writers, like Cory Doctorow (a fine author and Hugo winner), think that more utopian futures might be better at influencing public policy. As he wrote in Boing Boing earlier this month,

“Science fiction writers have a long history of intervening/meddling in policy, but historically this has been in the form of right-wing science fiction writers…”

Frankly, I have no idea what this has to do with politics, as there must certainly be more left-handed authors and filmmakers in Hollywood than their right-sided counterparts. He continues:

“But a new, progressive wing of design fiction practicioners [sic] are increasingly involved in policy questions…”

Doctorow’s article cites a long piece for Slate, by the New America Foundation’s Kevin Bankston. Bankston says,

“…a stellar selection of 64 bestselling sci-fi writers and visionary filmmakers, has tasked itself with imagining realistic, possible, positive futures that we might actually want to live in—and figuring out [how] we can get from here to there.”

That’s great, because, as I said, I am all about making alternative futures legible for people to consider and contemplate. In the process, however, I don’t think we should give dystopia short shrift. The problem with utopias is that they tend to be prescriptive, in other words, “This is a better future because I say so.”

The futures I conjure up are neither utopian nor dystopian, but I do try to surface real concerns so that people can decide for themselves, kind of like a democracy. History has proven that regardless of our utopian ideals we more often than not mess things up. I don’t want it to be progressive, liberal, conservative or right wing, and I don’t think it should be the objective of science fiction or entertainment to help shape these policies especially when there is an obvious political purpose. It’s one thing to make alternative futures legible, another to shove them at us.

As long as it’s fiction and entertaining, utopias are great, but let’s not kid ourselves. Utopia, and to some extent dystopia, are individual perspectives. Frankly, I don’t want someone telling me that one future is better for me than another. In fact, that almost borders on dystopia in my thinking.

I’m not sure whether Bruce Sterling was answering Cory Doctorow’s piece, but Sterling’s stance on the issue is sharper and more insightful. Sterling is acutely aware that today is the focus. We look at futures, and we realize there are steps we need to take today to make tomorrow better. I recommend his post. Here are a couple of choice clips:

“The “better future” thing is jam-tomorrow and jam-yesterday talk, so it tends to become the enemy of jam today. You’re better off reading history, and realizing that public aspirations that do seem great, and that even meet with tremendous innovative success, can change the tenor of society and easily become curses a generation later. Not because they were ever bad ideas or bad things to aspire to or do, but because that’s the nature of historical causality. Tomorrow composts today.”

“If you like doing incredible things, because you’re of a science fictional temperament, then you should frankly admit your fondness for the way-out and the wondrous, and not disingenuously pretend that it’s somehow bound to improve the lot of the mundanes.”

Prettier pictures are not going to save us. Most of the world needs a wake-up call, not another dream.

In my humble opinion.


How science fiction writers’ “design fiction” is playing a greater role in policy debates

Various sci-fi projects allegedly creating a better future


On utopia and dystopia. Part 1.

A couple of interesting articles cropped up in the past week or so coming out of the WIRED Business Conference. The first was an interview with Jennifer Doudna, a pioneer of CRISPR/Cas9, the gene-editing technique that makes editing DNA nearly as simple as splicing a movie together. That is, if you’re a geneticist. According to the interview, most of this technology is in use in crop design, for things like longer-lasting potatoes or wheat that doesn’t mildew. But Doudna knows that this is a potential Pandora’s box.

“In 2015, Doudna was part of a broad coalition of leading biologists who agreed to a worldwide moratorium on gene editing to the “germ line,” which is to say, edits that get passed along to subsequent generations. But it’s legally non-binding, and scientists in China have already begun experiments that involve editing the genome of human embryos.”

Crispr May Cure All Genetic Disease—One Day

Super-babies are just one of the potential ways to misuse Crispr. I blogged a longer and more diabolical list a couple of years ago.

Meddling with the primal forces of nature.

In her recent interview, though, Doudna focused on the more positive effects on farming, things like rice and tomatoes.

You may not immediately see the connection, but there was a related story from the same conference, where WIRED interviewed Jonathan Nolan and Lisa Joy, co-creators of the HBO series Westworld. If you haven’t seen Westworld, I recommend it, if only for Anthony Hopkins’ performance. As far as I’m concerned, Anthony Hopkins could read the phone book, and I would be spellbound.

At any rate, the article quotes:

“The first season of Westworld wasted no time in going from “hey cool, robots!” to “well, that was bleak.” Death, destruction, android torture—it’s all been there from the pilot onward.”

Which pretty much sums it up. According to Nolan,
“We’re inventing cautionary tales for ourselves…”

“And Joy sees Westworld, and sci-fi in general, as an opportunity to talk about what humanity could or should do if things start to go wrong, especially now that advancements in artificial intelligence technologies are making things like androids seem far more plausible than before. “We’re leaping into the age of the unfathomable, the time when machines [can do things we can’t],”

Joy said.

Westworld’s Creators Know Why Sci-Fi Is So Dystopian

To me, this sounds familiar. It is the essence of my particular brand of design fiction. I don’t always set out to make it dystopian, but if we look at the way things seem to naturally evolve, virtually every technology once envisioned as a benefit to humankind ends up with someone misusing it. To look at any potentially transformative tech and not ask, “Transform into what?” is simply irresponsible. We love to sell our ideas on their promise of curing disease, saving lives, and ending suffering, but the technologies that we are designing today have epic downsides that many technologists do not even understand. Misuse happens so often that I’ve begun to see us as reckless if we don’t anticipate these repercussions in advance. It’s the subject of a new paper that I’m working on.

In the meantime, it’s important that we pay attention and demand that others do, too.

There’s more from the science fiction world on utopias vs. dystopias, and I’ll cover that next week.




An AI as President?


Back on May 19th, before I went on holiday, I promised to comment on an article that appeared that week advocating that we would be better off with artificial intelligence (AI) as President of the United States. Joshua Davis authored the piece, “Hear me out: Let’s Elect An AI As President,” for the business section of WIRED online. Let’s start out with a few quotes.

“An artificially intelligent president could be trained to maximize happiness for the most people without infringing on civil liberties.”

“Within a decade, tens of thousands of people will entrust their daily commute—and their safety—to an algorithm, and they’ll do it happily…The increase in human productivity and happiness will be enormous.”

Let’s start with the word happiness. What is that, anyway? I’ve seen it around in several discourses about the future: somehow we have to start focusing on human happiness above all things. But what makes me happy and what makes you happy may very well be different things. Then there is the frightening idea that it is the job of government to make us happy! There are a lot of folks out there who think the government should give us a guaranteed income, pay for our healthcare, and now, apparently, make us happy as well. If you haven’t noticed from my previous blogs, I am not a progressive. If you believe that government should undertake the happy challenge, you had better hope that its idea of happiness coincides with your own. Gerd Leonhard, a futurist whose work I respect, says that there are two types of happiness: the first is hedonic (pleasure), which tends to be temporary; the other is eudaimonic happiness, which he defines as human flourishing.1 I prefer the latter, as it is likely to be more meaningful. Meaning is rather crucial to well-being and purpose in life. I believe that we should be responsible for our own happiness. God help us if we leave it up to a machine.

This brings me to my next issue with this insane idea. Davis suggests that by simply not driving, there will be an enormous increase in human productivity and happiness. According to the website overflow data,

“Of the 139,786,639 working individuals in the US, 7,000,722, or about 5.01%, use public transit to get to work according to the 2013 American Communities Survey.”
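For what it’s worth, the quoted share checks out. A quick sanity check on the survey’s figures:

```python
# Sanity check of the transit-commuter share quoted from the
# 2013 American Communities Survey.
workers = 139_786_639
transit_commuters = 7_000_722
share = transit_commuters / workers
print(f"{share:.2%}")  # → 5.01%
```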

Are those 7 million working individuals who don’t drive happier and more productive? The survey should have asked, but I’m betting the answer is no. Davis also assumes that everyone will be able to afford an autonomous vehicle. Maybe providing every American with an autonomous vehicle is also the job of the government.

Where I agree with Davis is that we will probably abdicate our daily commute to an algorithm and do it happily. Maybe this is the most disturbing part of his argument. As I am fond of saying, we are sponges for technology, and we often adopt new technology without so much as a thought toward the broader ramifications of what it means to our humanity.

There are sober people out there advocating that we must start to abdicate our decision-making to algorithms because we have too many decisions to make. They are concerned that the current state of affairs is simply too painful for humankind. If you dig into the rationale that these experts are using, many of them are motivated by commerce. Already Google and Facebook and the algorithms of a dozen different apps are telling you what you should buy, where you should eat, who you should “friend” and, in some cases, what you should think. They give you news (real or fake), and they tell you this is what will make you happy. Is it working? Agendas are everywhere, but very few of them have you in the center.

As part of his rationale, Davis cites the proven ability of AI to beat the world’s Go champions over and over and over again, and to find melanomas better than board-certified dermatologists.

“It won’t be long before an AI is sophisticated enough to implement a core set of beliefs in ways that reflect changes in the world. In other words, the time is coming when AIs will have better judgment than most politicians.”

That seems like grounds to elect one as President, right? In fact, it is just another way for us to take our eye off the ball, to subordinate our autonomy to more powerful forces in the belief that technology will save us and make us happier.

Back to my previous point, that’s what is so frightening. It is precisely the kind of argument that people buy into. What if the new AI President decides that we will all be happier if we’re sedated, and then using executive powers makes it law? Forget checks and balances, since who else in government could win an argument against an all-knowing AI? How much power will the new AI President give to other algorithms, bots, and machines?

If we are willing to give up the process of purposeful work to make a living wage in exchange for a guaranteed income, to subordinate our decision-making to have “less to think about,” to abandon reality for a “good enough” simulation, and believe that this new AI will be free of the special interests who think they control it, then get ready for the future.

1. Leonhard, Gerd. Technology vs. Humanity: The Coming Clash between Man and Machine. p112, United Kingdom: Fast Future, 2016. Print.


Power sharing?

Just to keep you up to speed: in the race toward superintelligence or ubiquitous AI, everything is on schedule or ahead of schedule.

If you read this blog or you are paying attention at any level, then you know the fundamentals of AI. But for those of you who don’t, here are the basics. Artificial intelligence comes from processing and analyzing data. Big data. Programmers feed a gazillion linked-up computers (CPUs) with algorithms that can sort this data and make predictions. This process is what is at work when the Google search engine makes suggestions concerning what you are about to key into the search field. These are called predictive algorithms. If you want to look at pictures of cats, then someone has to task the CPUs with learning what a cat looks like as opposed to a hamster, then scour the Internet for pictures of cats and deliver them to your search. The process of teaching the machine what a cat looks like is called machine learning.

There is also an algorithm that watches your online behavior. That’s why, after checking out sunglasses online, you start to see a plethora of ads for sunglasses on just about every page you visit. Similar algorithms can predict where you will drive today and when you are likely to return home. There is AI that knows your exercise habits and a ton of other physiological data about you, especially when you’re sharing your Fitbit or other wearable data with the Cloud. Insurance companies are extremely interested in this data, so that they can give discounts to “healthy” people and penalize the not-so-healthy. Someday they might also monitor other “behaviors” that they deem not in your best interests (or theirs). Someday, especially if we have a “single-payer” health care system (aka government healthcare), this data may be required before you are insured.

Before we go too far into the dark side (which is vast and deep), AI can also search all the cells in your body, identify which ones are dangerous, and target them for elimination. AI can analyze a whole host of things that humans could overlook. It can put together predictions that could save your life.
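The predictive-algorithm idea above can be sketched in a few lines. This is only a toy illustration with made-up data, nothing like Google’s real system: count past queries, then suggest the most frequent ones matching whatever prefix the user has typed so far.

```python
from collections import Counter

# Toy autocomplete: "learn" from a log of past queries, then predict
# likely completions for a typed prefix. Data is purely illustrative.
past_queries = [
    "cat pictures", "cat videos", "cat pictures", "car insurance",
    "cat pictures", "car rental", "weather today",
]
counts = Counter(past_queries)

def suggest(prefix, k=3):
    """Return up to k past queries starting with prefix, most frequent first."""
    matches = [(q, n) for q, n in counts.items() if q.startswith(prefix)]
    matches.sort(key=lambda pair: (-pair[1], pair[0]))
    return [q for q, _ in matches[:k]]

print(suggest("ca"))  # → ['cat pictures', 'car insurance', 'car rental']
```

Real predictive systems learn statistical models from billions of queries rather than counting literal strings, but the shape of the idea is the same: past behavior ranks the guesses.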

Google’s chips stacked up and ready to go. Photo from WIRED.

Now, with all that AI background behind us: this past week, something called Google I/O went down. WIRED calls it Google’s annual State-of-the-Union address. There, Sundar Pichai unveiled something called TPU 2.0, or Cloud TPU. This is something of a breakthrough because, in the past, the AI process I just described, even though lightning fast and almost transparent, required all those CPUs, a ton of space (server farms), and gobs of electricity. Now, Google (and others) are packing this processing into chips. These are proprietary to Google. According to WIRED,

“This new processor is a unique creation designed to both train and execute deep neural networks—machine learning systems behind the rapid evolution of everything from image and speech recognition to automated translation to robotics…

…says Chris Nicholson, the CEO, and founder of a deep learning startup called Skymind. “Google is trying to do something better than Amazon—and I hope it really is better. That will mean the whole market will start moving faster.”

Funny, I was just thinking that the market is not moving fast enough. I can hardly wait until we have a Skymind.

“Along those lines, Google has already said that it will offer free access to researchers willing to share their research with the world at large. That’s good for the world’s AI researchers. And it’s good for Google.”

Is it good for us?

This sets up another discussion (in 3 weeks) about a rather absurd opinion piece in WIRED about why we should have an AI as President. These things start out as absurd, but sometimes don’t stay that way.


Humanity is not always pretty.

The Merriam-Webster online dictionary, among several options, gives this definition for human: “[…]representative of or susceptible to the sympathies and frailties of human nature ⟨human kindness⟩ ⟨a human weakness⟩.”

Then there is humanity, which can mean either the collective of humans, or “[…]the fact or condition of being human; human nature,” or benevolence, as in compassion and understanding. For the latter, it seems that we are eternal optimists when it comes to describing ourselves. Hence, we often refer to the humanity of man as one of our most redeeming traits. At the same time, if we query human nature, we can get “[…]ordinary human behavior, esp considered as less than perfect.” This is a diplomatic way of acknowledging that flaws are a characteristic of our nature. When we talk about our humanity, we presumptively leave out our propensity for greed, pride, and the other deadly sins. We like to think of ourselves as basically good.

If we are honest with ourselves, however, we know this is not always the case, and if we push the issue, we would have to acknowledge that it is not even the case most of the time. Humanity is primarily driven by the kinds of things we don’t like to see in others but rarely see in ourselves. But this is supposed to be a blog about design and tech, isn’t it? So I should get to the point.

A recent article on the blog site QUARTZ, Sarah Kessler’s “Algorithms are failing Facebook. Can humanity save it?”, poses an interesting question, and one that I’ve raised in the past. We like to think that technology will somehow resolve all of our basic human failings. Recognizing this, back in 1968 Stewart Brand introduced the first Whole Earth Catalog with,

“We are as gods and might as well get good at it.”

After almost 50 years, it seems justified to ask whether we’ve made any improvements whatsoever. The question is pertinent in light of Kessler’s article on the advent of Facebook Live. With this particular FB feature, you stream whatever video you want, and it goes out to the whole world instantly. Of course, we need this, right? And we need this now, right? Of course we do.

Like most of these whiz-bang technologies, it is designed to attract millennials with “Wow! Cool.” But policing it is not a simple task. How would a company like Facebook police the potentially billions of feeds coming into the system? The answer (as is increasingly the case) is AI: artificial intelligence. Algorithms will recognize and determine what is and is not acceptable to go streaming out to the world. And apparently, Zuck and company were pretty confident that they could pull this off.

Let’s get this thing online.

Maybe not. Kessler notes that,

“According to a Wall Street Journal tally, more than 50 acts of violence, including murders, suicides, and sexual assault, have been broadcast over Facebook Live since the feature launched 13 months ago.”

Both articles tell how Facebook’s Mark Zuckerberg put a team on “lockdown” to rush the feature to market. What was the hurry, one might ask? And Kessler does ask.

“Let’s make sure there’s a humanitarian angle. Millennials like that.”

After these 13 months of spurious events, the tipping point came with a particularly heinous act that ended up circulating on FB Live for nearly 24 hours. It involved a 20-year-old Thai man named Wuttisan Wongtalay, who filmed himself flinging his 11-month-old daughter off the side of a building with a noose around her neck. Then, off-camera, he killed himself.

“In a status update on his personal Facebook profile, CEO Mark Zuckerberg, himself the father of a young girl, pledged that the company would, among other things, add 3,000 people to the team that reviews Facebook content for violations of the company’s policies.”

Note that the answer is not to remove the feature until things could be sorted out or to admit that the algorithms are not ready for prime time. The somewhat surprising answer is more humans.

Kessler, quoting the Wall Street Journal article, states,

“Facebook, in a civic mindset, could have put a plan in place for monitoring Facebook Live for violence, or waited to launch Facebook Live until the company was confident it could quickly respond to abuse. It could have hired the additional 3,000 human content reviewers in advance.

But Facebook ‘didn’t grasp the gravity of the medium,’ an anonymous source familiar with Facebook’s Live’s development told the Wall Street Journal.”

Algorithms are code that helps machines learn. They look at a lot of data, say pictures of guns, and then they learn to identify what a gun is. They are not particularly good at context. They don’t know, for example, whether your video is “Hey, look at my new gun?” or “Die, scumbag.”
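The context problem can be sketched with a deliberately naive example. The keyword list and captions here are hypothetical, nothing like Facebook’s real moderation system; the point is that a filter that only matches learned signals cannot read intent, so both captions trip the same wire.

```python
# A deliberately naive "violence detector": it knows which words it has
# seen associated with violence, but nothing about the context around them.
VIOLENT_TERMS = {"gun", "die", "kill"}

def flag_for_review(caption):
    """Flag a caption if it contains any learned violent term."""
    words = {w.strip("?!.,").lower() for w in caption.split()}
    return bool(words & VIOLENT_TERMS)

# A harmless caption and a threat get the same verdict.
print(flag_for_review("Hey, look at my new gun?"))  # → True
print(flag_for_review("Die, scumbag."))             # → True
```

Real systems use far richer statistical models than a keyword list, but the underlying weakness is the same: they match patterns in the data rather than understanding what the person means.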

So in addition to algorithms, Zuck has decided that he will put 3,000 humans on the case. Nevertheless, writes Kessler,

“[…]they can’t solve Facebook’s problems on their own. Facebook’s active users comprise about a quarter of the world’s population and outnumber the combined populations of the US and China. Adding another 3,000 workers to the mix to monitor content simply isn’t going to make a meaningful difference. As Zuckerberg put it during a phone call with investors, “No matter how many people we have on the team, we’ll never be able to look at everything.”[Emphasis mine.]

So, I go back to my original question: We need this, right?

There are two things going on here. First is the matter of Facebook not grasping the gravity of the medium (which I see as inexcusable), and the second is how the whole thing came around full circle. Algorithms are supposed to replace humans. Instead, we added 3,000 more jobs. Unfortunately, that wasn’t the plan. But it could have been.

Algorithms are undoubtedly here to stay, but not necessarily for every application, and humans are still better at interpreting human intent than machines are. All of this underscores my position from previous blogs: most companies, when the issue is whether they get to or stay on top, will not police themselves. They’ll wait until it breaks and then fix it, or try to. The problem is that as algorithms get increasingly complicated, fixing them gets just as tricky.

People are working on this so that designers can see what went wrong, but the technology is not there yet.

And it is not just so that we can determine the difference between porn and breastfeeding. Algorithms are starting to make a lot of high-stakes decisions, in autonomous vehicles, autonomous drones, or autonomous (fill in the blank). Until the people who are literally racing each other to be first step back and ask the tougher questions, these types of unanticipated consequences will be commonplace, especially when prudent actions like stopping to assess are rarely considered. No one wants to stop and assess.

Kessler says it well,

“The combination may be fleeting—the technology will catch up eventually—but it’s also quite fitting given that so many of the problems Facebook is now confronting, from revenge porn to fake news to Facebook Live murders, are themselves the result of humanity mixing with algorithms.” [Emphasis mine.]

We can’t get past that humanity thing.



Are you listening to the Internet of Things? Someone is.

As usual, it is a toss-up for what I should write about this week. Is it WIRED’s article on the artificial womb, FastCo’s article on design thinking, the design-fiction world of the movie The Circle, or WIRED’s warning about apps using your phone’s microphone to listen for ultrasonic marketing ‘beacons’ that you can’t hear? Tough call, but I decided on a different WIRED post that talked about Zuckerberg’s vision of the future at F8. Actually, the F8 future is a bit like The Circle anyway, so I might be killing two birds with one stone.

At first, I thought the article, titled “Look to Zuck’s F8, Not Trump’s 100 Days, to See the Shape of the Future,” would be just another Trump-bashing opportunity (which I sometimes think WIRED prefers more than writing about tech), but not so. It was about tech, mostly.

The article, written by Zachary Karabell, starts out with this quote,

“While the fate of the Trump administration certainly matters, it may shape the world much less decisively in the long-term than the tectonic changes rapidly altering the digital landscape.”

I believe this statement is dead-on, but I would include the entire “technological” landscape. The stage is becoming increasingly “set,” as the article continues,

“At the end of March, both the Senate and the House voted to roll back broadband privacy regulations that had been passed by the Federal Communications Commission in 2016. Those would have required internet service providers to seek customers’ explicit permission before selling or sharing their browsing history.”

Combine that with,

“Facebook[s] vision of 24/7 augmented reality with sensors, camera, and chips embedded in clothing, everyday objects, and eventually the human body…”

and the looming possibility of ending net neutrality, and we could be setting ourselves up for the real Circle future.

“A world where data and experiences are concentrated in a handful of companies with what will soon be trillion dollar capitalizations risks being one where freedom gives way to control.”

To add kindling to this thicket, there is the Quantified Self movement (QS). According to their website,

“Our mission is to support new discoveries about ourselves and our communities that are grounded in accurate observation and enlivened by a spirit of friendship.”

Huh? OK. But they want to do this using “self-tracking tools.” This means sensors. They could be in wearables, implantables, or ingestibles. Essentially, they track you. Presumably, this is all so that we become more self-aware and more knowledgeable about ourselves and our behaviors. Health, wellness, anxiety, depression, concentration; the list goes on. Like many emerging movements that are linked to technologies, we open the door through health care or longevity, because it is an easy argument that being healthy or fit is better than being sick and out of shape. But that is all too simple. QS says that we gain “self knowledge through numbers,” and in the digital age that means data. In a climate that is increasingly less regulatory about what data can be shared and with whom, this could be the beginnings of the perfect storm.

As usual, I hope I’m wrong.





Augmented evidence. It’s a logical trajectory.

A few weeks ago I gushed about how my students killed it at a recent guerrilla future enactment of a ubiquitous augmented reality (AR) future. Shortly after that, Mark Zuckerberg announced the Facebook AR platform. The platform uses the camera on your smartphone and, according to a recent WIRED article, transforms your smartphone into an AR engine.

Unfortunately, as we all know (and so does Zuck), the smartphone isn’t currently much of an engine. AR requires a lot of processing, and so does the AI that allows it to recognize the real world so it can layer additional information on top of it. That’s why Facebook (and others) are building their own neural network chips, so that the platform doesn’t have to run to the Cloud to access the processing required for artificial intelligence (AI). That will inevitably happen, which will make the smartphone experience more seamless, but that’s just part of the challenge for Facebook.

If you add to that the idea that we become even more dependent on looking at our phones while we are walking or worse, driving, (think Pokemon GO), then this latest announcement is, at best, foreshadowing.

As the WIRED article continues, tech writer Brian Barrett talked to Blair MacIntyre of Georgia Tech, who says,

“The phone has generally sucked for AR because holding it up and looking through it is tiring, awkward, inconvenient, and socially unacceptable,” says MacIntyre. Adding more of it doesn’t solve those issues. It exacerbates them. (The exception might be the social acceptability part; as MacIntyre notes, selfies were awkward until they weren’t.)

That last part is an especially interesting point. I’ll have to come back to that in another post.

My students did considerable research on exactly this kind of early-infancy stage that technologies pass through on their road to ubiquity. In another WIRED article, even Zuckerberg admitted,

“We all know where we want this to get eventually,” said Zuckerberg in his keynote. “We want glasses, or eventually contact lenses, that look and feel normal, but that let us overlay all kinds of information and digital objects on top of the real world.”

So there you have it. Glasses are the end game, but, as my students agreed, contact lenses not so much. Think about it: if you didn’t have to stick a contact lens in your eyeball, you wouldn’t. And even if you solved the problem of computing inside a wafer-thin lens (and the myriad problems with heat and in-eye time), ubiquitous contact lenses are much farther away, if they ever arrive.

Student design team from Ohio State’s Collaborative Studio.

This is why I find my students’ solution so much more elegant and a far more logical trajectory. According to Barrett,

“The optimistic timeline for that sort of tech, though, stretches out to five or 10 years. In the meantime, then, an imperfect solution takes the stage.”

My students locked it down to seven years.

Finally, Zuckerberg made this statement:

“Augmented reality is going to help us mix the digital and physical in all new ways,” said Zuckerberg at F8. “And that’s going to make our physical reality better.”

Except that Zuck’s version of better and mine or yours may not be the same. Exactly what is wrong with reality anyway?

If you want to see the full-blown presentation of what my students produced, you can view it at

Note: Currently, the AugHumana experience is superior on Google Chrome. If you are a Safari or Firefox purist, you may have to wait for the page to load (up to 2 minutes). We’re working on this, so just use Chrome this time. We hope to have it fixed soon.


Autonomous Assumptions

I’m writing about a recent post from futurist Amy Webb. Amy is getting very political lately, which is a real turn-off for me, but she still has her ear to the rail of the future, so I will try to be more tolerant. Amy carried a paragraph from an article entitled “If you want to trust a robot, look at how it makes decisions” from The Conversation, an eclectic “academic rigor, journalistic flair” blog site. The author, Michael Fisher, a professor of computer science at the University of Liverpool, says,

“When we deal with another human, we can’t be sure what they will decide but we make assumptions based on what we think of them. We consider whether that person has lied to us in the past or has a record for making mistakes. But we can’t really be certain about any of our assumptions as the other person could still deceive us.

Our autonomous systems, on the other hand, are essentially controlled by software so if we can isolate the software that makes all the high-level decisions – those decisions that a human would have made – then we can analyse the detailed working of these programs. That’s not something you can or possibly ever could easily do with a human brain.”

Fisher thinks that might make autonomous systems more trustworthy than humans. He says that through software analysis we can be almost certain that the software controlling our systems will never make bad decisions.
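Fisher's idea can be illustrated with a toy sketch of my own (this example is mine, not from his article): if the high-level decision logic is isolated into one function with a bounded input space, we can enumerate every case and prove a safety property holds. The `decide` function and the 50-meter braking threshold below are invented for illustration.

```python
# A minimal sketch of "analysing the software that makes the high-level
# decisions": isolate the decision logic, then exhaustively check a
# discretised input space for a safety property. You could never do this
# with a human decision-maker.

from itertools import product

def decide(obstacle_ahead: bool, distance_m: float) -> str:
    """Toy high-level decision logic for an autonomous vehicle."""
    if obstacle_ahead and distance_m < 50.0:
        return "brake"
    if obstacle_ahead:
        return "slow"
    return "cruise"

def verify_never_harmful() -> bool:
    """Check every combination: the system must always brake when an
    obstacle is dangerously close (under 50 m in this toy model)."""
    for obstacle, distance in product([True, False], range(0, 200)):
        action = decide(obstacle, float(distance))
        if obstacle and distance < 50 and action != "brake":
            return False  # found a case where the safety rule is violated
    return True

print(verify_never_harmful())  # prints True: the property holds in every checked case
```

Real verification tools (model checkers, theorem provers) do something far more rigorous than this brute-force loop, but the principle is the same: the software's decisions, unlike a human's, can be inspected case by case.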

There is a caveat.

“The environments in which such systems work are typically both complex and uncertain. So while accidents can still occur, we can at least be sure that the system always tries to avoid them… [and] we might well be able to prove that the robot never intentionally means to cause harm.”

That’s comforting. But OK: computers fly and land airplanes, they make big decisions about air traffic, they drive cars with people in them, and they control much of our power grid and our missile defense, too. So why should we worry? It is a matter of definitions. The terms we use to describe new technologies clearly have different interpretations. How do you define a bad decision? Fisher says,

“We are clearly moving on from technical questions towards philosophical and ethical questions about what behaviour we find acceptable and what ethical behaviour our robots should exhibit.”

If you have programmed an autonomous soldier to kill the enemy, is that ethical? Assuming that the Robocop can differentiate between good guys and bad guys, you have nevertheless opened the door to autonomous destruction. In the case of an autonomous soldier in the hands of a bad actor, you may be the enemy.

My point is this: it may matter less whether we understand how the software works and whether it is reliable than who programmed the bot in the first place. In my graphic novel, The Lightstream Chronicles, there are no bad robots (I call them synths), but occasionally bad people get hold of the good synths and make them do bad things. They call that twisting. It’s illegal, but of course, that doesn’t stop it. Criminals do it all the time.

You see, even in the future some things never change. In the words of Aldous Huxley,

“Technological progress has merely provided us with more efficient means for going backwards.”



Heady stuff.

Last week we talked about how some researchers and scientists on the cutting edge are devising guidelines to try to ensure that potentially transformative technologies (like AI) remain safe and beneficial rather than becoming a threat to humanity. And then there are industries (like nanotech) that have already blown past any attempt at a meaningful review: their materials now appear in thousands of consumer products, nobody knows whether they’re safe, and the companies that produce them don’t even have to tell us that nanomaterials are part of the composition.

This week I’m going to talk about why I look askance at transformative technologies. Maybe it is because I am a writer at heart. Fiction, specifically science fiction, has captured my attention since childhood. It is my genre of choice. Now that nearly all of science-based science fiction is no longer fiction, our tendency is to think that the only thing left to do is react or adapt. I can understand this: you can’t isolate a single technology as a thing, you can’t identify precisely where it started or how it morphed into what it is. Technologies converge, they become systems, and systems are dauntingly complex. As humans, we create things that become systems. Even in non-digital times, the railroad ushered in a system so vastly complex that we had to invent other things just to deal with it, like standardized time. What good was a train if it wasn’t on time? And what good was your time if it wasn’t the same as my time?

Fast forward. Does the clock have any behavioral effect in your life?

My oft-quoted scholars at ASU, Allenby and Sarewitz, see things like trains as level one technologies. They spawn systems in the level two realm that are often far more intricate than figuring out how to get this train contraption to run on rails across the United States.

So the nature of convergence and the resulting complexity of systems is one reason for my wariness of transformative tech. Especially now that we are building things we don’t understand how they work. We are inventing things that don’t need us to teach them, and that means we can’t be sure what they are learning or how. If we can barely understand the complexity of the system that has grown up around the airline industry (which we at one time inherently grasped), how are we going to understand systems that spring up around inventions that, at their core, do things we understand in ways we don’t?

The second reason is human nature. Your basic web dictionary defines the sociology of human nature as: “[…]the character of human conduct, generally regarded as produced by living in primary groups.” Appreciating things like love and compassion, music and art, consciousness, thought, languages and memory are characteristics of human nature. So are evil and vice, violence and hatred, the quest for power and greed. The latter have a tendency to undermine our inventions for good. Sometimes they are our downfall.

With history as our teacher, if we go blindly forward paying little attention to reason one, the complexity of systems, or reason two, the potential for bad actors, or both, that does not bode well.

I’ve been rambling a bit, so I have to wrap this up. I’ve taken the long way around to say that if you look at all this tech and the unimaginable scope of the systems we have created, and conclude that the only thing left to do is react or adapt, that is not the case.

I may see the dark cloud behind every silver lining, but that enables me to bring an umbrella on occasion.

Paying attention to the seemingly benign and insisting on a meaningful review of that which we don’t fully understand is the first step. It may seem as though it will be easier to adapt, but I don’t think so.

I guess that’s the reason behind this blog, behind my graphic novel, and my ongoing research and activism through design fiction. If you’re not paying attention, then I’ll remind you.


The right thing to do. Remember that idea?

I’ve been detecting some blowback recently regarding all the attention surrounding emerging AI, its near-term effect on jobs, and its long-term impact on humanity. Having an anticipatory mindset toward artificial intelligence is just the logical thing to do. As I have said before, designing a car without a braking system would be foolish. Anticipating the eventuality that you might need to slow down or stop the car is just good design. Nevertheless, there are a lot of people, important people in positions of power, who think this is a lot of hooey. They must think that human ingenuity will address any unforeseen circumstances, that science is always benevolent, that stuff like AI is “a long way off,” that the benefits outweigh the downsides, and that all people are basically good. I am disappointed that this includes our Treasury Secretary Steve Mnuchin. WIRED carried the story and so did my go-to futurist Amy Webb. In her newsletter Amy states,

“When asked about the future of artificial intelligence, automation and the workforce at an Axios event, this was Mnuchin’s reply: ‘It’s not even on our radar screen,’ he said, adding that significant workforce disruption due to AI is ‘50 to 100’ years away. ‘I’m not worried at all’”

Sigh! I don’t care what side of the aisle you’re on; that’s just plain naive. Turning a blind eye to potentially transformative technologies is also dangerous. Others are skeptical of any regulation (perhaps rightly so) that stifles innovation and progress. But safeguards and guidelines are not that. They are well-considered recommendations designed to protect while facilitating research and exploration. On the other side of the coin, they are also not laws, which means that if you don’t want to follow them, you don’t have to.

Nevertheless, I was pleased to see the relatively comprehensive set of AI principles that emerged from the Asilomar Conference I blogged about a couple of weeks ago. The 2017 Asilomar conference, organized by The Future of Life Institute,

“…brought together an amazing group of AI researchers from academia and industry, and thought leaders in economics, law, ethics, and philosophy for five days dedicated to beneficial AI.”

The gathering generated the Asilomar AI Principles, a remarkable first step on the eve of an awesome technological power. None of these people, from the panel I highlighted in the last blog, are anxious for regulation, but at the same time, they are aware of the enormous potential for bad actors to undermine whatever beneficial aspects of the technology might surface. Despite my misgivings, an AGI is inevitable. Someone is going to build it, and someone else will find a way to misuse it.

There are plenty more technologies that pose questions. One is nanotechnology. Unlike AI, Hollywood doesn’t spend much time painting nanotechnological dystopias; perhaps that, along with the fact that they’re invisible to the naked eye, lets the little critters slip under the radar. While researching a paper for another purpose, I decided to look into nanotechnology to see what kinds of safeguards and guidelines are in place for that rapidly emerging technology. There are clearly best practices followed by reputable researchers, scientists, and R&D departments, but it was especially disturbing to find out that none of these are mandates. That matters, because thousands of consumer products already use nanotechnology, including food, cosmetics, clothing, electronics, and more.

A nanometer is very small. Nanotech concerns itself with creations in the 100nm range and below, roughly a thousand times thinner than a human hair. In the Moore’s Law race, nanothings are the next frontier in cramming data onto a computer chip, or implanting them into our brains or living cells. Because of their size, however, nanoparticles can also be inhaled, absorbed through the skin, flushed into the water supply, and leached into the soil. We don’t know what happens if we aggregate large numbers of nanoparticles, or differing combinations of them, in our bodies. We don’t even know how to test for it.

And, get ready: currently, there are no regulations. That means manufacturers do not need to disclose it, and there are no laws to protect the people who work with it. Herein we have a classic example of bad decisions in the present that make for worse futures. Imagine the opposite: anticipation of what could go wrong, and sound industry intervention at a scale that pre-empts government intervention or the dystopian scenarios that the naysayers claim are impossible.
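The scale comparison above is easy to sanity-check. The hair width below is an assumed typical value (hair diameters vary widely, roughly 17 to 180 micrometers); the 100nm figure is the upper bound of the nanoscale mentioned in the text.

```python
# Back-of-the-envelope check of the nanoscale comparison.
# Assumption: a typical human hair is about 75 micrometers (75,000 nm) wide.

hair_width_nm = 75_000   # ~75 µm, a commonly cited typical hair diameter
nano_upper_nm = 100      # nanotech works at roughly 100 nm and below

ratio = hair_width_nm / nano_upper_nm
print(f"A 100 nm particle is about {ratio:.0f}x thinner than a human hair")
# prints: A 100 nm particle is about 750x thinner than a human hair
```

Particles at the smaller end of the nanoscale (10nm and below) push that ratio toward ten thousand, which is part of why they move so freely through skin, lungs, water, and soil.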
