Tag Archives: futurist

The algorithms.

 

I am not a mathematician. Not even close. My son is a bit of a wiz when it comes to math but not the kind of math you do in your head. His particular mathematical gift only works when he sees the equations. Still, I’d take that. Calculators give me fits. So the idea that I might decipher or write a functioning algorithm (the kind a computer could use) is tantamount to me turning water into wine.

Algorithms are all the buzz these days because they are the functioning math behind artificial intelligence (AI). So what exactly is an algorithm? I will turn to Merriam-Webster online.

“: a procedure for solving a mathematical problem (as of finding the greatest common divisor) in a finite number of steps that frequently involves repetition of an operation; broadly: a step-by-step procedure for solving a problem or accomplishing some end especially by a computer ⟨a search algorithm⟩.”

I’ll throw away the first part of that definition because I don’t understand it. The second part is more my speed: a step-by-step procedure for solving a problem. I get that. As a designer, I do that all the time. The HowStuffWorks website is even better at explaining the purpose of algorithms. Essentially, an algorithm is a way for a computer to do something. Of course, as with most problems, there is more than one way to get from point A to point B, so computer programmers choose the best algorithm for the task.
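Just to ground that, here is a tiny sketch, in Python, of the very example the dictionary mentions: finding the greatest common divisor. It is nothing more than a step-by-step procedure that repeats one operation until it is finished. (This is my own illustration, not something taken from any of the articles I mention here.)

```python
def greatest_common_divisor(a, b):
    """Euclid's algorithm: repeat one operation until the remainder is zero."""
    while b != 0:
        a, b = b, a % b  # replace (a, b) with (b, remainder of a divided by b)
    return a

print(greatest_common_divisor(48, 36))  # prints 12
```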

What does an algorithm look like? Think of a flow chart or a decision tree. When you turn that into code (the language of computers), it might look like the image below.

Turning an algorithm into code.
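I can’t reproduce that image here, so as a rough stand-in, here is a trivial decision tree (should I bring an umbrella?) translated into Python. The scenario and the names are my own invention, purely to show what turning a flow chart into code looks like.

```python
def what_to_do_about_rain(chance_of_rain, have_umbrella):
    """Each if/else below is one branch (diamond) of the flow chart."""
    if chance_of_rain >= 0.5:      # Is rain likely?
        if have_umbrella:          # Do I already own an umbrella?
            return "Bring the umbrella."
        return "Buy one on the way."
    return "Leave it at home."

print(what_to_do_about_rain(0.7, True))   # Bring the umbrella.
print(what_to_do_about_rain(0.2, False))  # Leave it at home.
```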

You may already know all this, but I didn’t. Not really. I use the term algorithm all the time to describe the technology and process behind AI, but it always helps me to break these ideas down to their parts.

With all that out of the way: this week, Futurism.com ran an article discussing Ray Kurzweil’s theory that our brains contain a master algorithm inside our neocortex. It is that algorithm that enables us to handle pattern recognition and all the vastly complex nuance that our brains process every day. Referencing Kurzweil, Futurism stated that,

“… the brain’s neocortex — that part of the brain that’s responsible for intelligent behavior — consists of roughly 300 million modules that recognize patterns. These modules are self-organized into hierarchies that turn simple patterns into complex concepts. Despite neuroscience advancing by leaps and bounds over the years, we still haven’t quite figured out how the neocortex works.”

But, according to Kurzweil, these multiple modules “all have the same algorithm.”

Presumably, when we figure that out, we will be able to create an AI that thinks like a human, or better than a human. Hold that thought.

On another part of the web was a story from FastCoDesign that asked the question, “What’s The Next Great Art Movement? Ask This Neural Network.” FastCo interviewed Ahmed Elgammal, a researcher at Rutgers University who is getting AI (using algorithms) to create masterpieces after studying all the major art movements through history and how they evolve. His objective is to have the AI come up with the next major art movement. The art is, well, not good art. How do I know? I create art, I’ve studied art, and I’ve even sold art, so I know more about art than I do, say, math. The art that Elgammal’s AI generates is intriguing, but it lacks that certain something that tells you it’s art. I think it might be a human thing, and it’s still something you can recognize.

So if you are still holding on to that earlier thought about algorithms and how we are working to perfect them, we could make the leap that a better functioning AI might fool us at some point and we wouldn’t be able to tell human art from the AI variety. There are a lot of people working on these types of things, and there are billions of dollars going toward the research.

Now I’m going to ask a stupid question. Why do we need an AI to tell us what the next movement in art is or should be? Are humans defective in this area? Couldn’t we just wait and see or are we just too impatient? Perhaps we have grown tired of creating art. If you know, please share.

Not to take anything away from Ray Kurzweil, but I guess I could ask the same question of AI. I assume that we could use an AI that is so far above our thinking that it can help us solve problems better than we could on our own. But if that AI is thinking so far beyond us, I’m not sure whether it would help us create better solutions or whether we would simply abdicate thinking to the AI. There’s a real danger of that, you know. Maybe thinking is overrated.

The question keeps coming up. Do we make things to help us flourish or do we make things because we can?

Ray Kurzweil: There’s a Blueprint for the Master Algorithm in Our Brains

 


Autonomous Assumptions

I’m writing about a recent post from futurist Amy Webb. Amy has been getting very political lately, which is a real turn-off for me, but she still has her ear to the rail of the future, so I will try to be more tolerant. Amy quoted a paragraph from an article entitled “If you want to trust a robot, look at how it makes decisions” from The Conversation, an eclectic “academic rigor, journalistic flair” blog site. The author, Michael Fisher, a professor of computer science at the University of Liverpool, says,

“When we deal with another human, we can’t be sure what they will decide but we make assumptions based on what we think of them. We consider whether that person has lied to us in the past or has a record for making mistakes. But we can’t really be certain about any of our assumptions as the other person could still deceive us.

Our autonomous systems, on the other hand, are essentially controlled by software so if we can isolate the software that makes all the high-level decisions – those decisions that a human would have made – then we can analyse the detailed working of these programs. That’s not something you can or possibly ever could easily do with a human brain.”

Fisher thinks that might make autonomous systems more trustworthy than humans. He says that through software analysis we can be almost certain that the software controlling our systems will never make bad decisions.

There is a caveat.

“The environments in which such systems work are typically both complex and uncertain. So while accidents can still occur, we can at least be sure that the system always tries to avoid them… [and] we might well be able to prove that the robot never intentionally means to cause harm.”

That’s comforting. But OK, computers fly and land airplanes, they make big decisions about air traffic, they are driving cars with people in them, and they control much of our power grid and our missile defense, too. So why should we worry? It is a matter of definitions. The terms we use to describe new technologies clearly have different interpretations. How do you define bad decisions? Fisher says,

“We are clearly moving on from technical questions towards philosophical and ethical questions about what behaviour we find acceptable and what ethical behaviour our robots should exhibit.”

If you have programmed an autonomous soldier to kill the enemy, is that ethical? Assuming that the Robocop can differentiate between good guys and bad guys, you have nevertheless opened the door to autonomous destruction. In the case of an autonomous soldier in the hands of a bad actor, you may be the enemy.

My point is this: it’s not only a question of whether we understand how the software works and whether it’s reliable; it may be more about who programmed the bot in the first place. In my graphic novel, The Lightstream Chronicles, there are no bad robots (I call them synths), but occasionally bad people get hold of the good synths and make them do bad things. They call that twisting. It’s illegal, but of course, that doesn’t stop it. Criminals do it all the time.

You see, even in the future some things never change. In the words of Aldous Huxley,

“Technological progress has merely provided us with more efficient means for going backwards.”

 


Of autonomous machines.

 

Last week we talked about how converging technologies can sometimes yield unpredictable results. One of the most influential players in the development of new technology is DARPA and the defense industry. There is a lot of technological convergence going on in the world of defense. Let’s combine robotics, artificial intelligence, machine learning, bio-engineering, ubiquitous surveillance, social media, and predictive algorithms for starters. All of these technologies are advancing at an exponential pace. It’s difficult to take a snapshot of any one of them at a moment in time and predict where they might be tomorrow. When you start blending them, the possibilities become downright chaotic. With each step, it is prudent to ask whether there is any meaningful review. What are the ramifications of error as well as success? What are the possibilities for misuse? Who is minding the store? We can hope that there are answers to these questions that go beyond platitudes like “Don’t stand in the way of progress,” “Time is of the essence,” or “We’ll cross that bridge when we come to it.”

No comment.

I bring this up after having seen some unclassified documents on Human Systems and Autonomous Defense Systems (AKA autonomous weapons). (See a previous blog on this topic.) Links to these documents came from a crowd-funded “investigative journalist,” Nafeez Ahmed, publishing on a website called INSURGE intelligence.

One of the documents, entitled Human Systems Roadmap, is a slide presentation given to the National Defense Industrial Association (NDIA) conference last year. The list of agencies involved in that conference and in the rest of the documents cited reads like an alphabet soup of military and defense organizations that most of us have never heard of. There are multiple components to the pitch, but one that stands out is “Autonomous Weapons Systems that can take action when needed.” Autonomous weapons are those that are capable of making the kill decision without human intervention. There is also, apparently, some focused inquiry into “Social Network Research on New Threats… Text Analytics for Context and Event Prediction…” and “full spectrum social media analysis.” We could get all up in arms about this last feature, but recent incidents in places such as Benghazi, Egypt, and Turkey had a social networking component that enabled extreme behavior to be quickly mobilized. In most cases, the result was a tragic loss of life. In addition to sharing photos of puppies, social media, it seems, is also good at organizing lynch mobs. We shouldn’t be surprised that governments would want to know how to predict such events in advance. The bigger question is how we should intercede and whether that decision should be made by a human being or a machine.

There are lots of other aspects and lots more documents cited in Ahmed’s lengthy, albeit activistic, report, but the idea here is that rapidly advancing technology is enabling things that were previously held to be science fiction or just impossible. Will we reach the point where these systems are fully operational before we reach the point where we know they are totally safe? It’s a problem when technology grows faster than policy, ethics, or meaningful review. And it seems to me that it is always a problem when the race to make something work is more important than understanding the ramifications if it does.

To be clear, I’m not one of those people who think that anything and everything the military can conceive of is automatically wrong. We will never know how many catastrophes our national defense services have averted by their vigilance and technological prowess. It should go without saying that the bad guys will get more sophisticated in their methods and tactics, and if we are unable to stay ahead of the game, then we will need to get used to the idea of catastrophe. When push comes to shove, I want the government to be there to protect me. That being said, I’m not convinced that the defense infrastructure (or any part of the tech sector, for that matter) is as diligent about anticipating the repercussions of its creations as it is about getting them functioning. Only individuals can insist on meaningful review.

Thoughts?

 


Election lessons. Beware who you ignore.

It was election week here in America, but unless you’ve been living under a rock for the last eight months, you already know that. Not unlike the Brexit vote from earlier this year, a lot of people were genuinely surprised by the outcome. Perhaps most surprising to me is that the people who seem to be the most surprised are the people who claimed to know—for certain—that the outcome would be otherwise. Why do you suppose that is? There is a lot of finger-pointing and head-scratching going on but from what I’ve seen so far none of these so-called experts has a clue why they were wrong.

Most of them are blaming the polls for their miscalculations. And it’s starting to look like their error came not in whom they polled but in whom they thought irrelevant and ignored. Many in the media are in denial that their efforts to shape the election may have even fueled the fire for the underdog. What has become of American journalism is shameful. Wikileaks proves that ninety percent of the media was kissing up to the left, with pre-approved interviews, stories, and marching orders to “shape the narrative.” I don’t care who you were voting for; that kind of collusion is a disgrace for democracy. Call it Pravda. I don’t want to turn this blog into a political commentary, but it was amusing to watch them all wearing the stupid hat on Wednesday morning. What I do want to talk about, however, is how we look at data to reach a conclusion.

In a morning-after article from the LinkedIn network, futurist Peter Diamandis posted the topic, “Here’s what election campaign marketing will look like in 2020.” It was less about the election and more about future tech, with an occasional reference to the election and campaign processes. He has five predictions. First is the news flash that “Social media will have continued to explode. [and that] The single most important factor influencing your voting decision is your social network.” Diamandis says that “162 million people log onto Facebook at least once a month.” I agree with the first part of his statement, but what about the other 50 percent, and those who don’t share their opinions on politics? A lot of pollsters are looking at the huge disparity between projections and actuals in the 2016 election. They are acknowledging that a lot of people simply weren’t forthcoming in pre-election polling. Those planning to vote Trump, for example, knew that Trump was a polarizing figure, and they weren’t going to get into it with their friends on social media or even with a stranger taking a poll. Then, I’m willing to bet that a lot of the voters who put the election over the top are in the fifty percent that isn’t on social media. Just look at the demographics for social media.

Peter Diamandis is a brilliant guy, and I’m not here to pick on him. Many of his predictions are quite conceivable. Mostly he’s talking about an increase in data mining and about AI getting better at learning from it, with a laser focus on the individual. If you add this to programmable avatars, facial recognition improvements, and the Internet of Things, the future means that we are all going to be tracked with increasing levels of detail. And though our face is probably not something we can keep secret, if it all creeps you out, remember that much of this is based on what we choose to share. Fortunately, it will take a little bit longer than 2020 for all of these new technologies to read our minds—so until then we still hold the cards. As long as you don’t share your most private thoughts on social media or with pollsters, you’ll keep them guessing.


Defining [my] design fiction.

 

It’s tough to define something that is still so new that, in practice, there is no prescribed method and there are dozens of interpretations. I met some designers at a recent conference in Trento, Italy, who insist they invented the term in 1995, but most authorities attribute the origin to Bruce Sterling in his 2005 book, Shaping Things. The book was not about design fiction per se. Sterling is fond of creating neologisms, and this was one of those (like the term ‘spime’) that appeared in that book. It caught on. Sometime later, Sterling sought to clarify it. His most quoted definition is, “The deliberate use of diegetic prototypes to suspend disbelief about change.” If you rattle that off to most people, they look at you glassy-eyed. Fortunately, in 2013, Sterling went into more detail.

“‘Deliberate use’ means that design fiction is something that people do with a purpose. ‘Diegetic’ is from film and theatre studies. A movie has a story, but it also has all the commentary, scene-setting, props, sets and gizmos to support that story. Design fiction doesn’t tell stories — instead, it designs prototypes that imply a changed world. ‘Suspending disbelief’ means that design fiction has an ethics. Design fictions are fakes of a theatrical sort, but they’re not wicked frauds or hoaxes intended to rob or fool people. A design fiction is a creative act that puts the viewer into a different conceptual space — for a while. Then it lets him go. Design fiction has an audience, not victims. Finally, there’s the part about ‘change’. Awareness of change is what distinguishes design fictions from jokes about technology, such as over-complex Heath Robinson machines or Japanese chindogu (‘weird tool’) objects. Design fiction attacks the status quo and suggests clear ways in which life might become different.” (Sterling, 2013)

The above definition is the one on which I base most of my research. I’ve written on this before, such as what distinguishes it from science fiction, but I bring it up today because I frequently run into things that are labeled design fiction but are not. There are three non-negotiables for me: change, a critical eye on change, and suspending disbelief.

Change
Part of the intent of design fiction is to get you to think about change. Things are going to change. It implies a future. I suppose that doesn’t mean the fiction itself has to take place in the future; however, since we can’t go back in time, the only kind of change we’re going to encounter is the future variety. So, if the intent is to make us think, that thinking should have some redeeming benefit for the present, to make us better prepared for the future. Such as, “Wow. That future sucks. Let’s not let that happen.” Or, “Careful with that future scenario, it could easily go awry.” Like that.

A critical eye on change.
There are probably a lot of practitioners who would disagree with me on this point. The human race has a proclivity for messing things up. We often develop things in advance of actually thinking about what they might mean for society, the economy, our health, our environment, or our behavior. We design way too much stuff just because we can and because it might make us rich if we do. We need to think more before we act. That means we need to take some responsibility for what we design. Looking into the future with a critical eye on how things could go wrong, or on how wrong they might be without us noticing, is a crucial element in my interpretation of intent.

Suspending disbelief
As Sterling says, the objective here is not to fool you but to get close enough to a realistic scenario that you accept it could happen. If it’s off-the-wall, WTF conceptual art, absent any plausible existence, or sheer fantasy, it misses the point. I’m sure there’s a place and no doubt a purpose for those, but call it something else, not design fiction. It’s the same reason that Star Wars is not design fiction. There’s design and there’s fiction, but the intent is different.

I didn’t intend for this to turn into a rant, and it may all seem to you like splitting hairs, but often these subtle differences are important so that we know what we’re studying and why.

The nice thing about blogs is that if you have a different opinion, you can share.

 

Sterling, B., 2013. Design Fiction: “Patently Untrue” by Bruce Sterling [WWW Document]. WIRED. URL http://www.wired.co.uk/magazine/archive/2013/10/play/patently-untrue (accessed 12.12.14).

Who are you?

 

A few recent articles in the datasphere have centered around the pervasive tracking of our online activity, from the benign to the borderline unethical. One was from FastCompany, highlighting some practices that web marketers use to track the folks who visit their sites. The article by Steve Melendez lists a handful of these. They range from the basics, like first-party cookies and A/B testing, to more invasive methods such as psychological testing (thanks, Facebook), third-party tracking cookies, and differential pricing. The cookie is, of course, the most basic. I use them on this site and on The Lightstream Chronicles to see if anyone is visiting, where they’re coming from, and a bunch of other minutiae. Using Google Analytics, I can, for example, see what city or country my readers are coming from, their age and sex, whether they are regulars or new visitors, whether they visit via mobile or desktop, Apple or Windows, and, if they came to my site by way of referral, where they originated. Then I know if my ads for the graphic novel are working. I find this harmless. I have no interest in knowing your sexual preference or where you shop, and above all, I’m not selling anything (at least not yet). I’m just looking for more eyeballs. More viewers mean that I’m not wasting my time and that somebody is paying attention. It’s interesting that a couple of months ago the EU internet authorities sent me a snippet of code that I was “required” to post on the LSC site alerting my visitors that I use cookies. Aside from the U.S., my highest viewership is from the UK. Apparently they are aware that their citizens are visiting. Hmm.

I have software that allows me to A/B test, which means I could change up something on the graphic novel homepage and see if it gets more reaction than a previous version. But I barely have the time to publish a new blog or episode, much less create different versions and test them. A one-man show has its limitations.
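If you’re curious what that looks like under the hood, here is a bare-bones sketch of the idea: randomly send each visitor to version A or version B of a page and tally which one gets more clicks. This is my own simplified illustration, not the actual software I use.

```python
import random

# Tallies for two versions of the homepage.
results = {"A": {"visitors": 0, "clicks": 0},
           "B": {"visitors": 0, "clicks": 0}}

def assign_variant():
    """Randomly split incoming traffic 50/50 between version A and version B."""
    return random.choice(["A", "B"])

def record_visit(variant, clicked):
    """Count the visit and, if the visitor clicked through, count that too."""
    results[variant]["visitors"] += 1
    if clicked:
        results[variant]["clicks"] += 1

# Simulated traffic, just to show the bookkeeping.
for _ in range(1000):
    variant = assign_variant()
    record_visit(variant, clicked=random.random() < 0.1)

for variant, tally in results.items():
    rate = tally["clicks"] / max(tally["visitors"], 1)
    print(variant, f"{rate:.1%}")  # click-through rate for each version
```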

The rest of the tracking methods highlighted in the above article require a lot of devious programming. Since I have my hands full with the basics, this stuff is way above my pay grade. Even if it wasn’t, I think it all goes a bit too far.

Personally, I deplore most internet advertising. I know that makes me a hypocrite since I use it from time to time to drive traffic to my site. I also realize that it is probably a necessary evil. Sites need revenue, or they can’t pump out the content on which we have come to rely. Unfortunately, the landscape often turns into a melee. Tumblr is a good example. Initially, they integrated their ads into the format of their posts. So as you are scrolling through the content, you see an ad within their signature brand presentation. Cool. Then they started doing separate in-line ads. These looked entirely different from their brand content, and the ads were those annoying things like “Grandma discovers the fountain of youth.” Not cool. Then they introduced this floating ad box that tracks you all the way down the page as you scroll through content. You get no break from it. It’s distracting, and based on the content, it can be horrifying, like Hillary Clinton staring at you for seven minutes. How much can a person take?

And it won’t go away.

Since my blog is future-oriented, the question arises: what does this have to do with the future? Plenty. These marketing techniques will only become more sophisticated. Many of them already incorporate artificial intelligence to map your activity and predict your every want and need—maybe even the ones you didn’t think anyone knew you had. Is this an invasion of privacy? If it is, it’s going to get more invasive. And as I’m fond of saying, we need to pay attention to these technologies and practices now, or we won’t have a say in where they end up. As a society, we have to do better than just adapt to whatever comes along. We need to help point them in the right direction from the beginning.

 


Thought leaders and followers.

 

Next week, the World Future Society is having its annual conference. As a member, I really should be going, but I can’t make it this year. The future is a dicey place. There are people convinced that we can create a utopia, some are warning of dystopia, and the rest are settled somewhere in between. Based on promotional emails that I have received, one of the topics is “The Future of Evolution and Human Nature.” According to the promo,

“The mixed emotions and cognitive dissonance that occur inside each of us also scale upward into our social fabric: implicit bias against new perspectives, disdain for people who represent ‘other’, the fear of a new world that is not the same as it has always been, and the hopelessness that we cannot solve our problems. We know from experience that this negativity, hatred, fear, and hopelessness is not what it seems like on the surface: it is a reaction to change. And indeed we are experiencing a period of profound change. There is a larger story of our evolution that extends well beyond the negativity and despair that feels so real to us today. It’s a story of redefining and building infrastructure around trust, hope and empathy. It’s a story of accelerating human imagination and leveraging it to create new and wondrous things. It is a story of technological magic that will free us from scarcity and ensure a prosperous lifestyle for everyone, regardless of where they come from.”

Whoa. I have to admit, this kind of talk makes me uncomfortable. Are fear of a new world, negativity, and hatred merely reactions to change? Will technosocial magic solve all our problems? This type of rhetoric sounds more like a movement than a conference that examines differing views on an important topic. It would seem to frame caution as fear and negativity, and then we throw in that hyperbole, hatred. Does it sound like the beginning of an agenda, with a framework that characterizes those who disagree as haters? I think it does. It’s a popular tactic.

These views do not by any means reflect the opinions of the entire WFS membership, but there is a significant contingent, such as the folks from Humanity+, who hold the belief that we can fix human evolution—even human nature—with technology. For me, this is treading into thorny territory.

What is human nature? Merriam-Webster online provides this definition:

“[…]the nature of humans; especially: the fundamental dispositions and traits of humans.” Presumably, we include good traits and bad traits. Will our discussions center on which features to fix and which to keep or enhance? Who will decide?

What about the human condition? Can we change this? Should we? According to Wikipedia,

“The human condition is ‘the characteristics, key events, and situations which compose the essentials of human existence, such as birth, growth, emotionality, aspiration, conflict, and mortality.’ This is a very broad topic which has been and continues to be pondered and analyzed from many perspectives, including those of religion, philosophy, history, art, literature, anthropology, sociology, psychology, and biology.”

Clearly, there are a lot of different perspectives to be represented here. Do we honestly believe that technology will answer them all sufficiently? The theme of the upcoming WFS conference is “A Brighter Future IS Possible.” No doubt there will be a flurry of technosocial proposals presented there, and we should not dismiss them as the work of a bunch of fringe futurists. These voices are thought leaders. They lead thinking. Are we thinking? Are we paying attention? If so, then it’s time to discuss and debate these issues, or others will decide without us.


Future Shock

 

As you no doubt have heard, Alvin Toffler died on June 27, 2016, at the age of 87. Mr. Toffler was a futurist. The book for which he is best known, Future Shock, was a best seller in 1970 and was considered required college reading at the time. In essence, Mr. Toffler said that the future would be a disorienting place if we just let it happen. He said we need to pay attention.

Credit: Susan Wood/Getty Images from The New York Times 2016

This week, The New York Times published an article entitled Why We Need to Pick Up Alvin Toffler’s Torch by Farhad Manjoo. As Manjoo observes, at one time (the 1950s, 1960s, and 1970s), the study of foresight and forecasting was important stuff that governments and corporations took seriously. Though I’m not sure I agree with Manjoo’s assessment of why that is no longer the case, I do agree that it is no longer the case.

“In many large ways, it’s almost as if we have collectively stopped planning for the future. Instead, we all just sort of bounce along in the present, caught in the headlights of a tomorrow pushed by a few large corporations and shaped by the inescapable logic of hyper-efficiency — a future heading straight for us. It’s not just future shock; we now have future blindness.”

At one time, this was required reading.

When I attended the First International Conference on Anticipation in 2015, I was pleased to discover that the blindness was not everywhere. In fact, many of the people deeply rooted in the latest innovations in science and technology, architecture, social science, medicine, and a hundred other fields are very interested in the future. They see an urgency. But most governments don’t, and I fear that most corporations, even the tech giants, are more interested in being first with the next zillion-dollar technology than in asking whether that technology is the right thing to do. Even less are they asking what repercussions might flow from these advancements and what the ramifications of today’s decision-making are. We just don’t think that way.

I don’t believe that has to be the case. The World Future Society, for example, will be addressing the idea of futures studies as a requirement for high school education at its upcoming conference in Washington, DC. They ask,

“Isn’t it surprising that mainstream education offers so little teaching on foresight? Were you exposed to futures thinking when you were in high school or college? Are your children or grandchildren taught how decisions can be made using scenario planning, for example? Or take part in discussions about what alternative futures might look like? In a complex, uncertain world, what more might higher education do to promote a Futurist Mindset?”

It certainly needs to be part of design education, and it is one of the things I vigorously promote at my university.

As Manjoo sums up in his NYT article,

“Of course, the future doesn’t stop coming just because you stop planning for it. Technological change has only sped up since the 1990s. Notwithstanding questions about its impact on the economy, there seems no debate that advances in hardware, software and biomedicine have led to seismic changes in how most of the world lives and works — and will continue to do so.

Yet, without soliciting advice from a class of professionals charged with thinking systematically about the future, we risk rushing into tomorrow headlong, without a plan.”

And if that isn’t just crazy, at the very least it’s dangerous.

 

 


“At a certain point…”

 

A few weeks ago, Brian Barrett of WIRED magazine published an article entitled “New Surveillance System May Let Cops Use All of the Cameras.” According to the article,

“Computer scientists have created a way of letting law enforcement tap any camera that isn’t password protected so they can determine where to send help or how to respond to a crime.”

Barrett suggests that America has 30 million surveillance cameras out there. The sentence above, for me, is loaded. First of all, as with most technological advancements, this one is couched in the most benevolent terms: these scientists are going to help law enforcement send help or respond to crimes. This is also the argument that the FBI used to try to force Apple to provide a backdoor to the iPhone. It was for the common good.

If you are like me, you immediately see a giant red flag waving to warn of the gaping possibility for abuse. However, we can take heart to some extent. The sentence mentioned above also limits law enforcement access to “any camera that isn’t password protected.” Now the question is: what percentage of the 30 million cameras are password protected? Does it include, for example, more than kennel cams or random weather cams? Does it include the local ATM, traffic, and other security cameras? The system is called CAM2.

“…CAM2 reveals the location and orientation of public network cameras, like the one outside your apartment.”

It can aggregate the cameras in a given area and allow law enforcement to access them. Hmm.

Last week I teased that some of the developments I had reserved for 25, 50, or even more years into the future in my graphic novel The Lightstream Chronicles are showing signs of life in the next two or three years. A universal “cam” system like this is one of them; the idea of ubiquitous surveillance, or the mesh, only gets stronger with more cameras. Hence the idea behind my ubiquitous surveillance blog. If there is a system that can identify all of the “public network” cams, how far are we from identifying all of the “private network” cams? How long before these systems are hacked? Or, in the name of national security, how might these systems be appropriated? You may think this is the stuff of sci-fi, but it is also the stuff of design-fi, and design-fi, as I explained last week, is intended to make us think about how these things play out.

In closing, WIRED’s Barrett raised the issue of the potential for abusing systems such as CAM2 with Gautam Hans, policy counsel at the Center for Democracy & Technology. And, of course, we got the standard response:

“It’s not the best use of our time to rail against its existence. At a certain point, we need to figure out how to use it effectively, or at least with extensive oversight.”

Unfortunately, history has shown that that certain point usually arrives after something goes egregiously wrong. Then someone asks, “How could something like this happen?”


It’s all happening too fast.

 

Since design fiction is my area of research and focus, I have covered the difference between it and science fiction in previous blogs. But the two are quite closely related. Let me start with science fiction. There are a plethora of definitions for SF. Here are two of my favorites.

The first is from Isaac Asimov:

“[Social] science fiction is that branch of literature which is concerned with the impact of scientific advance on human beings.” — Isaac Asimov, Science Fiction Writers of America Bulletin, 1951 1

The second is from Robert Heinlein:

“…realistic speculation about possible future events, based solidly on adequate knowledge of the real world, past and present, and on a thorough understanding of the scientific method.” 2

I especially like the first because it emphasizes people at the heart of the storytelling. The second definition speaks to real-world knowledge, and understanding of the scientific method. Here, there is a clear distinction between science fiction and fantasy. Star Wars is not science fiction. Even George Lucas admits this. In a conversation at the Sundance Film Festival last year he is quoted as saying, “Star Wars really isn’t a science-fiction film, it’s a fantasy film and a space opera.”3 While Star Wars involves space travel (which is technically science based), the story has no connection to the real world; it may as well be Lord of the Rings.

I bring up these distinctions because design fiction is a hybrid of science fiction, but there is a difference. Sterling defines design fiction as “The deliberate use of diegetic prototypes to suspend disbelief about change.” Though even Sterling agrees that his definition is “heavy-laden,” the operative word in it is “deliberate.” In other words, a primary operand of design fiction is the designer’s intent. There is a purpose for design fiction, and it is to provoke discussion about the future. While it may entertain, that is not its purpose. It needs to be a provocation. For me, the more provocative, the better. The idea that we would go quietly into whatever future unfolds based upon whatever corporate or scientific manifesto is most profitable or most manageable makes me crazy.

The urgency arises from the fact that the future is moving way too fast. In The Lightstream Chronicles, some of the developments that I reserved for 25, 50, or even more years into the future are showing signs of life in the next two or three years. Next week I will introduce you to a couple of these technologies.

 

1. http://io9.com/5622186/how-many-defintions-of-science-fiction-are-there
2. Heinlein, R., 1983. The SF book of lists. In: Jakubowski, M., Edwards, M. (Eds.), The SF Book of Lists. Berkley Books, New York, p. 257.
3. http://www.esquire.com/entertainment/movies/a32507/george-lucas-sundance-quotes/