Tag Archives: WIRED

On utopia and dystopia. Part 2.

From now on we paint only pretty pictures. Get it?

A couple of blarticles (blog-like articles) caught my eye this week. Interestingly, the two blarticles reference the same work. There was a big brouhaha a couple of years ago about how dystopian science fiction and design fiction with dystopian themes were somehow bad for us and how people were getting sick of them. Based on the most recent lists of bestselling books and films, that no longer seems to be the case. Nevertheless, some science fiction writers, like Cory Doctorow (a fine author and Hugo winner), think that more utopian futures might be better at influencing public policy. As he wrote in Boing Boing earlier this month,

“Science fiction writers have a long history of intervening/meddling in policy, but historically this has been in the form of right-wing science fiction writers…”

Frankly, I have no idea what this has to do with politics, as there must certainly be more left-handed authors and filmmakers in Hollywood than their right-sided counterparts. He continues:

“But a new, progressive wing of design fiction practicioners [sic] are increasingly involved in policy questions…”

Doctorow’s article cites a long piece for Slate by the New America Foundation’s Kevin Bankston, who says,

“…a stellar selection of 64 bestselling sci-fi writers and visionary filmmakers, has tasked itself with imagining realistic, possible, positive futures that we might actually want to live in—and figuring out [how] we can get from here to there.”

That’s great because, as I said, I am all about making alternative futures legible for people to consider and contemplate. In the process, however, I don’t think we should give dystopia short shrift. The problem with utopias is that they tend to be prescriptive; in other words, “This is a better future because I say so.”

The futures I conjure up are neither utopian nor dystopian, but I do try to surface real concerns so that people can decide for themselves, kind of like a democracy. History has proven that, regardless of our utopian ideals, we more often than not mess things up. I don’t want my futures to be progressive, liberal, conservative, or right-wing, and I don’t think it should be the objective of science fiction or entertainment to help shape policy, especially when there is an obvious political purpose. It’s one thing to make alternative futures legible, another to shove them at us.

As long as it’s fiction and entertaining, utopias are great, but let’s not kid ourselves. Utopia, and to some extent dystopia, are individual perspectives. Frankly, I don’t want someone telling me that one future is better for me than another. In fact, that almost borders on dystopia in my thinking.

I’m not sure whether Bruce Sterling was answering Cory Doctorow’s piece, but Sterling’s stance on the issue is sharper and more insightful. Sterling is acutely aware that today is the focus: we look at futures and realize there are steps we need to take today to make tomorrow better. I recommend his post. Here are a couple of choice clips:

“The “better future” thing is jam-tomorrow and jam-yesterday talk, so it tends to become the enemy of jam today. You’re better off reading history, and realizing that public aspirations that do seem great, and that even meet with tremendous innovative success, can change the tenor of society and easily become curses a generation later. Not because they were ever bad ideas or bad things to aspire to or do, but because that’s the nature of historical causality. Tomorrow composts today.”

“If you like doing incredible things, because you’re of a science fictional temperament, then you should frankly admit your fondness for the way-out and the wondrous, and not disingenuously pretend that it’s somehow bound to improve the lot of the mundanes.”

Prettier pictures are not going to save us. Most of the world needs a wake-up call, not another dream.

In my humble opinion.


How science fiction writers’ “design fiction” is playing a greater role in policy debates

Various sci-fi projects allegedly creating a better future


On utopia and dystopia. Part 1.

A couple of interesting articles cropped up in the past week or so coming out of the WIRED Business Conference. The first was an interview with Jennifer Doudna, a pioneer of Crispr/Cas9, the gene-editing technique that makes editing DNA nearly as simple as splicing a movie together. That is, if you’re a geneticist. According to the interview, most of this technology is in use in crop design, for things like longer-lasting potatoes or wheat that doesn’t mildew. But Doudna knows that this is a potential Pandora’s box.

“In 2015, Doudna was part of a broad coalition of leading biologists who agreed to a worldwide moratorium on gene editing to the “germ line,” which is to say, edits that get passed along to subsequent generations. But it’s legally non-binding, and scientists in China have already begun experiments that involve editing the genome of human embryos.”

Crispr May Cure All Genetic Disease—One Day

Super-babies are just one of the potential ways to misuse Crispr. I blogged a longer and more diabolical list a couple of years ago.

Meddling with the primal forces of nature.

In her recent interview, though, Doudna focused on the more positive effects on farming, for crops like rice and tomatoes.

You may not immediately see the connection, but there was a related story from the same conference, where WIRED interviewed Jonathan Nolan and Lisa Joy, co-creators of the HBO series Westworld. If you haven’t seen Westworld, I recommend it if only for Anthony Hopkins’ performance. As far as I’m concerned, Anthony Hopkins could read the phone book and I would be spellbound.

At any rate, the article notes:

“The first season of Westworld wasted no time in going from “hey cool, robots!” to “well, that was bleak.” Death, destruction, android torture—it’s all been there from the pilot onward.”

Which pretty much sums it up. According to Nolan,
“We’re inventing cautionary tales for ourselves…”

“And Joy sees Westworld, and sci-fi in general, as an opportunity to talk about what humanity could or should do if things start to go wrong, especially now that advancements in artificial intelligence technologies are making things like androids seem far more plausible than before. ‘We’re leaping into the age of the unfathomable, the time when machines [can do things we can’t],’” Joy said.

Westworld’s Creators Know Why Sci-Fi Is So Dystopian

To me, this sounds familiar. It is the essence of my particular brand of design fiction. I don’t always set out to make it dystopian, but if we look at the way things seem to naturally evolve, virtually every technology once envisioned as a benefit to humankind ends up with someone misusing it. To look at any potentially transformative tech and not ask, “Transform into what?” is simply irresponsible. We love to sell our ideas on their promise of curing disease, saving lives, and ending suffering, but the technologies that we are designing today have epic downsides that many technologists do not even understand. Misuse happens so often that I’ve begun to see us as reckless if we don’t anticipate these repercussions in advance. It’s the subject of a new paper that I’m working on.

In the meantime, it’s important that we pay attention and demand that others do, too.

There’s more from the science fiction world on utopias vs. dystopias, and I’ll cover that next week.


An AI as President?


Back on May 19th, before I went on holiday, I promised to comment on an article that appeared that week advocating that we would be better off with artificial intelligence (AI) as President of the United States. Joshua Davis authored the piece, “Hear Me Out: Let’s Elect an AI as President,” for the business section of WIRED online. Let’s start out with a few quotes.

“An artificially intelligent president could be trained to maximize happiness for the most people without infringing on civil liberties.”

“Within a decade, tens of thousands of people will entrust their daily commute—and their safety—to an algorithm, and they’ll do it happily…The increase in human productivity and happiness will be enormous.”

Let’s start with the word happiness. What is that, anyway? I’ve seen it around in several discourses about the future: somehow we have to start focusing on human happiness above all things, but what makes me happy and what makes you happy may very well be different things. Then there is the frightening idea that it is the job of government to make us happy! There are a lot of folks out there who think the government should give us a guaranteed income and pay for our healthcare, and now, apparently, it should also make us happy. If you haven’t noticed from my previous blogs, I am not a progressive. If you believe that government should undertake the happiness challenge, you had better hope that their idea of happiness coincides with your own. Gerd Leonhard, a futurist whose work I respect, says that there are two types of happiness: the first is hedonic (pleasure), which tends to be temporary; the other is eudaimonic happiness, which he defines as human flourishing.1 I prefer the latter, as it is likely to be more meaningful. Meaning is rather crucial to well-being and purpose in life. I believe that we should be responsible for our own happiness. God help us if we leave it up to a machine.

This brings me to my next issue with this insane idea. Davis suggests that, simply by not driving, we will see an enormous increase in human productivity and happiness. According to the website Overflow Data,

“Of the 139,786,639 working individuals in the US, 7,000,722, or about 5.01%, use public transit to get to work according to the 2013 American Communities Survey.”

Are those 7 million working individuals who don’t drive happier and more productive? The survey should have asked, but I’m betting the answer is no. Davis also assumes that everyone will be able to afford an autonomous vehicle. Maybe providing every American with an autonomous vehicle is also the job of the government.

Where I agree with Davis is that we will probably abdicate our daily commute to an algorithm and do it happily. Maybe this is the most disturbing part of his argument. As I am fond of saying, we are sponges for technology, and we often adopt new technology without so much as a thought toward the broader ramifications of what it means to our humanity.

There are sober people out there advocating that we must start to abdicate our decision-making to algorithms because we have too many decisions to make. They are concerned that the current state of affairs is simply too painful for humankind. If you dig into the rationale that these experts are using, many of them are motivated by commerce. Already Google and Facebook and the algorithms of a dozen different apps are telling you what you should buy, where you should eat, who you should “friend” and, in some cases, what you should think. They give you news (real or fake), and they tell you this is what will make you happy. Is it working? Agendas are everywhere, but very few of them have you in the center.

As part of his rationale, Davis cites the proven ability of AI to beat the world’s Go champions over and over and over again, and to find melanomas better than board-certified dermatologists.

“It won’t be long before an AI is sophisticated enough to implement a core set of beliefs in ways that reflect changes in the world. In other words, the time is coming when AIs will have better judgment than most politicians.”

That seems like grounds to elect one as President, right? In fact, it is just another way for us to take our eye off the ball, to subordinate our autonomy to more powerful forces in the belief that technology will save us and make us happier.

Back to my previous point, that’s what is so frightening. It is precisely the kind of argument that people buy into. What if the new AI President decides that we will all be happier if we’re sedated, and then using executive powers makes it law? Forget checks and balances, since who else in government could win an argument against an all-knowing AI? How much power will the new AI President give to other algorithms, bots, and machines?

If we are willing to give up the process of purposeful work to make a living wage in exchange for a guaranteed income, to subordinate our decision-making to have “less to think about,” to abandon reality for a “good enough” simulation, and believe that this new AI will be free of the special interests who think they control it, then get ready for the future.

1. Leonhard, Gerd. Technology vs. Humanity: The Coming Clash between Man and Machine. p112, United Kingdom: Fast Future, 2016. Print.


Are you listening to the Internet of Things? Someone is.

As usual, it is a toss-up for what I should write about this week. Is it WIRED’s article on the artificial womb, FastCo’s article on design thinking, the design fiction world of the movie The Circle, or WIRED’s warning about apps using your phone’s microphone to listen for ultrasonic marketing ‘beacons’ that you can’t hear? Tough call, but I decided on a different WIRED post that talked about the vision of Zuckerberg’s future at F8. Actually, the F8 future is a bit like The Circle anyway, so I might be killing two birds with one stone.

At first, I thought the article, titled “Look to Zuck’s F8, Not Trump’s 100 Days, to See the Shape of the Future,” would be just another Trump-bashing opportunity (which I sometimes think WIRED prefers to writing about tech), but not so. It was about tech, mostly.

The article, written by Zachary Karabell, starts out with this quote:

“While the fate of the Trump administration certainly matters, it may shape the world much less decisively in the long-term than the tectonic changes rapidly altering the digital landscape.”

I believe this statement is dead-on, but I would include the entire “technological” landscape. The stage is becoming increasingly “set,” as the article continues:

“At the end of March, both the Senate and the House voted to roll back broadband privacy regulations that had been passed by the Federal Communications Commission in 2016. Those would have required internet service providers to seek customers’ explicit permission before selling or sharing their browsing history.”

Combine that with,

“Facebook[s] vision of 24/7 augmented reality with sensors, camera, and chips embedded in clothing, everyday objects, and eventually the human body…”

and the looming possibility of an end to net neutrality, and we could be setting ourselves up for the real Circle future.

“A world where data and experiences are concentrated in a handful of companies with what will soon be trillion dollar capitalizations risks being one where freedom gives way to control.”

To add kindling to this thicket, there is the Quantified Self movement (QS). According to their website,

“Our mission is to support new discoveries about ourselves and our communities that are grounded in accurate observation and enlivened by a spirit of friendship.”

Huh? OK. But they want to do this using “self-tracking tools.” This means sensors. They could be in wearables, implantables, or ingestibles. Essentially, they track you. Presumably, this is all so that we become more self-aware and more knowledgeable about ourselves and our behaviors. Health, wellness, anxiety, depression, concentration; the list goes on. Like many emerging movements that are linked to technologies, we open the door through health care or longevity, because it is an easy argument that being healthy and fit is better than being sick and out of shape. But that is all too simple. QS says that we gain “self knowledge through numbers,” and in the digital age that means data. In a climate that is increasingly less regulatory about what data can be shared and with whom, this could be the beginnings of the perfect storm.

As usual, I hope I’m wrong.


Augmented evidence. It’s a logical trajectory.

A few weeks ago, I gushed about how my students killed it at a recent guerrilla future enactment of a ubiquitous Augmented Reality (AR) future. Shortly after that, Mark Zuckerberg announced the Facebook AR platform. It uses the camera on your smartphone and, according to a recent WIRED article, transforms your smartphone into an AR engine.

Unfortunately, as we all know (and so does Zuck), the smartphone isn’t currently much of an engine. AR requires a lot of processing, and so does the AI that allows it to recognize the real world so it can layer additional information on top of it. That’s why Facebook (and others) are building their own neural network chips, so that the platform doesn’t have to run to the Cloud to access the processing required for Artificial Intelligence (AI). That will inevitably happen, which will make the smartphone experience more seamless, but that’s just part of the challenge for Facebook.

If you add to that the idea that we become even more dependent on looking at our phones while we are walking or, worse, driving (think Pokémon GO), then this latest announcement is, at best, foreshadowing.

As the WIRED article continues, tech writer Brian Barrett talked to Blair MacIntyre of Georgia Tech, who says,

“‘The phone has generally sucked for AR because holding it up and looking through it is tiring, awkward, inconvenient, and socially unacceptable,’ says MacIntyre. Adding more of it doesn’t solve those issues. It exacerbates them. (The exception might be the social acceptability part; as MacIntyre notes, selfies were awkward until they weren’t.)”

That last part is an especially interesting point. I’ll have to come back to that in another post.

My students did considerable research on exactly this kind of early-infancy stage that technologies undergo on their road to ubiquity. In another WIRED article, even Zuckerberg admitted,

“We all know where we want this to get eventually,” said Zuckerberg in his keynote. “We want glasses, or eventually contact lenses, that look and feel normal, but that let us overlay all kinds of information and digital objects on top of the real world.”

So there you have it. Glasses are the endgame, but as my students agreed, contact lenses, not so much. Think about it: if you didn’t have to stick a contact lens in your eyeball, you wouldn’t. Even if you solved the problem of computing inside a wafer-thin lens (and the myriad problems of heat and in-eye time), ubiquitous contact lenses are much farther away, if they ever arrive.

Student design team from Ohio State’s Collaborative Studio.

This is why I find my students’ solution so much more elegant and a far more logical trajectory. According to Barrett,

“The optimistic timeline for that sort of tech, though, stretches out to five or 10 years. In the meantime, then, an imperfect solution takes the stage.”

My students locked it down to seven years.

Finally, Zuckerberg made this statement:

“Augmented reality is going to help us mix the digital and physical in all new ways,” said Zuckerberg at F8. “And that’s going to make our physical reality better.”

Except that Zuck’s version of better may not be the same as mine or yours. Exactly what is wrong with reality, anyway?

If you want to see the full-blown presentation of what my students produced, you can view it at aughumana.net.

Note: Currently, the AugHumana experience is superior on Google Chrome. If you are a Safari or Firefox purist, you may have to wait for the page to load (up to 2 minutes). We’re working on this, so just use Chrome this time. We hope to have it fixed soon.


“At a certain point…”


A few weeks ago, Brian Barrett of WIRED magazine reported that a “NEW SURVEILLANCE SYSTEM MAY LET COPS USE ALL OF THE CAMERAS.” According to the article,

“Computer scientists have created a way of letting law enforcement tap any camera that isn’t password protected so they can determine where to send help or how to respond to a crime.”

Barrett suggests that America has 30 million surveillance cameras out there. The sentence above, for me, is loaded. First of all, as with most technological advancements, this one is couched in the most benevolent form: these scientists are going to help law enforcement send help or respond to crimes. This is also the argument that the FBI used to try to force Apple to provide a backdoor to the iPhone. It was for the common good.

If you are like me, you immediately see a giant red flag waving to warn us of the gaping possibility for abuse. However, we can take heart to some extent. The sentence mentioned above limits law enforcement access to “any camera that isn’t password protected.” Now the question is: What percentage of the 30 million cameras are password protected? Does it include, for example, more than kennel cams or random weather cams? Does it include the local ATM, traffic, and other security cameras? The system is called CAM2.

“…CAM2 reveals the location and orientation of public network cameras, like the one outside your apartment.”

It can aggregate the cameras in a given area and allow law enforcement to access them. Hmm.

Last week I teased that some of the developments I had reserved for 25, 50, or even more years into the future, through my graphic novel The Lightstream Chronicles, are showing signs of life in the next two or three years. A universal “cam” system like this is one of them; the idea of ubiquitous surveillance, or the mesh, only gets stronger with more cameras. Hence the idea behind my ubiquitous surveillance blog. If there is a system that can identify all of the “public network” cams, how far are we from identifying all of the “private network” cams? How long before these systems are hacked? Or, in the name of national security, how might these systems be appropriated? You may think this is the stuff of sci-fi, but it is also the stuff of design-fi, and design-fi, as I explained last week, is intended to make us think about how these things play out.

In closing, WIRED’s Barrett raised the issue of the potential for abusing systems such as CAM2 with Gautam Hans, policy counsel at the Center for Democracy & Technology. And, of course, we got the standard response:

“It’s not the best use of our time to rail against its existence. At a certain point, we need to figure out how to use it effectively, or at least with extensive oversight.”

Unfortunately, history has shown that that certain point usually arrives after something goes egregiously wrong. Then someone asks, “How could something like this happen?”


The end of code.


This week WIRED Magazine released their June issue announcing the end of code. That would mean that the ability to write code, so cherished in the job world right now, is on the way out. They attribute this tectonic shift to Artificial Intelligence, machine learning, neural networks, and the like. In the future (which is taking place now) we won’t have to write code to tell computers what to do; we will just have to teach them. I have been over this before in a number of previous writings. An example: Facebook uses a form of machine learning by collecting data from millions of pictures that are posted on the social network. When someone loads a group photo and identifies the people in the shot, Facebook’s AI remembers it by logging the prime coordinates on a human face and attributing them to that name (aka facial recognition). If the same coordinates show up again in another post, Facebook identifies it as you. People load the data (on a massive scale), and the machine learns. By naming the person or persons in the photo, you have taught the machine.
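To make the mechanism concrete, here is a toy sketch in Python of the match-by-coordinates idea described above. Treat it as a cartoon, not Facebook’s implementation: the names, numbers, and distance threshold are invented, and real systems match learned embeddings rather than raw landmark coordinates.

import numpy as np

gallery = {}  # name -> list of stored faceprint vectors

def teach(name, faceprint):
    # A user tags a face; the machine "remembers" the coordinates.
    gallery.setdefault(name, []).append(np.asarray(faceprint, dtype=float))

def identify(faceprint, threshold=0.6):
    # Label a new face as the closest known person, if close enough.
    probe = np.asarray(faceprint, dtype=float)
    best_name, best_dist = None, float("inf")
    for name, prints in gallery.items():
        for stored in prints:
            dist = np.linalg.norm(probe - stored)
            if dist < best_dist:
                best_name, best_dist = name, dist
    return best_name if best_dist < threshold else None

teach("alice", [0.10, 0.40, 0.33, 0.72])    # tagged once in a group photo
print(identify([0.12, 0.41, 0.31, 0.70]))   # a near match -> "alice"
print(identify([0.90, 0.10, 0.50, 0.20]))   # an unknown face -> None

Every tagged photo adds another labeled vector, which is exactly why loading the data “on a massive scale” matters: more examples per name make the matching sharper.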

The WIRED article makes some interesting connections about the evolution of our thinking concerning the mind, about learning, and how we have taken a circular route in our reasoning. In essence, the mind was once considered a black box; there was no way to figure it out, but you could condition responses, a la Pavlov’s dog. That logic changed with cognitive science and the idea that the brain is more like a computer. The computing analogy caught on, and researchers began to see the whole business of thought, memory, and thinking as stuff you could code, or hack, just like a computer. Indeed, it is this reasoning that has led to the notion that DNA is, in fact, codable, hence splicing through Crispr. If it’s all just code, we can make anything. That was the thinking. Now there are machine learning and neural networks. You still code, but only to set up the structure by which the “thing” learns; after that, it’s on its own. The result is fractal and not always predictable. You can’t go back in and hack the way it is learning because it has started to generate a private math—and we can’t make sense of it. In other words, it is a black box. We have, in effect, stymied ourselves.

There is an upside. To train a computer you used to have to learn how to code. Now you just teach it by showing or giving it repetitive information, something anyone can do, though, at this point, some do it better than others.

Always the troubleshooter, I wonder what happens when we—mystified at a “conclusion” or decision arrived at by the machine—can’t figure out how to make it stop arriving at that conclusion. You can do the math.

Do we just turn it off?


A facebook of a different color.

The tech site Ars Technica recently ran an article on the proliferation of a little-known app called Facewatch. According to the article’s writer, Sebastian Anthony, “Facewatch is a system that lets retailers, publicans, and restaurateurs easily share private CCTV footage with the police and other Facewatch users. In theory, Facewatch lets you easily report shoplifters to the police, and to share the faces of generally unpleasant clients, drunks, etc. with other Facewatch users.” The idea is that retailers or officials can look out for these folks and either keep an eye on them or just ask them to leave. The system, in use in the UK, appears to have a high rate of success.


The story continues. Of course, all technologies eventually converge, so now you don’t have to “keep an eye out” for ne’er-do-wells; your CCTV can do it for you. NeoFace from NEC works with the Facewatch list to do the scouting for you. According to NEC’s website: “NEC’s NeoFace Watch solution is specifically designed to integrate with existing surveillance systems by extracting faces in real time… and matching against a watch list of individuals.” In this case, it would be the Facewatch database. Ars’ Anthony makes this connection: “In the film Minority Report, people are rounded up by the Precrime police agency before they actually commit the crime…with Facewatch, and you pretty much have the same thing: a system that automatically tars people with a criminal brush, irrespective of dozens of important variables.”

Anthony points out that,

“Facewatch lets you share ‘subjects of interest’ with other Facewatch users even if they haven’t been convicted. If you look at the shop owner in a funny way, or ask for the service charge to be removed from your bill, you might find yourself added to the ‘subject of interest’ list.”

The odds of an innocent being added to the watchlist are quite good. Malicious behavior aside, you could be logged as you wander past a government protest, forget your PIN too many times at the ATM, or simply look too creepy in your Ray-Bans and hoodie.
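There is also simple arithmetic working against the innocent. Here is a back-of-the-envelope illustration in Python (every number is invented for the example, not taken from Facewatch or NEC): even an impressively accurate matcher, scanning enough faces, produces far more false alarms than genuine hits.

# All numbers invented for illustration.
faces_scanned_per_day = 100_000  # shoppers passing watchlist cameras
true_subjects = 10               # actual watchlisted people among them
hit_rate = 0.99                  # chance a real subject gets flagged
false_positive_rate = 0.001      # chance any innocent face gets flagged

true_hits = true_subjects * hit_rate
false_alarms = (faces_scanned_per_day - true_subjects) * false_positive_rate
share_innocent = false_alarms / (true_hits + false_alarms)
print(f"{false_alarms:.0f} innocents flagged vs. {true_hits:.1f} real hits")
print(f"{share_innocent:.0%} of all flags point at innocent people")

Under these made-up but hardly outlandish numbers, roughly nine out of ten flags land on someone who did nothing wrong.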

The story underscores a couple of my past rants. First, we don’t make laws to protect against things that are impossible, so when the impossible happens, we shouldn’t be surprised that there isn’t a law to protect against it.1 It is another red flag that technology is moving too fast, and as it converges with other technologies, it becomes radically unpredictable. Second, technology moves faster than politics, faster than policy, and often faster than ethics.2

There are a host of personal apps, many available for our iPhones or Androids, that sit on the precarious line between legal and illegal, curious and invasive. And there are more to come.


1. Quoting Selinger from Wood, David. “The Naked Future — A World That Anticipates Your Every Move.” YouTube. YouTube, 15 Dec. 2013. Web. 13 Mar. 2014.
2. Quoting Richards from Farivar, Cyrus. “DOJ Calls for Drone Privacy Policy 7 Years after FBI’s First Drone Launched.” Ars Technica. September 27, 2013. Accessed March 13, 2014. http://arstechnica.com/tech-policy/2013/09/doj-calls-for-drone-privacy-policy-7-years-after-fbis-first-drone-launched/.
Bookmark and Share

The foreseeable future.

From my perspective, the two most disruptive technologies of the next ten years will be a couple of acronyms: VR and AI. Virtual Reality will transform the way people learn, and their diversions: it will play an increasing role in entertainment and gaming, to the extent that many will experience some confusion and conflict with actual reality. Make sure you see last week’s blog for more on this. Between VR and AI, so much is happening that these two could easily dominate the topics discussed on this site next year. Today, I’ll begin the discussion with AI, but both technologies fall under my broader topic of the foreseeable future.

One of my favorite quotes of 2014 (seems like ancient history now) was from an article in Ars Technica by Cyrus Farivar.1 It was a drone story about the FBI’s fleet, which proliferated to the tune of $5 million gradually over a period of 10 years, almost unnoticed. Farivar cites a striking quote from Neil Richards, a law professor at Washington University in St. Louis: “We don’t write laws to protect against impossible things, so when the impossible becomes possible, we shouldn’t be surprised that the law doesn’t protect against it…” I love that quote because we are continually surprised that we did not anticipate one thing or the other. Much of this surprise, I believe, comes from experts who tell us that this or that won’t happen in the foreseeable future. One of these experts, Miles Brundage, a Ph.D. student at Arizona State, was quoted recently in an article in WIRED. About AI that could surpass human intelligence, Brundage said,

“At the point where we are today, no AI system is at all capable of taking over the world—and won’t be for the foreseeable future.”

There are two things that strike me about these kinds of statements. First is the obvious fact that no one can see the future in the first place, and second, the clear implication that it will happen, just not yet. It also suggests that we shouldn’t be concerned; it’s too far away. The article was about Elon Musk open-sourcing something called OpenAI. According to Nathaniel Wood, reporting for WIRED, OpenAI is deep-learning code that Musk and his investors want to share with the world, for free. This news comes on the heels of Google’s open-sourcing of their AI code called TensorFlow, immediately followed by a Facebook announcement that they would be sharing their Big Sur server hardware. As the article points out, this is not all magnanimous altruism. By opening the door to formerly proprietary software or hardware, folks like Musk and companies like Google and Facebook stand to gain. They gain by recruiting talent, and by exponentially increasing development through free outsourcing. A thousand people working with your code are much better than the hundreds inside your building. Here are two very important factors that folks like Brundage don’t take into consideration. First, these people are in a race, and by open-sourcing their stuff they are enlisting others to help them run it. Second, there is that term, exponential. I use it most often when I refer to Kurzweil’s Law of Accelerating Returns, and it is exactly these kinds of developments that make his prediction so believable. So maybe the foreseeable future is not that far away after all.
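For a feel of what exponential means here, consider a quick sketch in Python. The 18-month doubling period is a rule-of-thumb assumption in the spirit of Kurzweil’s law, not a measured constant:

# Rough illustration: if capability doubles every 18 months, growth
# over a decade is ~100x, not the ~7x that linear intuition suggests.
DOUBLING_PERIOD_MONTHS = 18  # assumed rule of thumb, not a measurement

for years in (3, 5, 10):
    doublings = years * 12 / DOUBLING_PERIOD_MONTHS
    print(f"{years:2d} years -> {2 ** doublings:7.1f}x starting capability")

Linear intuition says ten years buys about seven increments of 18-month progress; compounding turns it into roughly a hundredfold jump, which is why “not in the foreseeable future” can evaporate faster than the experts expect.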

All this being said, the future is not foreseeable, and the exponential growth in areas like VR and AI will continue. The WIRED article continues with this commentary on AI (which we all know):

“Deep learning relies on what are called neural networks, vast networks of software and hardware that approximate the web of neurons in the human brain. Feed enough photos of a cat into a neural net, and it can learn to recognize a cat. Feed it enough human dialogue, and it can learn to carry on a conversation. Feed it enough data on what cars encounter while driving down the road and how drivers react, and it can learn to drive.”
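In the same spirit, here is a toy, pure-Python flavor of “feed it enough examples and it learns”: a single artificial neuron nudged toward the right answers on labeled data. The features and data are invented, and a real deep network stacks millions of such units, so read this as a cartoon of the principle rather than anything WIRED, Google, or OpenAI ships.

def train(examples, epochs=200, lr=0.1):
    # examples: list of (feature_vector, label) pairs, label 0 or 1.
    n = len(examples[0][0])
    weights, bias = [0.0] * n, 0.0
    for _ in range(epochs):
        for features, label in examples:
            activation = sum(w * x for w, x in zip(weights, features)) + bias
            error = label - (1 if activation > 0 else 0)
            weights = [w + lr * error * x for w, x in zip(weights, features)]
            bias += lr * error
    return weights, bias

# Invented features: (pointy ears, whiskers) -> is it a cat?
data = [([1.0, 1.0], 1), ([1.0, 0.8], 1), ([0.1, 0.0], 0), ([0.0, 0.2], 0)]
weights, bias = train(data)
score = sum(w * x for w, x in zip(weights, [0.9, 1.0])) + bias
print("cat" if score > 0 else "not a cat")

Nobody hand-codes the rule for “cat”; the examples do the teaching, which is the whole point of the end-of-code argument.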

Despite their benevolence, this is why Musk and Facebook and Google are in the race. Musk is quick to add that while his motives have an air of transparency to them, it is also true that the more people who have access to deep-learning software, the less likely it is that one guy will have a monopoly on it.

Musk is a smart guy. He knows that AI could be a blessing or a curse. Open sourcing is his hedge. It could be a good thing… for the foreseeable future.


1. Farivar, Cyrus. “DOJ Calls for Drone Privacy Policy 7 Years after FBI’s First Drone Launched.” Ars Technica. September 27, 2013. Accessed March 13, 2014. http://arstechnica.com/tech-policy/2013/09/doj-calls-for-drone-privacy-policy-7-years-after-fbis-first-drone-launched/.

The ultimate wild card.


One of the things that futurists do when they imagine what might happen down the road is to factor in the wild card. Sports and movie references aside, a wild card is defined by dictionary.com as “… of, being, or including an unpredictable or unproven element, person, item, etc.” One might use this term to say, “Barring a wild card event like a meteor strike, global thermonuclear war, or a massive earthquake, we can expect Earth’s population to grow by (x) percent.”

The thing about wild card events is that they do happen. 9/11 could be considered a wild card. Chernobyl, Fukushima, and Katrina would also fall into this category. At the core, they are unpredictable, and their effects are widespread. There are think tanks that work on the probabilities of these occurrences and then play with scenarios for addressing them.

I’m not sure what to call something that would be entirely predictable but that we still choose to ignore. Here I will go with a quote:

“The depravity of man is at once the most empirically verifiable reality but at the same time the most intellectually resisted fact.”

― Malcolm Muggeridge

Some will discount this automatically because the depravity of man refers to the Christian theology that without God, our nature is hopeless. Or as Jeremiah would say, our heart is “deceitful and desperately wicked” (Jeremiah 17:9).

If you don’t believe in that, then maybe you are willing to accept the more secular notion that man can be desperately stupid. To me, humanity’s uncanny ability to foul things up is the recurring (not-so) wild card. It makes every new science as much a potential disaster as a panacea. We don’t consider it often enough. If we look back through my previous blogs, from transhumanism to genetic design, this threat looms large. You can call me a pessimist if you want, but the video link below stands as a perfect example of my point. It is a compilation of all the nuclear tests, atmospheric, underground, and underwater, since 1945. Some of you might think that after a few tests and the big bombs of WWII we decided to keep a lid on the insanity. Nope.

If you can watch the whole thing without sinking into total depression and reaching for the Clorox, you’re stronger than I am. And, sadly, it continues. We might ask how we have survived this long.
