Tag Archives: Facebook

An AI as President?

 

Back on May 19th, before I went on holiday, I promised to comment on an article that appeared that week advocating that we would be better off with artificial intelligence (AI) as President of the United States. Joshua Davis authored the piece, Hear Me Out: Let’s Elect an AI as President, for the business section of WIRED online. Let’s start out with a few quotes.

“An artificially intelligent president could be trained to maximize happiness for the most people without infringing on civil liberties.”

“Within a decade, tens of thousands of people will entrust their daily commute—and their safety—to an algorithm, and they’ll do it happily…The increase in human productivity and happiness will be enormous.”

Let’s start with the word happiness. What is that anyway? I’ve seen it around in several discourses about the future, the idea that somehow we have to start focusing on human happiness above all things, but what makes me happy and what makes you happy may very well be different things. Then there is the frightening idea that it is the job of government to make us happy! There are a lot of folks out there who think the government should give us a guaranteed income, pay for our healthcare, and now, apparently, make us happy as well. If you haven’t noticed from my previous blogs, I am not a progressive. If you believe that government should undertake the happiness challenge, you had better hope that its idea of happiness coincides with your own. Gerd Leonhard, a futurist whose work I respect, says that there are two types of happiness: the first is hedonic (pleasure), which tends to be temporary; the other is eudaimonic happiness, which he defines as human flourishing.1 I prefer the latter, as it is likely to be more meaningful. Meaning is rather crucial to well-being and purpose in life. I believe that we should be responsible for our own happiness. God help us if we leave it up to a machine.

This brings me to my next issue with this insane idea. Davis suggests that by simply not driving, there will be an enormous increase in human productivity and happiness. According to the website Overflow Data,

“Of the 139,786,639 working individuals in the US, 7,000,722, or about 5.01%, use public transit to get to work according to the 2013 American Communities Survey.”

Are those 7 million working individuals who don’t drive happier and more productive? The survey should have asked, but I’m betting the answer is no. Davis also assumes that everyone will be able to afford an autonomous vehicle. Maybe providing every American with an autonomous vehicle is also the job of the government.

Where I agree with Davis is that we will probably abdicate our daily commute to an algorithm and do it happily. Maybe this is the most disturbing part of his argument. As I am fond of saying, we are sponges for technology, and we often adopt new technology without so much as a thought toward the broader ramifications of what it means to our humanity.

There are sober people out there advocating that we must start to abdicate our decision-making to algorithms because we have too many decisions to make. They are concerned that the current state of affairs is simply too painful for humankind. If you dig into the rationale that these experts are using, many of them are motivated by commerce. Already Google and Facebook and the algorithms of a dozen different apps are telling you what you should buy, where you should eat, who you should “friend” and, in some cases, what you should think. They give you news (real or fake), and they tell you this is what will make you happy. Is it working? Agendas are everywhere, but very few of them have you in the center.

As part of his rationale, Davis cites AI’s proven ability to beat the world’s Go champions over and over and over again, and to find melanomas better than board-certified dermatologists.

“It won’t be long before an AI is sophisticated enough to implement a core set of beliefs in ways that reflect changes in the world. In other words, the time is coming when AIs will have better judgment than most politicians.”

That seems like grounds to elect one as President, right? In fact, it is just another way for us to take our eye off the ball, to subordinate our autonomy to more powerful forces in the belief that technology will save us and make us happier.

Back to my previous point, that’s what is so frightening. It is precisely the kind of argument that people buy into. What if the new AI President decides that we will all be happier if we’re sedated, and then using executive powers makes it law? Forget checks and balances, since who else in government could win an argument against an all-knowing AI? How much power will the new AI President give to other algorithms, bots, and machines?

If we are willing to give up the process of purposeful work to make a living wage in exchange for a guaranteed income, to subordinate our decision-making to have “less to think about,” to abandon reality for a “good enough” simulation, and to believe that this new AI will be free of the special interests who think they control it, then get ready for the future.

1. Leonhard, Gerd. Technology vs. Humanity: The Coming Clash Between Man and Machine. United Kingdom: Fast Future, 2016. p. 112. Print.


Are you listening to the Internet of Things? Someone is.

As usual, it is a toss-up as to what I should write about this week. Is it WIRED’s article on the artificial womb, FastCo’s article on design thinking, the design fiction world of the movie The Circle, or WIRED’s warning about apps using your phone’s microphone to listen for ultrasonic marketing ‘beacons’ that you can’t hear? Tough call, but I decided on a different WIRED post that talked about the vision of Zuckerberg’s future at F8. Actually, the F8 future is a bit like The Circle anyway, so I might be killing two birds with one stone.

At first, I thought the article, titled “Look to Zuck’s F8, Not Trump’s 100 Days, to See the Shape of the Future,” would be just another Trump-bashing opportunity (which I sometimes think WIRED prefers to writing about tech), but not so. It was about tech, mostly.

The article, written by Zachary Karabell, starts out with this quote:

“While the fate of the Trump administration certainly matters, it may shape the world much less decisively in the long-term than the tectonic changes rapidly altering the digital landscape.”

I believe this statement is dead-on, but I would include the entire “technological” landscape. The stage is becoming increasingly “set,” as the article continues,

“At the end of March, both the Senate and the House voted to roll back broadband privacy regulations that had been passed by the Federal Communications Commission in 2016. Those would have required internet service providers to seek customers’ explicit permission before selling or sharing their browsing history.”

Combine that with,

“Facebook[’s] vision of 24/7 augmented reality with sensors, camera, and chips embedded in clothing, everyday objects, and eventually the human body…”

and the looming possibility of ending net neutrality, and we could be setting ourselves up for the real Circle future.

“A world where data and experiences are concentrated in a handful of companies with what will soon be trillion dollar capitalizations risks being one where freedom gives way to control.”

To add kindling to this thicket, there is the Quantified Self movement (QS). According to their website,

“Our mission is to support new discoveries about ourselves and our communities that are grounded in accurate observation and enlivened by a spirit of friendship.”

Huh? OK. But they want to do this using “self-tracking tools.” This means sensors. They could be in wearables or implantables or ingestibles. Essentially, they track you. Presumably, this is all so that we become more self-aware and more knowledgeable about ourselves and our behaviors. Health, wellness, anxiety, depression, concentration; the list goes on. Like many emerging movements that are linked to technologies, we open the door through health care or longevity, because it is an easy argument that being healthy or fit is better than being sick and out of shape. But that is all too simple. QS says that we gain “self knowledge through numbers,” and in the digital age that means data. In a climate that is increasingly less regulatory about what data can be shared and with whom, this could be the beginnings of the perfect storm.
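To make “self knowledge through numbers” concrete, here is a minimal, hypothetical sketch of what a QS-style tracker reduces a person to: a stream of sensor readings and a few summary statistics. The data, field names, and numbers are invented for illustration; this is not any particular tracker’s format.

```python
# Hypothetical sketch: "self knowledge through numbers" as plain data.
# A few days of invented readings from a wearable; not a real QS API or format.
from statistics import mean

week = [
    {"day": "Mon", "steps": 6200, "resting_hr": 64, "sleep_hours": 6.5},
    {"day": "Tue", "steps": 8900, "resting_hr": 62, "sleep_hours": 7.0},
    {"day": "Wed", "steps": 4100, "resting_hr": 66, "sleep_hours": 5.5},
    {"day": "Thu", "steps": 10400, "resting_hr": 61, "sleep_hours": 7.5},
    {"day": "Fri", "steps": 7300, "resting_hr": 63, "sleep_hours": 6.0},
]

avg_steps = mean(d["steps"] for d in week)
avg_hr = mean(d["resting_hr"] for d in week)
avg_sleep = mean(d["sleep_hours"] for d in week)

print(f"avg steps/day: {avg_steps:.0f}, resting HR: {avg_hr:.1f} bpm, sleep: {avg_sleep:.1f} h")
# The same summary that gives you "self knowledge" is exactly the kind of
# profile a third party would want to buy in a less-regulated data market.
```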

As usual, I hope I’m wrong.

 

 

 


Augmented evidence. It’s a logical trajectory.

A few weeks ago I gushed about how my students killed it at a recent guerrilla future enactment on a ubiquitous Augmented Reality (AR) future. Shortly after that, Mark Zuckerberg announced the Facebook AR platform. The platform uses the camera on your smartphone and, according to a recent WIRED article, transforms your smartphone into an AR engine.

Unfortunately, as we all know (and so does Zuck), the smartphone isn’t currently much of an engine. AR requires a lot of processing, and so does the AI that allows it to recognize the real world so it can layer additional information on top of it. That’s why Facebook (and others) are building their own neural network chips, so that the platform doesn’t have to run to the Cloud to access the processing required for Artificial Intelligence (AI). That will inevitably happen, which will make the smartphone experience more seamless, but that’s just part of the challenge for Facebook.

If you add to that the idea that we become even more dependent on looking at our phones while we are walking or, worse, driving (think Pokemon GO), then this latest announcement is, at best, foreshadowing.

As the WIRED article continues, tech writer Brian Barrett talked to Blair MacIntyre of Georgia Tech, who says,

“The phone has generally sucked for AR because holding it up and looking through it is tiring, awkward, inconvenient, and socially unacceptable,” says MacIntyre. Adding more of it doesn’t solve those issues. It exacerbates them. (The exception might be the social acceptability part; as MacIntyre notes, selfies were awkward until they weren’t.)

That last part is an especially interesting point. I’ll have to come back to that in another post.

My students did considerable research on exactly this kind of early infancy that technologies undergo on their road to ubiquity. In another WIRED article, even Zuckerberg admitted,

“We all know where we want this to get eventually,” said Zuckerberg in his keynote. “We want glasses, or eventually contact lenses, that look and feel normal, but that let us overlay all kinds of information and digital objects on top of the real world.”

So there you have it. Glasses are the end game, but as my students agreed, contact lenses not so much. Think about it. If you didn’t have to stick a contact lens in your eyeball, you wouldn’t. And even if you solved the problem of computing inside a wafer-thin lens, along with the myriad problems of heat and in-eye time, the idea that contact lenses could become ubiquitous is much farther away, if it ever arrives.

Student design team from Ohio State’s Collaborative Studio.

This is why I find my students’ solution so much more elegant and a far more logical trajectory. According to Barrett,

“The optimistic timeline for that sort of tech, though, stretches out to five or 10 years. In the meantime, then, an imperfect solution takes the stage.”

My students locked it down to seven years.

Finally, Zuckerberg made this statement:

“Augmented reality is going to help us mix the digital and physical in all new ways,” said Zuckerberg at F8. “And that’s going to make our physical reality better.”

Except that Zuck’s version of better and mine or yours may not be the same. Exactly what is wrong with reality anyway?

If you want to see the full-blown presentation of what my students produced, you can view it at aughumana.net.

Note: Currently, the AugHumana experience is superior on Google Chrome. If you are a Safari or Firefox purist, you may have to wait for the page to load (up to 2 minutes). We’re working on this, so just use Chrome this time. We hope to have it fixed soon.


Election lessons. Beware who you ignore.

It was election week here in America, but unless you’ve been living under a rock for the last eight months, you already know that. Not unlike the Brexit vote from earlier this year, a lot of people were genuinely surprised by the outcome. Perhaps most surprising to me is that the people who seem to be the most surprised are the people who claimed to know—for certain—that the outcome would be otherwise. Why do you suppose that is? There is a lot of finger-pointing and head-scratching going on but from what I’ve seen so far none of these so-called experts has a clue why they were wrong.

Most of them are blaming polls for their miscalculations. And it’s starting to look like their error came not in whom they polled but in whom they thought irrelevant and ignored. Many in the media are in denial that their efforts to shape the election may have even fueled the fire for the underdog. What has become of American journalism is shameful. WikiLeaks proves that ninety percent of the media was kissing up to the left, with pre-approved interviews, stories, and marching orders to “shape the narrative.” I don’t care who you were voting for; that kind of collusion is a disgrace for democracy. Call it Pravda. I don’t want to turn this blog into a political commentary, but it was amusing to watch them all wearing the stupid hat on Wednesday morning. What I do want to talk about, however, is how we look at data to reach a conclusion.

In a morning-after article from the LinkedIn network, futurist Peter Diamandis posted the topic, “Here’s what election campaign marketing will look like in 2020.” It was less about the election and more about future tech, with an occasional reference to the election and campaign processes. He has five predictions. First is the news flash that “Social media will have continued to explode. [and that] The single most important factor influencing your voting decision is your social network.” Diamandis says that “162 million people log onto Facebook at least once a month.” I agree with the first part of his statement, but what about the other 50 percent, the people who aren’t on Facebook, and those who don’t share their opinions on politics? A lot of pollsters are looking at the huge disparity between projections and actuals in the 2016 election. They are acknowledging that a lot of people simply weren’t forthcoming in pre-election polling. Those planning to vote for Trump, for example, knew that Trump was a polarizing figure, and they weren’t going to get into it with their friends on social media or even with a stranger taking a poll. Then, I’m willing to bet that a lot of the voters who put the election over the top are in the fifty percent that isn’t on social media. Just look at the demographics for social media.

Peter Diamandis is a brilliant guy, and I’m not here to pick on him. Many of his predictions are quite conceivable. Mostly he’s talking about an increase in data mining and AI getting better at learning from it, with a laser focus on the individual. If you add this together with programmable avatars, facial recognition improvements, and the Internet of Things, the future means that we are all going to be tracked with increasing levels of detail. And though our faces are probably not something we can keep secret, if it all creeps you out, remember that much of this is based on what we choose to share. Fortunately, it will take a little longer than 2020 for all of these new technologies to read our minds, so until then we still hold the cards. As long as you don’t share your most private thoughts on social media or with pollsters, you’ll keep them guessing.


Future proof.

 

There is no such thing as future-proof anything, of course, so I use the term to refer to evidence that a current idea is becoming more and more probable as something we will see in the future. The evidence I am talking about surfaced in a FastCo article this week about biohacking and the new frontier of digital implants. Biohacking has a loose definition; it can refer to using genetic material without regard to ethical procedures, to DIY biology, to pseudo-bioluminescent tattoos, or to body modification for functional enhancement (see transhumanism). Last year, my students investigated this and determined that a society willing to accept internal implants was not a near-future scenario. Nevertheless, according to FastCo author Steven Melendez,

“a survey released by Visa last year that found that 25% of Australians are ‘at least slightly interested’ in paying for purchases through a chip implanted in their bodies.”

Melendez goes on to describe a wide variety of implants already in use for medical, artistic and personal efficiency and interviews Tim Shank, president of a futurist group called TwinCities+. Shank says,

“[For] people with Android phones, I can just tap their phone with my hand, right over the chip, and it will send that information to their phone.”

Amal Graafstra’s hands [Photo: courtesy of Amal Graafstra, c/o WIRED]
The popularity of body piercings and tattoos—also once considered invasive procedures—has skyrocketed. Implantable technology, especially as it becomes more functionally relevant, could follow a similar curve.

I saw this coming some years ago when writing The Lightstream Chronicles. The story, as many of you know, takes place in the far future, where implantable technology is mundane and part of everyday life. People regulate their body chemistry, access the Lightstream (the evolved Internet), and make “calls” using fingertips embedded with Luminous Implants. These future implants talk directly to implants in the brain and other systemic body centers to make adjustments or provide information.

An ad for Luminous Implants, and the “tap” numbers for local attractions.

When the stakes are low, mistakes are beneficial. In more weighty pursuits, not so much.

 

I’m from the old school. I suppose that sentence alone makes me seem like a codger. Let’s call it the eighties. Part of the art of problem solving was to work toward a solution and get it as tight as we possibly could before we committed to implementation. It was called the design process, and today it’s called “design thinking.” So it was heresy to me when I found myself, some years ago now, in a high-tech corporation where this doctrine was ignored. I recall a top-secret, new-product meeting in which the owner and chief technology officer said, “We’re going to make some mistakes on this, so let’s hurry up and make them.” He was not speaking about iterative design, which is part and parcel of the design process; he was talking about going to market with the product and letting the users illuminate what we should fix. Of course, the product was safe and met all the legal standards, but it was far from polished. The idea was that mass consumer trial-by-fire would provide us with an exponentially higher data return than if we tested all the possible permutations in a lab at headquarters. He was, apparently, ahead of his time.

In a recent FastCo article on Facebook’s race to be the leader in AI, author Daniel Terdiman cites some of Mark Zuckerberg’s mantras: “‘Move fast and break things,’ or ‘Done is better than perfect.’” We can debate this philosophically or maybe even ethically, but it is clearly today’s standard procedure for new technologies, new science and the incessant race to be first. Here is a quote from that article:

“Artificial intelligence has become a vital part of scaling Facebook. It’s already being used to recognize the faces of your friends in photographs, and curate your newsfeed. DeepText, an engine for reading text that was unveiled last week, can understand “with near-human accuracy” the content in thousands of posts per second, in more than 20 different languages. Soon, the text will be translated into a dozen different languages, automatically. Facebook is working on recognizing your voice and identifying people inside of videos so that you can fast forward to the moment when your friend walks into view.”
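DeepText itself is proprietary, but the underlying idea the quote describes (software that learns to label text from examples) can be sketched in a few lines. The snippet below is a toy illustration of supervised text classification with invented example posts; it is not Facebook’s system and makes no claim to “near-human accuracy.”

```python
# Toy sketch of learned text understanding (not DeepText): classify short posts
# as "asking for a ride" vs. "not about rides" from a handful of invented examples.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

posts = [
    "anyone driving downtown tonight? I need a ride",
    "can someone pick me up from the airport",
    "looking for a lift to the game on saturday",
    "great dinner with friends last night",
    "just finished a new painting, feeling pretty proud",
    "the weather is beautiful today",
]
labels = [1, 1, 1, 0, 0, 0]  # 1 = asking for a ride, 0 = not

model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(posts, labels)

print(model.predict(["does anybody need a ride to the concert?"]))  # likely [1]
```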

The story goes on to say that Facebook, though it is pouring tons of money into AI, is behind the curve, having begun only three or so years ago. Aside from the fact that FB’s accomplishments seem fairly impressive (at least to me), companies like Google and Microsoft are apparently way ahead. In the case of Microsoft, the effort began more than twenty years ago.

Today, the hurry-up is accelerated by open sourcing. Wikipedia explains the benefits of open sourcing as:

“The open-source model, or collaborative development from multiple independent sources, generates an increasingly more diverse scope of design perspective than any one company is capable of developing and sustaining long term.”

The idea behind open sourcing is that the mistakes will happen even faster, along with the advancements. It is becoming the de facto approach to breakthrough technologies. If fast is the primary goal, maybe even the only goal, it is a smart strategy. Or is it a touch short-sighted? As we know, not everyone who can play with the code that a company has given them has that company’s best interests in mind. As for the best interests of society, I’m not sure those are even on the list.

To examine our motivations and the ripples that emanate from them is, of course, my mission with design fiction and speculative futures. Whether we like it or not, a by-product of technological development—aside from utopia—is human behavior. There are repercussions from the things we make and the systems that evolve from them. When your mantra is “Move fast and break things,” that’s what you’ll get. But there is certainly no time in the move-fast loop to consider the repercussions of your actions, or the unexpected consequences. Consequences will appear all by themselves.

The technologists tell us that when we reach the holy grail of AI (whatever that is), we will be better people and solve the world’s most challenging problems. But in reality, it’s not that simple. With the nuances of AI, there are potential problems, or mistakes, that could be difficult to fix; new predicaments that humans might not be able to solve and AI may not be inclined to resolve on our behalf.

In the rush to make mistakes, how grave will they be? And, who is responsible?


Design fiction. I want to believe.

 

I have blogged in the past about logical succession. When it comes to creating a realistic design fiction narrative, there needs to be a sense of believability. Coates1 calls this “plausible reasoning”: “[…] putting together what you know to create a path leading to one or several new states or conditions, at a distance in time.” In other words, for the audience to suspend their disbelief, there has to be a basic understanding of how we got here. If you depict something that is too fantastic, your audience won’t buy it, especially if you are trying to say, “This could happen.”

“When design fictions are conceivable and realistically executed they carry a greater potential for making an impact and focusing discussion and debate around these future scenarios.”2

In my design futures collaborative studio, I ask students to do a rigorous investigation of future technologies, the ones that are on the bleeding edge. Then I want them to ask, “What if?” It is easier said than done, particularly because of technological convergence, the way technologies merge with other technologies to form heretofore unimagined opportunities.

There was an article this week in WIRED concerning a company called Magic Leap. They are in the MR business, mixed reality as opposed to virtual reality. With MR, the virtual imagery happens within the space you’re in—in front of your eyes—rather than in an entirely virtual space. The demo from WIRED’s site is pretty convincing. The future of MR and VR, for me, is easy to predict. Will it get more realistic? Yes. Will it get cheaper, smaller, and ubiquitous? Yes. At this point, a prediction like this is entirely logical. Twenty-five years ago it would not have been as easy to imagine.

As the Wired article states,

“[…]the arrival of mass-market VR wasn’t imminent.[…]Twenty-five years later a most unlikely savior emerged—the smartphone! Its runaway global success drove the quality of tiny hi-res screens way up and their cost way down. Gyroscopes and motion sensors embedded in phones could be borrowed by VR displays to track head, hand, and body positions for pennies. And the processing power of a modern phone’s chip was equal to an old supercomputer, streaming movies on the tiny screen with ease.”
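As an aside, the sensor “borrowing” described above comes down to dead reckoning: integrate the gyroscope’s angular velocity over time to estimate which way your head is pointing. Here is a stripped-down sketch of that idea with made-up readings; real VR tracking fuses gyro data with accelerometer and magnetometer readings to correct for drift.

```python
# Minimal sketch of gyroscope-based head tracking: integrate angular velocity
# (degrees per second) over each time step to estimate yaw (where you're looking).
# The readings are invented; real systems fuse several sensors to fight drift.

samples = [12.0, 15.5, 9.0, -3.0, -20.0, -14.5]  # yaw rate in deg/s
dt = 0.02                                        # 50 Hz sample interval, in seconds

yaw = 0.0
for rate in samples:
    yaw += rate * dt  # simple Euler integration
print(f"estimated head yaw after {len(samples) * dt:.2f} s: {yaw:+.2f} degrees")
```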

To have predicted that VR would be where it is today, with billions of dollars pouring into fledgling technologies and realistic, utterly convincing demonstrations, would have been illogical. It would have been like throwing a magnet into a bucket of nails, rolling it around, and guessing which nails would end up coming out attached.

What is my point? I think it is important to remind ourselves that things will move blindingly fast, particularly when companies like Google and Facebook are throwing money at them. Then the advancement of one only adds to the possibilities of the next iteration, possibly in ways that no one can predict. As VR or MR merges with biotech or artificial reality, or just about anything else you can imagine, the possibilities are endless.

Unpredictable technology makes me uncomfortable. Next week I’ll tell you why.

 

1. Coates, J.F., 2010. The future of foresight—A US perspective. Technological Forecasting & Social Change 77, 1428–1437.
2. E. Scott Denison. “Timed-release Design Fiction: A Digital Online Methodology to Provoke Reflection on Our Socio-Technological Future.” Edited by Michal Derda Nowakowski. ISBN: 978-1-84888-427-4. Interdisciplinary.net.

A facebook of a different color.

The tech site Ars Technica recently ran an article on the proliferation of a little-known app called Facewatch. According to the article’s writer, Sebastian Anthony, “Facewatch is a system that lets retailers, publicans, and restaurateurs easily share private CCTV footage with the police and other Facewatch users. In theory, Facewatch lets you easily report shoplifters to the police, and to share the faces of generally unpleasant clients, drunks, etc. with other Facewatch users.” The idea is that retailers or officials can look out for these folks and either keep an eye on them or just ask them to leave. The system, in use in the UK, appears to have a high rate of success.

 

The story continues. Of course, all technologies eventually converge, so now you don’t have to “keep an eye out” for ne’er-do-wells; your CCTV can do it for you. NeoFace from NEC works with the Facewatch list to do the scouting for you. According to NEC’s website: “NEC’s NeoFace Watch solution is specifically designed to integrate with existing surveillance systems by extracting faces in real time… and matching against a watch list of individuals.” In this case, it would be the Facewatch database. Ars’ Anthony makes this connection: “In the film Minority Report, people are rounded up by the Precrime police agency before they actually commit the crime…with Facewatch, and you pretty much have the same thing: a system that automatically tars people with a criminal brush, irrespective of dozens of important variables.”

Anthony points out that,

“Facewatch lets you share ‘subjects of interest’ with other Facewatch users even if they haven’t been convicted. If you look at the shop owner in a funny way, or ask for the service charge to be removed from your bill, you might find yourself added to the ‘subject of interest’ list.”

The odds of an innocent person being added to the watch list are quite good. Malicious behavior aside, you could be logged as you wander past a government protest, forget your PIN too many times at the ATM, or simply look too creepy in your Ray-Bans and hoodie.
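To see why “matching against a watch list” is now trivial to stand up, here is a minimal sketch of the generic technique (encode each face as a vector, then compare new faces against the stored vectors). It assumes the open-source face_recognition library purely for illustration; it is not NEC’s or Facewatch’s actual code, and the image paths are hypothetical.

```python
# Generic face-to-watch-list matching (illustrative only; not NeoFace or Facewatch).
# Assumes the open-source `face_recognition` package; image paths are hypothetical.
import face_recognition

# Build a "watch list" of face encodings from known images.
watchlist_paths = ["subject_of_interest_1.jpg", "subject_of_interest_2.jpg"]
watchlist = [
    face_recognition.face_encodings(face_recognition.load_image_file(path))[0]
    for path in watchlist_paths
]

# A frame grabbed from CCTV footage.
frame = face_recognition.load_image_file("cctv_frame.jpg")

for encoding in face_recognition.face_encodings(frame):
    matches = face_recognition.compare_faces(watchlist, encoding, tolerance=0.6)
    if any(matches):
        print("Possible watch-list match found in this frame")
```

A handful of lines like these, pointed at a shared database of “subjects of interest,” is all the convergence the story describes.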

The story underscores a couple of my past rants. First, we don’t make laws to protect against things that are impossible, so when the impossible happens, we shouldn’t be surprised that there isn’t a law to protect against it.1 It is another red flag that technology is moving too fast, and as it converges with other technologies it becomes radically unpredictable. Second, technology moves faster than politics, faster than policy, and often faster than ethics.2

There are a host of personal apps, many of which are available for our iPhones or Androids, that sit on the precarious line between legal and illegal, curious and invasive. And there are more to come.

 

1. Quoting Selinger from Wood, David. “The Naked Future — A World That Anticipates Your Every Move.” YouTube, 15 Dec. 2013. Web. 13 Mar. 2014.
2. Quoting Richards from Farivar, Cyrus. “DOJ Calls for Drone Privacy Policy 7 Years after FBI’s First Drone Launched.” Ars Technica, September 27, 2013. Accessed March 13, 2014. http://arstechnica.com/tech-policy/2013/09/doj-calls-for-drone-privacy-policy-7-years-after-fbis-first-drone-launched/.

The foreseeable future.

From my perspective, the two most disruptive technologies of the next ten years will be a couple of acronyms: VR and AI. Virtual reality will transform the way people learn and the way they divert themselves. It will play an increasing role in entertainment and gaming, to the extent that many will experience some confusion and conflict with actual reality. Make sure you see last week’s blog for more on this. Between VR and AI, so much is happening that these two could easily crowd out a host of other topics on this site next year. Today, I’ll begin the discussion with AI, but both technologies fall into my broader topic of the foreseeable future.

One of my favorite quotes of 2014 (seems like ancient history now) was from an article in Ars Technica by Cyrus Farivar.1 It was a drone story about the FBI’s proliferation of drones, to the tune of $5 million, that occurred gradually over a period of 10 years, almost unnoticed. Farivar cites a striking quote from Neil Richards, a law professor at Washington University in St. Louis: “We don’t write laws to protect against impossible things, so when the impossible becomes possible, we shouldn’t be surprised that the law doesn’t protect against it…” I love that quote, because we are continually surprised that we did not anticipate one thing or the other. Much of this surprise, I believe, comes from experts who tell us that this or that won’t happen in the foreseeable future. One of these experts, Miles Brundage, a Ph.D. student at Arizona State, was quoted recently in an article in WIRED. About AI that could surpass human intelligence, Brundage said,

“At the point where we are today, no AI system is at all capable of taking over the world—and won’t be for the foreseeable future.”

There are two things that strike me about these kinds of statements. First is the obvious fact that no one can see the future in the first place, and second, the clear implication that it will happen, just not yet. It also suggests that we shouldn’t be concerned; it’s too far away.

The article was about Elon Musk open-sourcing something called OpenAI. According to Nathaniel Wood, reporting for WIRED, OpenAI is deep-learning code that Musk and his investors want to share with the world, for free. This news comes on the heels of Google’s open-sourcing of their AI code, called TensorFlow, immediately followed by a Facebook announcement that they would be sharing their Big Sur server hardware. As the article points out, this is not all magnanimous altruism. By opening the door to formerly proprietary software or hardware, folks like Musk and companies like Google and Facebook stand to gain. They gain by recruiting talent and by exponentially increasing development through free outsourcing. A thousand people working with your code are much better than the hundreds inside your building. Here are two very important factors that folks like Brundage don’t take into consideration. First, these people are in a race, and through open-sourcing their stuff they are enlisting others to help them in that race. Second, there is that term, exponential. I use it most often when I refer to Kurzweil’s Law of Accelerating Returns. It is exactly these kinds of developments that make his prediction so believable. So maybe the foreseeable future is not that far away after all.
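The arithmetic behind that word “exponential” is what collapses the foreseeable future. As a rough worked example (the two-year doubling period is an assumption for the sake of arithmetic, not a measured constant), a capability that doubles every two years is 32 times more powerful after a decade:

```python
# Rough illustration of accelerating returns; the 2-year doubling period is an
# assumption for the sake of arithmetic, not a measured constant.
capability = 1.0
doubling_period_years = 2

for year in range(0, 11, doubling_period_years):
    print(f"year {year:2d}: {capability:5.1f}x today's capability")
    capability *= 2
```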

All this being said, the future is not foreseeable, and the exponential growth in areas like VR and AI will continue. The WIRED article continues with this commentary on AI (which we all know):

“Deep learning relies on what are called neural networks, vast networks of software and hardware that approximate the web of neurons in the human brain. Feed enough photos of a cat into a neural net, and it can learn to recognize a cat. Feed it enough human dialogue, and it can learn to carry on a conversation. Feed it enough data on what cars encounter while driving down the road and how drivers react, and it can learn to drive.”
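The “feed enough photos of a cat” description maps directly onto a short training loop. Below is a deliberately tiny sketch of that pattern in PyTorch, using random tensors in place of real photos; it illustrates the loop the quote describes and is not anyone’s production model.

```python
# Tiny sketch of the "feed it enough examples" loop described in the quote.
# Random tensors stand in for cat / not-cat photos; nothing here is a real dataset.
import torch
import torch.nn as nn

model = nn.Sequential(           # a very small neural network
    nn.Linear(32 * 32 * 3, 64),  # fake 32x32 RGB "photos" flattened to vectors
    nn.ReLU(),
    nn.Linear(64, 2),            # two classes: cat / not-cat
)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

images = torch.randn(128, 32 * 32 * 3)   # stand-in "photos"
labels = torch.randint(0, 2, (128,))     # stand-in labels

for step in range(100):                  # "feed enough photos..."
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()

print(f"final training loss: {loss.item():.3f}")
```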

Despite their benevolence, this is why Musk and Facebook and Google are in the race. Musk is quick to add that while his motives have an air of transparency to them, it is also true that the more people who have access to deep-learning software, the less likely it is that one guy will have a monopoly on it.

Musk is a smart guy. He knows that AI could be a blessing or a curse. Open sourcing is his hedge. It could be a good thing… for the foreseeable future.

 

1. Farivar, Cyrus. “DOJ Calls for Drone Privacy Policy 7 Years after FBI’s First Drone Launched.” Ars Technica. September 27, 2013. Accessed March 13, 2014. http://arstechnica.com/tech-policy/2013/09/doj-calls-for-drone-privacy-policy-7-years-after-fbis-first-drone-launched/.

Privacy is dead. Is the cyberpunk future already here?

This week, a brief thought to provoke thought. Surprisingly, it has been 30 years since William Gibson released his groundbreaking work Neuromancer, which ushered in a decade of artistry inspired by the genre known as cyberpunk. Just a few days ago, Paste Magazine ran an article, “Somebody’s Watching Me; Cyberpunk 30 Years On, and the Warnings We Didn’t Heed.” Therein, writer Brian Chidester delineates the fascinating influence of Gibson’s work on the music of the day, as well as the ripples it continues to send into the present.

With my futurist, sci-fi, cyberpunk leanings, I was caught up in the observation of how much of Gibson’s “…near-future where computer technology was woven into our DNA—where a virtual data sphere played the dominate role in the human interface” is already here—and we didn’t notice—or, as Chidester notes, “…quietly came to pass.”

The music connection is deep and profound, but it is also intertwined with the events of the day and the decades to follow. From DARPA’s creation of the internet, to post-9/11 paranoia, the Patriot Act, WikiLeaks, Edward Snowden, Google, Twitter, and Facebook, to the ubiquitous storage of cookies and individual user preferences (most of which are freely—even blithely—given), we “…have, in essence, created business models that are a dream come true for the CIAs, FBIs and NSAs of the world.”

Yet perhaps more chilling than where we are, is how we got here.

“Google, Twitter and Facebook, lauded as broadening the scope of human potential, in fact, built algorithms to drive us to predictable results. Cookies store information on individual user preferences. They have, in essence, created business models that are a dream come true for the CIAs, FBIs and NSAs of the world.

Facebook has nearly a billion users, with tons of personal data on each one, proving that plenty of individuals are willing to provide private information to get something that is free and fun. Simply put: We’ve allowed ourselves to be smitten. The computer is now miniaturized, or, as Bruce Sterling predicted, ‘adorable.’ Christopher Shin, the engineer of Cellebrite, a device that aids the U.S. government in collecting information from cellular users, contends that the iPhone holds more personal information than any other device on the market.”

So if we can go from cyberpunk science fiction to present-day reality in 30 years, given the exponential growth of technology, where will we be smitten next: genetic engineering, transhumanism, synthetic biology?

Chidester concludes:

“If we stop to ask how we got here, we may look back and find the signs embedded in cyberpunk literature of 20-30 years prior. We may then wonder how we might better have heeded its warnings. But it is too late. Privacy, under the current paradigm, is essentially dead.”

What other cherished possession will be the next to fall? Or have they all already fallen?
