
An AI as President?

 

Back on May 19th, before I went on holiday, I promised to comment on an article that appeared that week advocating that we would be better off with artificial intelligence (AI) as President of the United States. Joshua Davis authored the piece, Hear me out: Let's Elect An AI As President, for the business section of WIRED online. Let's start out with a few quotes.

“An artificially intelligent president could be trained to maximize happiness for the most people without infringing on civil liberties.”

“Within a decade, tens of thousands of people will entrust their daily commute—and their safety—to an algorithm, and they’ll do it happily…The increase in human productivity and happiness will be enormous.”

Let’s start with the word happiness. What is it, anyway? I’ve seen it in several discourses about the future: that somehow we have to start focusing on human happiness above all things. But what makes me happy and what makes you happy may very well be different things. Then there is the frightening idea that it is the job of government to make us happy! There are a lot of folks out there who believe the government should give us a guaranteed income, pay for our healthcare, and now, apparently, make us happy as well. If you haven’t noticed from my previous blogs, I am not a progressive. If you believe that government should undertake the happiness challenge, you had better hope that its idea of happiness coincides with your own. Gerd Leonhard, a futurist whose work I respect, says that there are two types of happiness: the first is hedonic (pleasure), which tends to be temporary; the other is eudaimonic happiness, which he defines as human flourishing.1 I prefer the latter, as it is likely to be more meaningful. Meaning is rather crucial to well-being and purpose in life. I believe that we should be responsible for our own happiness. God help us if we leave it up to a machine.

This brings me to my next issue with this insane idea. Davis suggests that simply by not driving, we will see an enormous increase in human productivity and happiness. According to the website Overflow Data,

“Of the 139,786,639 working individuals in the US, 7,000,722, or about 5.01%, use public transit to get to work according to the 2013 American Communities Survey.”

Are those 7 million working individuals who don’t drive happier and more productive? The survey should have asked, but I’m betting the answer is no. Davis also assumes that everyone will be able to afford an autonomous vehicle. Maybe providing every American with an autonomous vehicle is also the job of the government.

Where I agree with Davis is that we will probably abdicate our daily commute to an algorithm and do it happily. Maybe this is the most disturbing part of his argument. As I am fond of saying, we are sponges for technology, and we often adopt new technology without so much as a thought toward the broader ramifications of what it means to our humanity.

There are sober people out there advocating that we must start to abdicate our decision-making to algorithms because we have too many decisions to make. They are concerned that the current state of affairs is simply too painful for humankind. If you dig into the rationale that these experts are using, many of them are motivated by commerce. Already Google and Facebook and the algorithms of a dozen different apps are telling you what you should buy, where you should eat, who you should “friend” and, in some cases, what you should think. They give you news (real or fake), and they tell you this is what will make you happy. Is it working? Agendas are everywhere, but very few of them have you in the center.

As part of his rationale, Davis cites AI’s proven ability to beat the world’s Go champions over and over and over again, and to find melanomas better than board-certified dermatologists.

“It won’t be long before an AI is sophisticated enough to implement a core set of beliefs in ways that reflect changes in the world. In other words, the time is coming when AIs will have better judgment than most politicians.”

That seems like grounds to elect one as President, right? In fact, it is just another way for us to take our eye off the ball, to subordinate our autonomy to more powerful forces in the belief that technology will save us and make us happier.

Back to my previous point, that’s what is so frightening. It is precisely the kind of argument that people buy into. What if the new AI President decides that we will all be happier if we’re sedated, and then, using executive powers, makes it law? Forget checks and balances; who else in government could win an argument against an all-knowing AI? How much power will the new AI President give to other algorithms, bots, and machines?

If we are willing to give up the process of purposeful work to make a living wage in exchange for a guaranteed income, to subordinate our decision-making to have “less to think about,” to abandon reality for a “good enough” simulation, and believe that this new AI will be free of the special interests who think they control it, then get ready for the future.

1. Leonhard, Gerd. Technology vs. Humanity: The Coming Clash between Man and Machine. p112, United Kingdom: Fast Future, 2016. Print.


Disruption. Part 1

 

We often associate the term disruption with a snag in our phone, internet or other infrastructure service, but there is a larger sense of the expression. Technological disruption refers to the phenomenon that occurs when innovation, “…significantly alters the way that businesses operate. A disruptive technology may force companies to alter the way that they approach their business, risk losing market share or risk becoming irrelevant.”1

Some track the idea as far back as Karl Marx, who influenced economist Joseph Schumpeter to coin the term “creative destruction” in 1942.2 Schumpeter described it as the “process of industrial mutation that incessantly revolutionizes the economic structure from within, incessantly destroying the old one, incessantly creating a new one.” But it was Clayton M. Christensen, a Harvard Business School professor, who described its current framework: “…a disruptive technology is a new emerging technology that unexpectedly displaces an established one.”3

OK, so much for the history lesson. How does this affect us? Historical examples of technological disruption go back to the railroads and the mass-produced automobile, technologies that changed the world. Today we can point to the Internet as possibly this century’s most transformative technology to date. But we can’t ignore the smartphone, barely ten years old, which has brought together a host of converging technologies, substantially eliminating the need for the calculator, the dictaphone, landlines, the GPS box you used to put on your dashboard, still and video cameras, and possibly your privacy. With the proliferation of apps on the smartphone platform, there are hundreds if not thousands of other “services” that now do work we had previously done by other means. But hold on to your hat. Technological disruption is just getting started. In the next round, we will see an increasingly pervasive Internet of Things (IoT), advanced robotics, exponential growth in artificial intelligence (AI) and machine learning, ubiquitous augmented reality (AR) and virtual reality (VR), blockchain systems, precise genetic engineering, and advanced renewable energy systems. Some of these, such as blockchain systems, could have cataclysmic effects on business: widespread adoption of blockchain systems that enable digital money would eliminate the need for banks, credit card companies, and currency of all forms. How’s that for disruptive? Other innovations will simply continue to transform us and our behaviors. Over the next few weeks, I will discuss some of these potential disruptions and their unique characteristics.

Do you have any you would like to add?

1 http://www.investopedia.com/terms/d/disruptive-technology.asp#ixzz4ZKwSDIbm

2 http://www.investopedia.com/terms/c/creativedestruction.asp

3 http://www.intelligenthq.com/technology/12-disruptive-technologies/

See also: Disruptive technologies: Catching the wave, Journal of Product Innovation Management, Volume 13, Issue 1, 1996, Pages 75-76, ISSN 0737-6782, http://dx.doi.org/10.1016/0737-6782(96)81091-5.
(http://www.sciencedirect.com/science/article/pii/0737678296810915)


Step inside The Lightstream Chronicles

Some time ago I promised to step inside one of the scenes from The Lightstream Chronicles. Today, to commemorate the debut of Season 5, which goes live today, I’m going to deliver on that promise, partially.

 

Background

The notion started after giving my students a tour of the Advanced Computing Center for Arts and Design (ACCAD)’s motion-capture lab. We were discussing VR, and sadly, despite all the recent hype, very few of us—including me—had ever experienced state-of-the-art virtual reality. During that tour, it occurred to me that through the past five years of continuous work on my graphic novel, a story built entirely in CG, I have a trove of scenes and scenarios that I could, in effect, step into. Of course, it is not that simple, as I have discovered this summer working with ACCAD’s animation specialist Vita Berezina-Blackburn. It turns out that my extreme high-resolution images are not ideally compatible with the Oculus pipeline.

The idea was, at first, a curiosity for me, but it became quickly apparent that there was another level of synergy with my work in guerrilla futures, a flavor of design fiction.

Design fiction, my focus of study, centers on the idea that, through prototypes and future narratives, we can engage people in thinking about possible futures, get them to discuss and debate those futures, and instill the idea of individual agency in shaping them. Unfortunately, too much design fiction ends up in the theoretical realm, within the confines of the art gallery, academic conferences or workshops. The instances are few where the general public receives a future experience to contemplate and consider. Indeed, it has been something of a lament for me that my work in future fiction through the graphic novel can be experienced as pure entertainment without acknowledging the deeper issues of its socio-techno themes. At the core of experiential design fiction, introduced by Stuart Candy (2010), is the notion that future fictions can be inserted into everyday life whether the recipient has asked for them or not. The technique is one method of making the future real enough for us to ask whether this is the future we want and, if not, what we might do about it now.

Through my recent meanderings with VR, I see that this idea of immersive futures could be an incredibly powerful method of delivering these experiences.

The scene from Season 1 that I selected for this test.

 

About the video
This video is a test. We had no idea what we would get after I stripped down a scene from Season 1. Then we had a couple of weeks of trial and error re-making my files to be compatible with the system. One of the things that separates The Lightstream Chronicles from your average graphic novel/webcomic is the fact that you can zoom in 5x to inspect every detail, so it is not uncommon, for example, for me to have more than two hundred 4K textures in any given scene. It also allows me, as the “director,” to change it up and dolly in or out to focus on a character or object within a scene without a resulting loss in resolution. To me, one of the drawbacks of many video games is getting in close to inspect a resident artifact: the closer you get, the more things “break up” into pixels. In a real-time environment, however, you have to make concessions, at least for now, so that your textures render faster.
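For a sense of why real-time rendering forces those concessions, here is a rough back-of-the-envelope memory calculation, sketched in Python. The uncompressed-RGBA assumption and the 1K downscale target are mine for illustration, not details of the actual Oculus pipeline:

```python
def texture_mem_gib(count, size_px, channels=4, bytes_per_channel=1):
    """Uncompressed GPU memory, in GiB, for `count` square textures
    of size_px x size_px with the given channel layout."""
    return count * size_px * size_px * channels * bytes_per_channel / 1024**3

# Two hundred 4K (4096 px) RGBA textures, as in a typical scene:
full_res = texture_mem_gib(200, 4096)
# The same set downscaled to 1K for real-time use:
down_res = texture_mem_gib(200, 1024)

print(f"4K set: {full_res:.2f} GiB, 1K set: {down_res:.2f} GiB")
# 4K set: 12.50 GiB, 1K set: 0.78 GiB
```

Even before geometry and lighting, the full-resolution texture set alone would swamp a headset-class GPU, which is why we applied only a handful of essential textures for this test.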

For this test, we didn’t apply all two hundred textures, just some essentials: for example, the cordial glasses, the liquid in the bottle, and the array of floating transparent files that hover over Techman’s desk. We did apply the key texture that defines the environment, the rusty, perforated metal wall that encloses Techman’s “safe-room” and protects it from eavesdropping. There are lots of other little glitches beyond unassigned textures, such as intersecting polygons and dozens of needed lighting tweaks, that make this far from prime time.

In the average VR game, you move your controller forward through space while you are either seated or standing; either way, in most cases you are stationary. What distinguishes this from most VR experiences is that I can physically walk through the scene. In this test, we were in the ACCAD motion capture lab.

Wearing the Oculus in the MoCap lab while Lakshika manages the tether.

I’m sure you have seen pictures of this sort of thing before, where performers strap on sensors to “capture their motions,” which are then translated to virtual CG characters. This was the space in which I was working. It has boundaries, however, so I had to obtain those boundaries, in scale with my scene, to be sure that the room and the characters fit within the area of the lab. Dozens of tracking devices around the lab read sensors on the Oculus headset and ensure that once I strap it on, I can move freely within the limits of the physical space while the system relates my movements to the context of the virtual scene.
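The bookkeeping involved is simple in principle: check that the scene footprint fits the capture area, then map each tracked headset position into scene coordinates. A minimal Python sketch; the 8 × 8 metre capture area, the scene origin, and the coordinate convention are illustrative assumptions, not ACCAD’s actual numbers:

```python
def fits_in_lab(scene_size_m, lab_size_m):
    """Check that each horizontal dimension of the scene footprint
    fits within the physical capture area (both in metres)."""
    return all(s <= l for s, l in zip(scene_size_m, lab_size_m))

def lab_to_scene(tracked_pos_m, scene_origin_m, scale=1.0):
    """Map a tracked headset position (lab coordinates, metres)
    into the virtual scene's coordinate system."""
    return tuple(o + p * scale for o, p in zip(scene_origin_m, tracked_pos_m))

# A 6 x 4.5 m room footprint vs. an assumed 8 x 8 m capture area:
print(fits_in_lab((6.0, 4.5), (8.0, 8.0)))              # True: the room fits
# One metre forward, two to the side in the lab, relative to the scene origin:
print(lab_to_scene((1.0, 2.0, 0.0), (10.0, 5.0, 0.0)))  # (11.0, 7.0, 0.0)
```

Because I build my sets to scale, this mapping is all that is needed for a physical step in the lab to equal a one-metre step in the scene.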

Next week I’ll be going back into the lab with a new scene to take a look at Kristin Broulliard and Keiji in their exchange from episode 97 (page) of Season 3.

Next time.

Respond, reply, comment. Enjoy.

 


Adapt or plan? Where do we go from here?

I just returned from Nottingham, UK, where I presented a paper for Cumulus 16, In This Place. The paper was entitled Design Fiction: A Countermeasure For Technology Surprise. An Undergraduate Proposal. My argument hinged on the idea that students need to start thinking about our technosocial future. Design fiction is my area of research, but if you were inclined to do so, you could probably choose a variant methodology to provoke discussion and debate about the future of design, what designers do, and their responsibility as creators of culture. In January, I had the opportunity to take an initial pass at such a class. The experiment was a different twist on a collaborative studio, where students from the three traditional design specialties worked together on a defined problem. The emphasis was on collaboration rather than the outcome. Some students embraced this while others pushed back. The push-back came from students fixated on building a portfolio of “things” or “spaces” or “visual communications” so that they could impress prospective employers. I can’t blame them for that. As educators, we have hammered home the old paradigm of getting a job at Apple or Google, or (fill in the blank), as the ultimate goal of undergraduate education. But the paradigm is changing, and the model of the designer as a maker of “stuff” is wearing thin.

A great little polemic from Cameron Tonkinwise recently appeared that helped to articulate this issue. He points the finger at interaction design scholars and asks why they are not writing about or critiquing “the current developments in the world of tech.” He wonders whether anyone is paying attention. As designers and computer scientists, we are feeding a pipeline of more apps with minimal viability, with seemingly no regard for the consequences for social systems and (one of my personal favorites) the behaviors we engender through our designs.

I tell my students that it is important to think about the future. The usual response is, “We do!” When I drill deeper, I find that their thoughts revolve around getting a job, making a living, finding a home, and a partner. They rarely include global warming, economic upheavals, feeding the world, natural disasters, etc. Why? Because they view these issues as beyond their control. We do not choose these things; they happen to us. Nevertheless, these are precisely the predicaments that need designers. I would argue these concerns are far more important than another app to count my calories or select the location for my next sandwich.

There is a host of others like Tonkinwise who see that design needs to refocus, but often it seems there is a greater number who blindly plod forward, unaware of the futures they are creating. I’m not talking about refocusing designers to be better at business or programming languages; I’m talking about making designers more responsible for what they design. And like Tonkinwise, I agree that it needs to start with design educators.


The nature of the unpredictable.

 

Following up on last week’s post, I confessed some concern about technologies that progress too quickly and combine unpredictably.

Stewart Brand introduced the 1968 Whole Earth Catalog with, “We are as gods and might as well get good at it.”1 Thirty-two years later, he wrote that new technologies such as computers, biotechnology and nanotechnology are self-accelerating, and that they differ from older, “stable, predictable and reliable” technologies such as television and the automobile. Brand states that new technologies “…create conditions that are unstable, unpredictable and unreliable…. We can understand natural biology, subtle as it is because it holds still. But how will we ever be able to understand quantum computing or nanotechnology if its subtlety keeps accelerating away from us?”2 If we combine Brand’s concern with Kurzweil’s Law of Accelerating Returns, and the evidence does support exponential acceleration, will the result be, as Brand suggests, unpredictable?

Last week I discussed an article from WIRED Magazine on the VR/MR company Magic Leap. The author writes,

“Even if you’ve never tried virtual reality, you probably possess a vivid expectation of what it will be like. It’s the Matrix, a reality of such convincing verisimilitude that you can’t tell if it’s fake. It will be the Metaverse in Neal Stephenson’s rollicking 1992 novel, Snow Crash, an urban reality so enticing that some people never leave it.”

And it will be. It is, as I said last week, entirely logical to expect it.

We race toward these technologies with visions of mind-blowing experiences or life-changing cures, and usually, we imagine only the upside. We all too often forget the human factor. Let’s look at some other inevitable technological developments.
• Affordable DNA testing will tell you your risk of inheriting a disease or debilitating condition.
• You will be able to ingest a pill that tells your doctor (or you, in case you forgot) that you took your medicine.
• Soon we will have life-like robotic companions.
• Virtual reality will be affordable, amazingly real, and completely user-friendly.

These are simple scenarios, because these developments will likely have aspects that make them even more impressive, more accessible and more profoundly useful. And like most technological developments, they will also become mundane and expected. But along with them comes the possibility of a whole host of unintended consequences. Here are a few.
• The government’s universal healthcare requires a DNA test before you qualify.
• It monitors whether you’ve taken your medication and issues a fine if you don’t, even if you don’t want your medicine.
• A robotic, life-like companion can provide support and encouragement, but it could also be your outlet for violent behavior or abuse.
• The virtual world is so captivating and pleasurable that you don’t want to leave, or it gets to the point where it is addicting.

It seems as though whenever we involve human nature, we set ourselves up for unintended consequences. Perhaps it is not the nature of technology to be unpredictable; it is us.

1. Brand, Stewart. “WE ARE AS GODS.” The Whole Earth Catalog, September 1968, 1-58. Accessed May 04, 2015. http://www.wholeearth.com/issue/1010/article/195/we.are.as.gods.
2. Brand, Stewart. “Is Technology Moving Too Fast? Self-Accelerating Technologies-Computers That Make Faster Computers, For Example-May Have a Destabilizing Effect on Society.” TIME, 2000.

Design fiction. I want to believe.

 

I have blogged in the past about logical succession. When it comes to creating realistic design fiction narrative, there needs to be a sense of believability. Coates1 calls this “plausible reasoning”: “[…] putting together what you know to create a path leading to one or several new states or conditions, at a distance in time.” In other words, for the audience to suspend their disbelief, there has to be a basic understanding of how we got here. If you depict something that is too fantastic, your audience won’t buy it, especially if you are trying to say, “This could happen.”

“When design fictions are conceivable and realistically executed they carry a greater potential for making an impact and focusing discussion and debate around these future scenarios.”2

In my design futures collaborative studio, I ask students to do a rigorous investigation of future technologies, the ones that are on the bleeding edge. Then I want them to ask, “What if?” It is easier said than done, particularly because of technological convergence: the way technologies merge with other technologies to form heretofore unimagined opportunities.

There was an article this week in WIRED concerning a company called Magic Leap. They are in the MR business: mixed reality, as opposed to virtual reality. With MR, the virtual imagery happens within the space you’re in, in front of your eyes, rather than in an entirely virtual space. The demo from WIRED’s site is pretty convincing. The future of MR and VR, for me, is easy to predict. Will it get more realistic? Yes. Will it get cheaper, smaller, and ubiquitous? Yes. At this point, a prediction like this is entirely logical. Twenty-five years ago it would not have been as easy to imagine.

As the Wired article states,

“[…]the arrival of mass-market VR wasn’t imminent.[…]Twenty-five years later a most unlikely savior emerged—the smartphone! Its runaway global success drove the quality of tiny hi-res screens way up and their cost way down. Gyroscopes and motion sensors embedded in phones could be borrowed by VR displays to track head, hand, and body positions for pennies. And the processing power of a modern phone’s chip was equal to an old supercomputer, streaming movies on the tiny screen with ease.”

To have predicted that VR would be where it is today, with billions of dollars pouring into fledgling technologies and realistic, utterly convincing demonstrations, would have been illogical. It would have been like throwing a magnet into a bucket of nails, rolling it around, and guessing which nails would come out attached.

What is my point? I think it is important to remind ourselves that things will move blindingly fast, particularly when companies like Google and Facebook are throwing money at them. Then the advancement of one only adds to the possibilities of the next iteration, possibly in ways that no one can predict. As VR or MR merges with biotech or artificial intelligence, or just about anything else you can imagine, the possibilities are endless.

Unpredictable technology makes me uncomfortable. Next week I’ll tell you why.

 

1. Coates, J.F., 2010. The future of foresight—A US perspective. Technological Forecasting & Social Change 77, 1428–1437.
2. Denison, E. Scott. “Timed-release Design Fiction: A Digital Online Methodology to Provoke Reflection on our Socio-Technological Future.” Edited by Michal Derda Nowakowski. ISBN: 978-1-84888-427-4. Interdisciplinary.net.

Powerful infant.

In previous blogs (such as this one), I have discussed the subject of virtual reality. Yesterday, I tried it. The motivation for my visit to the Advanced Computing Center for the Arts and Design (ACCAD), Ohio State’s cutting-edge technology and arts center, was a field trip for my junior collaborative studio design students. Their project this semester is to design a future system that uses emerging technologies, and it is not hard to imagine that in the near future VR will be commonplace. We stepped inside a large, empty performance stage rigged with a dozen motion-capture cameras that could track your movements throughout virtual space. We looked at an experimental animation in which we could stand amidst the characters and another work-in-progress that allowed us to step inside a painting. It wasn’t my first time in a Google Cardboard device, where I could look around at a 360-degree world (sensed by my phone’s gyroscope), but on an empty stage where you could walk amongst virtual characters, the experience took on a new dimension—literally. I found myself concerned about bumping into things that weren’t there and even getting a bit dizzy. (I did not let on in front of my students.)

I immediately saw an application for The Lightstream Chronicles and realized that I could load up one of my scenes from the graphic novel, bring it over to ACCAD’s mocap studio and step into this virtual world that I have created. I build all of my scenes (including architecture) to scale, furnish the rooms and interiors and provide for full 360º viewing. Building sets this way allows me to revisit them at any time, follow my characters around or move the camera to get a better angle without having to add walls that I might not have anticipated using. After the demo, I was pretty excited. It became apparent that this technology will enable me to see what my characters see, and stand beside them. It’s a bit mind-blowing. Now the question becomes which scene to use. Any ideas?

Clearly VR is in its infancy, but it is a very powerful infant. The future seems exciting, and I can see why people get caught up in its promise. Of course, I have to be the one to wonder what this powerful infant will grow up to be.


Logical succession, the final installment.

For the past couple of weeks, I have been discussing the idea posited by Ray Kurzweil, that we will have linked our neocortex to the Cloud by 2030. That’s less than 15 years, so I have been asking how that could come to pass with so many technological obstacles in the way. When you make a prediction of that sort, I believe you need a bit more than faith in the exponential curve of “accelerating returns.”

This week I’m not going to take issue with the enormous leap forward in nanobot technology required to accomplish such a feat. Nor am I going to question the vastly complicated tasks of connecting to the neocortex and extracting anything coherent, to say nothing of assembling memories and consciousness and, in turn, beaming them to the Cloud. Instead, I’m going to pose the question, “Why would we want to do this in the first place?”

According to Kurzweil, in a talk last year at Singularity University,

“We’re going to be funnier. We’re going to be sexier. We’re going to be better at expressing loving sentiment…” 1

Another brilliant futurist, and friend of Ray, Peter Diamandis, includes these additional benefits:

• Brain to Brain Communication – aka Telepathy
• Instant Knowledge – download anything, complex math, how to fly a plane, or speak another language
• Access More Powerful Computing – through the Cloud
• Tap Into Any Virtual World – no visor, no controls. Your neocortex thinks you are there.
• And more, including an extended immune system, expandable and searchable memories, and “higher-order existence.”2

As Kurzweil explains,

“So as we evolve, we become closer to God. Evolution is a spiritual process. There is beauty and love and creativity and intelligence in the world — it all comes from the neocortex. So we’re going to expand the brain’s neocortex and become more godlike.”1

The future sounds quite remarkable. My issue lies with Koestler’s “ghost in the machine,” or what I call humankind’s uncanny ability to foul things up. Diamandis’ list could easily spin this way:

• Brain-To-Brain Hacking – reading others’ thoughts
• Instant Knowledge – to deceive, to steal, to subvert, or hijack
• Access More Powerful Computing – to gain the advantage, or any of the previous list
• Tap Into Any Virtual World – experience the criminal, the evil, the debauched, and not go to jail for it

You get the idea. Diamandis concludes, “If this future becomes reality, connected humans are going to change everything. We need to discuss the implications in order to make the right decisions now so that we are prepared for the future.”

Nevertheless, we race forward. We discovered this week that “A British researcher has received permission to use a powerful new genome-editing technique on human embryos, even though researchers throughout the world are observing a voluntary moratorium on making changes to DNA that could be passed down to subsequent generations.”3 That would be CRISPR-Cas9.

It was way back in 1968 that Stewart Brand introduced The Whole Earth Catalog with, “We are as gods and might as well get good at it.”

Which lab is working on that?

 

1. http://www.huffingtonpost.com/entry/ray-kurzweil-nanobots-brain-godlike_us_560555a0e4b0af3706dbe1e2
2. http://singularityhub.com/2015/10/12/ray-kurzweils-wildest-prediction-nanobots-will-plug-our-brains-into-the-web-by-the-2030s/
3. http://www.nytimes.com/2016/02/02/health/crispr-gene-editing-human-embryos-kathy-niakan-britain.html?_r=0

A facebook of a different color.

The tech site Ars Technica recently ran an article on the proliferation of a little-known app called Facewatch. According to the article’s writer, Sebastian Anthony, “Facewatch is a system that lets retailers, publicans, and restaurateurs easily share private CCTV footage with the police and other Facewatch users. In theory, Facewatch lets you easily report shoplifters to the police, and to share the faces of generally unpleasant clients, drunks, etc. with other Facewatch users.” The idea is that retailers or officials can look out for these folks and either keep an eye on them or simply ask them to leave. The system, in use in the UK, appears to have a high rate of success.

 

The story continues. Of course, all technologies eventually converge, so now you don’t have to “keep an eye out” for ne’er-do-wells; your CCTV can do it for you. NeoFace from NEC works with the Facewatch list to do the scouting for you. According to NEC’s website: “NEC’s NeoFace Watch solution is specifically designed to integrate with existing surveillance systems by extracting faces in real time… and matching against a watch list of individuals.” In this case, it would be the Facewatch database. Ars’ Anthony makes this connection: “In the film Minority Report, people are rounded up by the Precrime police agency before they actually commit the crime…with Facewatch, and you pretty much have the same thing: a system that automatically tars people with a criminal brush, irrespective of dozens of important variables.”

Anthony points out that,

“Facewatch lets you share ‘subjects of interest’ with other Facewatch users even if they haven’t been convicted. If you look at the shop owner in a funny way, or ask for the service charge to be removed from your bill, you might find yourself added to the ‘subject of interest’ list.”

The odds of an innocent person being added to the watch list are quite good. Malicious behavior aside, you could be logged as you wander past a government protest, forget your PIN too many times at the ATM, or simply look too creepy in your Ray-Bans and hoodie.
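To see why, consider how systems of this kind typically work under the hood: each face is reduced to a numeric embedding, and anyone whose similarity to a watch-list entry crosses a threshold is flagged. This is an illustrative Python sketch, not NEC’s or Facewatch’s actual algorithm, and the embeddings and IDs are invented:

```python
import math

def cosine_similarity(a, b):
    """Similarity between two face embeddings (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def watchlist_hits(probe, watchlist, threshold=0.8):
    """Return the IDs of watch-list entries the probe face matches.
    Lowering the threshold catches more suspects but flags more innocents."""
    return [pid for pid, emb in watchlist.items()
            if cosine_similarity(probe, emb) >= threshold]

# Toy 2-D embeddings (real systems use hundreds of dimensions):
watchlist = {"subject_17": (1.0, 0.1), "subject_42": (0.0, 1.0)}
print(watchlist_hits((1.0, 0.0), watchlist))  # ['subject_17']
```

At any threshold, a sufficiently close look-alike will land on the hit list; the false-positive problem is baked into the mathematics, not just into shopkeepers’ judgment.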

The story underscores a couple of my past rants. First, we don’t make laws to protect against things that are impossible, so when the impossible happens, we shouldn’t be surprised that there isn’t a law to protect against it.1 It is another red flag that technology is moving too fast, and as it converges with other technologies, it becomes radically unpredictable. Second, technology moves faster than politics, faster than policy, and often faster than ethics.2

There is a host of personal apps, many of them available on our iPhones and Androids, that sit on the precarious line between legal and illegal, curious and invasive. And there are more to come.


1. Quoting Selinger from Wood, David. "The Naked Future — A World That Anticipates Your Every Move." YouTube video, December 15, 2013. Accessed March 13, 2014.
2. Quoting Richards from Farivar, Cyrus. "DOJ Calls for Drone Privacy Policy 7 Years after FBI's First Drone Launched." Ars Technica. September 27, 2013. Accessed March 13, 2014. http://arstechnica.com/tech-policy/2013/09/doj-calls-for-drone-privacy-policy-7-years-after-fbis-first-drone-launched/.

The foreseeable future.

From my perspective, the two most disruptive technologies of the next ten years will be a couple of acronyms: VR and AI. Virtual reality will transform the way people learn and the way they entertain themselves. It will play an increasing role in entertainment and gaming, to the extent that many will experience some confusion and conflict with actual reality. Make sure you see last week's blog for more on this. So much is happening in VR and AI that these two could easily dominate the topics discussed on this site next year. Today, I'll begin the discussion with AI, but both technologies fall into my broader topic of the foreseeable future.

One of my favorite quotes of 2014 (seems like ancient history now) was from an article in Ars Technica by Cyrus Farivar.1 It was a story about the FBI's gradual, almost unnoticed buildup of a drone program, to the tune of $5 million over ten years. Farivar cites a striking quote from Neil Richards, a law professor at Washington University in St. Louis: "We don't write laws to protect against impossible things, so when the impossible becomes possible, we shouldn't be surprised that the law doesn't protect against it…" I love that quote because we are continually surprised that we did not anticipate one thing or the other. Much of this surprise, I believe, comes from experts who tell us that this or that won't happen in the foreseeable future. One of these experts, Miles Brundage, a Ph.D. student at Arizona State, was quoted recently in an article in WIRED. About AI that could surpass human intelligence, Brundage said,

“At the point where we are today, no AI system is at all capable of taking over the world—and won’t be for the foreseeable future.”

There are two things that strike me about these kinds of statements. First is the obvious fact that no one can see the future in the first place, and second, the clear implication that it will happen, just not yet. It also suggests that we shouldn't be concerned; it's too far away. The article was about Elon Musk open-sourcing something called OpenAI. According to Nathaniel Wood, reporting for WIRED, OpenAI is deep-learning code that Musk and his investors want to share with the world, for free. This news comes on the heels of Google's open-sourcing of their AI code, called TensorFlow, immediately followed by a Facebook announcement that they would be sharing their Big Sur server hardware. As the article points out, this is not all magnanimous altruism. By opening the door to formerly proprietary software or hardware, folks like Musk and companies like Google and Facebook stand to gain. They gain by recruiting talent, and by exponentially increasing development through free outsourcing. A thousand people working with your code are much better than the hundreds inside your building. Here are two very important factors that folks like Brundage don't take into consideration. First, these people are in a race, and by open-sourcing their work they are enlisting people to help them in that race. Second, there is that term, exponential. I use it most often when I refer to Kurzweil's Law of Accelerating Returns. It is exactly these kinds of developments that make his prediction so believable. So maybe the foreseeable future is not that far away after all.

All this being said, the future is not foreseeable, and the exponential growth in areas like VR and AI will continue. The WIRED article continues with this commentary on AI (which we all know):

“Deep learning relies on what are called neural networks, vast networks of software and hardware that approximate the web of neurons in the human brain. Feed enough photos of a cat into a neural net, and it can learn to recognize a cat. Feed it enough human dialogue, and it can learn to carry on a conversation. Feed it enough data on what cars encounter while driving down the road and how drivers react, and it can learn to drive.”
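To make that "feed it enough examples and it learns" idea concrete, here is a toy, purely illustrative sketch: a single artificial "neuron" (logistic unit) trained by gradient descent to separate cat-like from non-cat-like examples. The two made-up features and the tiny dataset are my own assumptions for illustration; real deep learning stacks millions of such units into many layers and trains on raw pixels, not hand-picked features.

```python
import math

def train_classifier(examples, epochs=500, lr=0.5):
    """Train one logistic 'neuron' on (features, label) pairs."""
    n = len(examples[0][0])
    w = [0.0] * n
    b = 0.0
    for _ in range(epochs):
        for features, label in examples:
            z = sum(wi * xi for wi, xi in zip(w, features)) + b
            pred = 1.0 / (1.0 + math.exp(-z))   # sigmoid activation
            err = pred - label                   # gradient of the log-loss
            w = [wi - lr * err * xi for wi, xi in zip(w, features)]
            b -= lr * err
    return w, b

def predict(w, b, features):
    z = sum(wi * xi for wi, xi in zip(w, features)) + b
    return 1.0 / (1.0 + math.exp(-z)) > 0.5

# Toy "photos": [pointy_ears, whisker_density] pairs; label 1 = cat.
data = [([0.9, 0.8], 1), ([0.8, 0.9], 1), ([0.1, 0.2], 0), ([0.2, 0.1], 0)]
w, b = train_classifier(data)
print(predict(w, b, [0.85, 0.75]))  # a new, cat-like example
```

The point of the sketch is the quote's point: nothing cat-specific is programmed in; the weights are learned entirely from the examples fed to it.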

Benevolence aside, this is why Musk, Facebook, and Google are in the race. Musk is quick to add that while his motives have an air of transparency to them, it is also true that the more people who have access to deep-learning software, the less likely it is that any one player will have a monopoly on it.

Musk is a smart guy. He knows that AI could be a blessing or a curse. Open sourcing is his hedge. It could be a good thing… for the foreseeable future.

 

1. Farivar, Cyrus. “DOJ Calls for Drone Privacy Policy 7 Years after FBI’s First Drone Launched.” Ars Technica. September 27, 2013. Accessed March 13, 2014. http://arstechnica.com/tech-policy/2013/09/doj-calls-for-drone-privacy-policy-7-years-after-fbis-first-drone-launched/.