Tag Archives: futurist

Thought leaders and followers.

 

Next week, the World Future Society is having its annual conference. As a member, I really should be going, but I can’t make it this year. The future is a dicey place. Some people are convinced that we can create a utopia, some warn of dystopia, and the rest are settled somewhere in between. Based on promotional emails that I have received, one of the topics is “The Future of Evolution and Human Nature.” According to the promo,

“The mixed emotions and cognitive dissonance that occur inside each of us also scale upward into our social fabric: implicit bias against new perspectives, disdain for people who represent “other”, the fear of a new world that is not the same as it has always been, and the hopelessness that we cannot solve our problems. We know from experience that this negativity, hatred, fear, and hopelessness is not what it seems like on the surface: it is a reaction to change. And indeed we are experiencing a period of profound change. There is a larger story of our evolution that extends well beyond the negativity and despair that feels so real to us today. It’s a story of redefining and building infrastructure around trust, hope and empathy. It’s a story of accelerating human imagination and leveraging it to create new and wondrous things.

It is a story of technological magic that will free us from scarcity and ensure a prosperous lifestyle for everyone, regardless of where they come from.”

Whoa. I have to admit, this kind of talk makes me uncomfortable. Are negativity, hatred, and fear of a new world merely reactions to change? Will technosocial magic solve all our problems? This type of rhetoric sounds more like a movement than a conference that examines differing views on an important topic. It seems to frame caution as fear and negativity, and then throws in that hyperbolic word, hatred. Does it sound like the beginning of an agenda, with a framework that characterizes those who disagree as haters? I think it does. It’s a popular tactic.

These views do not by any means reflect the opinions of the entire WFS membership, but there is a significant contingent, such as the folks from Humanity+, who hold the belief that we can fix human evolution—even human nature—with technology. For me, this is treading into thorny territory.

What is human nature? Merriam-Webster online provides this definition:

“[…]the nature of humans; especially: the fundamental dispositions and traits of humans.” Presumably, that includes both good traits and bad. Will our discussions center on which traits to fix and which to keep or enhance? Who will decide?

What about the human condition? Can we change this? Should we? According to Wikipedia,

“The human condition is “the characteristics, key events, and situations which compose the essentials of human existence, such as birth, growth, emotionality, aspiration, conflict, and mortality.” This is a very broad topic which has been and continues to be pondered and analyzed from many perspectives, including those of religion, philosophy, history, art, literature, anthropology, sociology, psychology, and biology.”

Clearly, there are a lot of different perspectives to be represented here. Do we honestly believe that technology will answer them all sufficiently? The theme of the upcoming WFS conference is “A Brighter Future IS Possible.” No doubt there will be a flurry of technosocial proposals presented there, and we should not put them aside as a bunch of fringe futurists. These voices are thought leaders. They lead thinking. Are we thinking? Are we paying attention? If so, then it’s time to discuss and debate these issues, or others will decide them without us.


Future Shock

 

As you no doubt have heard, Alvin Toffler died on June 27, 2016, at the age of 87. Mr. Toffler was a futurist. The book for which he is best known, Future Shock, was a best seller in 1970 and was considered required college reading at the time. In essence, Mr. Toffler said that the future would be a disorienting place if we just let it happen. He said we need to pay attention.

Credit: Susan Wood/Getty Images from The New York Times 2016

This week, The New York Times published an article entitled Why We Need to Pick Up Alvin Toffler’s Torch by Farhad Manjoo. As Manjoo observes, at one time (the 1950s, 1960s, and 1970s), the study of foresight and forecasting was important stuff that governments and corporations took seriously. Though I’m not sure I agree with Manjoo’s assessment of why that is no longer the case, I do agree that it is no longer the case.

“In many large ways, it’s almost as if we have collectively stopped planning for the future. Instead, we all just sort of bounce along in the present, caught in the headlights of a tomorrow pushed by a few large corporations and shaped by the inescapable logic of hyper-efficiency — a future heading straight for us. It’s not just future shock; we now have future blindness.”

At one time, this was required reading.

When I attended the First International Conference on Anticipation in 2015, I was pleased to discover that the blindness was not everywhere. In fact, many of the people deeply rooted in the latest innovations in science and technology, architecture, social science, medicine, and a hundred other fields are very interested in the future. They see an urgency. But most governments don’t, and I fear that most corporations, even the tech giants, are more interested in being first with the next zillion-dollar technology than in asking whether that technology is the right thing to do. Even less are they asking what repercussions might flow from these advancements, or what the ramifications of today’s decision-making might be. We just don’t think that way.

I don’t believe that has to be the case. The World Future Society, for example, will be addressing the idea of futures studies as a requirement for high school education at its upcoming conference in Washington, DC. They ask,

“Isn’t it surprising that mainstream education offers so little teaching on foresight? Were you exposed to futures thinking when you were in high school or college? Are your children or grandchildren taught how decisions can be made using scenario planning, for example? Or take part in discussions about what alternative futures might look like? In a complex, uncertain world, what more might higher education do to promote a Futurist Mindset?”

It certainly needs to be part of design education, and it is one of the things I vigorously promote at my university.

As Manjoo sums up in his NYT article,

“Of course, the future doesn’t stop coming just because you stop planning for it. Technological change has only sped up since the 1990s. Notwithstanding questions about its impact on the economy, there seems no debate that advances in hardware, software and biomedicine have led to seismic changes in how most of the world lives and works — and will continue to do so.

Yet, without soliciting advice from a class of professionals charged with thinking systematically about the future, we risk rushing into tomorrow headlong, without a plan.”

And if that isn’t just crazy, at the very least it’s dangerous.

 

 


“At a certain point…”

 

A few weeks ago, Brian Barrett of WIRED magazine reported that a “NEW SURVEILLANCE SYSTEM MAY LET COPS USE ALL OF THE CAMERAS.” According to the article,

“Computer scientists have created a way of letting law enforcement tap any camera that isn’t password protected so they can determine where to send help or how to respond to a crime.”

Barrett suggests that America has 30 million surveillance cameras out there. The above sentence, for me, is loaded. First of all, as with most technological advancements, it is couched in the most benevolent terms: these scientists are going to help law enforcement send help or respond to crimes. This is also the argument that the FBI used to try to force Apple to provide a backdoor to the iPhone. It was for the common good.

If you are like me, you immediately see a giant red flag waving to warn us of the gaping possibility for abuse. However, we can take heart to some extent. The sentence mentioned above also limits law enforcement access to, “any camera that isn’t password protected.” Now the question is: What percentage of the 30 million cameras are password protected? Does it include, for example, more than kennel cams or random weather cams? Does it include the local ATM, traffic, and other security cameras? The system is called CAM2.

“…CAM2 reveals the location and orientation of public network cameras, like the one outside your apartment.”

It can aggregate the cameras in a given area and allow law enforcement to access them. Hmm.
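
Barrett’s article doesn’t publish CAM2’s internals, so what follows is only a guess at that aggregation step: a hypothetical camera registry with invented fields and a crude distance test. It is a sketch of the idea, not of the actual system.

```python
from math import dist

# Hypothetical camera registry; none of these fields come from the CAM2 project.
cameras = [
    {"id": "traffic-114", "lat": 40.424, "lon": -86.912, "password_protected": False},
    {"id": "atm-lobby-7", "lat": 40.426, "lon": -86.915, "password_protected": True},
    {"id": "weather-roof", "lat": 40.429, "lon": -86.908, "password_protected": False},
]

def cameras_near(lat, lon, radius_deg=0.01):
    """Return accessible (non-password-protected) cameras within a crude radius."""
    return [c for c in cameras
            if not c["password_protected"]
            and dist((c["lat"], c["lon"]), (lat, lon)) <= radius_deg]

# A dispatcher responding to an incident could then pull every open feed nearby.
print([c["id"] for c in cameras_near(40.425, -86.913)])
```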

Last week I teased that some of the developments I had reserved for 25, 50, or even further into the future in my graphic novel The Lightstream Chronicles are showing signs of life in the next two or three years. A universal “cam” system like this is one of them; the idea of ubiquitous surveillance, or the mesh, only gets stronger with more cameras. Hence the idea behind my ubiquitous surveillance blog. If there is a system that can identify all of the “public network” cams, how far are we from identifying all of the “private network” cams? How long before these systems are hacked? Or, in the name of national security, how might these systems be appropriated? You may think this is the stuff of sci-fi, but it is also the stuff of design-fi, and design-fi, as I explained last week, is intended to make us think about how these things play out.

In closing, WIRED’s Barrett raised the issue of the potential for abusing systems such as CAM2 with Gautam Hans, policy counsel at the Center for Democracy & Technology. And, of course, we got the standard response:

“It’s not the best use of our time to rail against its existence. At a certain point, we need to figure out how to use it effectively, or at least with extensive oversight.”

Unfortunately, history has shown that that certain point usually arrives after something goes egregiously wrong. Then someone asks, “How could something like this happen?”


It’s all happening too fast.

 

Since design fiction is my area of research and focus, I have covered the difference between it and science fiction in previous blogs. But the two are quite closely related. Let me start with science fiction. There are a plethora of definitions for SF. Here are two of my favorites.

The first is from Isaac Asimov:

“[Social] science fiction is that branch of literature which is concerned with the impact of scientific advance on human beings.” — Isaac Asimov, Science Fiction Writers of America Bulletin, 1951 1

The second is from Robert Heinlein:

“…realistic speculation about possible future events, based solidly on adequate knowledge of the real world, past and present, and on a thorough understanding of the scientific method.” 2

I especially like the first because it emphasizes people at the heart of the storytelling. The second definition speaks to real-world knowledge, and understanding of the scientific method. Here, there is a clear distinction between science fiction and fantasy. Star Wars is not science fiction. Even George Lucas admits this. In a conversation at the Sundance Film Festival last year he is quoted as saying, “Star Wars really isn’t a science-fiction film, it’s a fantasy film and a space opera.”3 While Star Wars involves space travel (which is technically science based), the story has no connection to the real world; it may as well be Lord of the Rings.

I bring up these distinctions because design fiction is a hybrid of science fiction, but there is a difference. Sterling defines design fiction as “the deliberate use of diegetic prototypes to suspend disbelief about change.” Though even Sterling agrees that his definition is “heavy-laden,” the operative word in it is “deliberate.” In other words, a primary operand of design fiction is the designer’s intent. There is a purpose for design fiction, and it is to provoke discussion about the future. While it may entertain, that is not its purpose. It needs to be a provocation. For me, the more provocative, the better. The idea that we would go quietly into whatever future unfolds based upon whatever corporate or scientific manifesto is most profitable or most manageable makes me crazy.

The urgency arises from the fact that the future is moving way too fast. In The Lightstream Chronicles, some of the developments that I reserved for 25, 50 or even further into the future are showing signs of life in the next two or three years. Next week I will introduce you to a couple of these technologies.

 

1. http://io9.com/5622186/how-many-defintions-of-science-fiction-are-there
2. Heinlein, R., 1983. The SF book of lists. In: Jakubowski, M., Edwards, M. (Eds.), The SF Book of Lists. Berkley Books, New York, p. 257.
3. http://www.esquire.com/entertainment/movies/a32507/george-lucas-sundance-quotes/

The end of code.

 

This week WIRED Magazine released their June issue announcing the end of code. That would mean that the ability to write code, so cherished in the job world right now, is on the way out. They attribute this tectonic shift to Artificial Intelligence, machine learning, neural networks and the like. In the future (which is taking place now), we won’t have to write code to tell computers what to do; we will just have to teach them. I have been over this before in a number of previous writings. An example: Facebook uses a form of machine learning by collecting data from the millions of pictures that are posted on the social network. When someone uploads a group photo and identifies the people in the shot, Facebook’s AI remembers it by logging the prime coordinates of a human face and attributing them to that name (aka facial recognition). If the same coordinates show up again in another post, Facebook identifies it as you. People load the data (on a massive scale), and the machine learns. By naming the person or persons in the photo, you have taught the machine.
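
A toy sketch of that tagging loop might look something like the code below. The names, coordinates, and the simple distance threshold are all invented for illustration; real facial recognition learns high-dimensional embeddings from millions of tagged photos rather than matching three landmark points.

```python
from math import dist

# Toy "face database": labeled landmark coordinates (all values invented).
known_faces = {
    "Alice": [(30.1, 44.2), (52.8, 44.0), (41.5, 60.3)],  # left eye, right eye, nose
    "Bob":   [(28.4, 41.9), (55.2, 42.3), (40.9, 63.1)],
}

def face_distance(face_a, face_b):
    """Average distance between corresponding landmarks."""
    return sum(dist(a, b) for a, b in zip(face_a, face_b)) / len(face_a)

def identify(face, threshold=3.0):
    """Return the closest known name, or None if nothing is close enough."""
    name, score = min(((n, face_distance(face, f)) for n, f in known_faces.items()),
                      key=lambda pair: pair[1])
    return name if score < threshold else None

def tag_photo(face, name_from_user=None):
    """A user-supplied tag 'teaches' the system; otherwise it tries to identify."""
    if name_from_user:
        known_faces[name_from_user] = face   # every tag becomes training data
        return name_from_user
    return identify(face)

# Someone tags a new face as "Carol"; the next time it shows up, it is recognized.
tag_photo([(31.0, 45.0), (53.0, 45.5), (42.0, 61.0)], "Carol")
print(tag_photo([(31.2, 44.8), (53.1, 45.6), (42.1, 61.2)]))  # -> "Carol"
```

The important part is the last branch: every tag a user supplies becomes more training data, which is exactly what “teaching the machine” means here.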

The WIRED article makes some interesting connections about the evolution of our thinking concerning the mind, about learning, and how we have taken a circular route in our reasoning. In essence, the mind was once considered a black box; there was no way to figure it out, but you could condition responses, a la Pavlov’s dog. That logic changed with cognitive science, the idea that the brain is more like a computer. The computing analogy caught on, and researchers began to see the whole idea of thought, memory, and thinking as stuff you could code, or hack, just like a computer. Indeed, it is this reasoning that has led to the notion that DNA is, in fact, codable, hence gene splicing through Crispr. If it’s all just code, we can make anything. That was the thinking. Now there is machine learning and neural networks. You still code, but only to set up the structure by which the “thing” learns; after that, it’s on its own. The result is fractal and not always predictable. You can’t go back in and hack the way it is learning because it has started to generate a private math—and we can’t make sense of it. In other words, it is a black box. We have, in effect, stymied ourselves.
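
To make that shift concrete, here is a hand-rolled miniature of the “code the structure, then let it learn” idea: a tiny two-layer network learning XOR. This is generic textbook code, not anything from the WIRED piece; the point is that what the network ends up “knowing” lives in weight matrices that read as numbers, not explanations.

```python
import numpy as np

# We write code only for the structure and the training loop; the "knowledge"
# the network acquires ends up in weight matrices that nobody hand-authored.
rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # XOR targets

W1 = rng.normal(size=(2, 8))   # input -> hidden weights
W2 = rng.normal(size=(8, 1))   # hidden -> output weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(20000):
    hidden = sigmoid(X @ W1)
    output = sigmoid(hidden @ W2)
    # Backpropagation: nudge the weights to shrink the error a little each pass.
    d_output = (y - output) * output * (1 - output)
    d_hidden = (d_output @ W2.T) * hidden * (1 - hidden)
    W2 += hidden.T @ d_output * 0.5
    W1 += X.T @ d_hidden * 0.5

print(np.round(output, 2))   # should land close to [0, 1, 1, 0]: it has "learned" XOR
print(W1, W2, sep="\n")      # the "private math": raw numbers, not explanations
```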

There is an upside. To train a computer you used to have to learn how to code. Now you just teach it by showing or giving it repetitive information, something anyone can do, though, at this point, some do it better than others.

Always the troubleshooter, I wonder what happens when we—mystified at a “conclusion” or decision arrived at by the machine—can’t figure out how to make it stop arriving at that conclusion. You can do the math.

Do we just turn it off?


Design fiction. I want to believe.

 

I have blogged in the past about logical succession. When it comes to creating a realistic design fiction narrative, there needs to be a sense of believability. Coates1 calls this “plausible reasoning”: “[…]putting together what you know to create a path leading to one or several new states or conditions, at a distance in time.” In other words, for the audience to suspend their disbelief, there has to be a basic understanding of how we got here. If you depict something that is too fantastic, your audience won’t buy it, especially if you are trying to say, “This could happen.”

“When design fictions are conceivable and realistically executed they carry a greater potential for making an impact and focusing discussion and debate around these future scenarios.”2

In my design futures collaborative studio, I ask students to do a rigorous investigation of future technologies, the ones that are on the bleeding edge. Then I want them to ask, “What if?” It is easier said than done, particularly because of technological convergence, the way technologies merge with other technologies to form heretofore unimagined opportunities.

There was an article this week in Wired Magazine concerning a company called Magic Leap. They are in the MR business, mixed reality as opposed to virtual reality. With MR, the virtual imagery happens within the space you’re in—in front of your eyes—rather than in an entirely virtual space. The demo from Wired’s site is pretty convincing. The future of MR and VR, for me, is easy to predict. Will it get more realistic? Yes. Will it get cheaper, smaller, and ubiquitous? Yes. At this point, a prediction like this is entirely logical. Twenty-five years ago it would not have been as easy to imagine.

As the Wired article states,

“[…]the arrival of mass-market VR wasn’t imminent.[…]Twenty-five years later a most unlikely savior emerged—the smartphone! Its runaway global success drove the quality of tiny hi-res screens way up and their cost way down. Gyroscopes and motion sensors embedded in phones could be borrowed by VR displays to track head, hand, and body positions for pennies. And the processing power of a modern phone’s chip was equal to an old supercomputer, streaming movies on the tiny screen with ease.”

To have predicted that VR would be where it is today, with billions of dollars pouring into fledgling technologies and realistic, utterly convincing demonstrations, would have been illogical. It would have been like throwing a magnet into a bucket of nails, rolling it around, and guessing which nails would come out attached.

What is my point? I think it is important to remind ourselves that things will move blindingly fast, particularly when companies like Google and Facebook are throwing money at them. The advancement of one only adds to the possibilities of the next iteration, possibly in ways that no one can predict. As VR or MR merges with biotech or artificial reality, or just about anything else you can imagine, the possibilities are endless.

Unpredictable technology makes me uncomfortable. Next week I’ll tell you why.

 

  1. Coates, J.F., 2010. The future of foresight—A US perspective. Technological Forecasting & Social Change 77, 1428–1437.
  2. E. Scott Denison. “Timed-release Design Fiction: A Digital Online Methodology to Provoke Reflection on our Socio-Technological Future.” Edited by Michal Derda Nowakowski. ISBN: 978-1-84888-427-4. Interdisciplinary.net.

Are we ready to be gods? Revisited.

 

I base today’s blog on a 2013 post with a look at the world from the perspective of The Lightstream Chronicles, which takes place in the year 2159. To me, this is a very plausible future. — ESD

 

There was a time when crimes were simpler. Humans committed crimes against other humans — not so simple anymore. In that world, you have the old-fashioned mano a mano, but you also have human against synthetic and synthetic against human. There are creative variations as well.

It was bound to happen. No sooner had the first lifelike robots become commercially available in the late 2020s than there were issues of ethics and misuse. Though scientists and ethicists had discussed the topic in the early part of the 21st century, the problems escalated faster than the robotics industry had thought possible.

According to the 2007 Roboethics Roadmap,

“…problems inherent in the possible emergence of human function in the robot: like consciousness, free will, self-consciousness, sense of dignity, emotions, and so on. Consequently, this is why we have not examined problems — debated in literature — like the need not to consider robot as our slaves, or the need to guarantee them the same respect, rights and dignity we owe to human workers.”1

In the 21st century, many of the concerns within the scientific community centered on what we as humans might do to infringe upon the “rights” of the robot. Back in 2007, it occurred to researchers that the discussion of roboethics needed to include more fundamental questions regarding the ethics of the robots’ designers, manufacturers and users. What they did not foresee was how “unprepared” we were, as a society, for the responsibility of the creator-god role, and how quickly humans would pervert the robot for formerly “unethical” uses, including but not limited to modification for crime and perversion.

Nevertheless, more than 100 years later, when synthetic human production is at the highest levels in history, the questions of ethics in both humans and their creations remain a significant point of controversy. As the 2007 Roboethics Roadmap concluded, “It is absolutely clear that without a deep rooting of Roboethics in society, the premises for the implementation of an artificial ethics in the robots’ control systems will be missing.”

After these initial introductions of humanoid robots, now seen as almost comically primitive, the technology, and in turn the reasoning, emotions, personality and realism, became progressively more sophisticated. Likewise, their implementations became progressively more like the society that manufactured them. They became images of their creators, both benevolent and malevolent.

In our image?

 

 

A series of laws was enacted to prevent the use of humanoid robots for criminal intent, yet at the same time, military interests were fully pursuing dispassionate, automated humanoid robots with the express purpose of extermination. It was truly a time of paradoxical technologies. To further complicate the issue, there were ongoing debates on the nature of what was considered “criminal”. Could a robot become a criminal without human intervention? Is something criminal if it is consensual?

These issues ultimately evolved into a complex social, economic, political, and legal entanglement that included heavy government regulation and oversight where such was achievable. As this complexity and infrastructure grew to accommodate the continually expanding technology, the greatest promise and challenges came almost 100 years after those first humanoid robots. With the advent of virtual human brains grown in labs, the readily identifiable differences between synthetic humans and real humans gradually began to disappear. The similarities were so shocking and so undetectable that new legislation was enacted to restrict the use of virtual humans, and a classification system was established to ensure visible distinctions among the vast variety of social synthetics.

The concerns of the very first Roboethics Roadmap are confirmed even 150 years into the future. Synthetics are still abused and used to perpetrate crimes. Their virtual humanness only adds an element of complexity, reality, and in some cases, horror to the creativity of how they are used.

 

 1. EURON Roboethics Roadmap

Artificial intelligence isn’t really intelligence—yet. I hate to say I told you so.

 

Last week, we discovered that there is a new side to AI. And I don’t mean to gloat, but I saw this potential pitfall as fairly obvious. It is interesting that the real world event that triggered all the talk occurred within days of episode 159 of The Lightstream Chronicles. In my story, Keiji-T, a synthetic police investigator virtually indistinguishable from a human, questions the conclusions of an Artificial Intelligence engine called HAPP-E. The High Accuracy Perpetrator Profiling Engine is designed to assimilate all of the minutiae surrounding a criminal act and spit out a description of the perpetrator. In today’s society, profiling is a human endeavor and is especially useful in identifying difficult-to-catch offenders. Though the procedure is relatively new in the 21st century and goes by many different names, the American Psychological Association says,

“…these tactics share a common goal: to help investigators examine evidence from crime scenes and victim and witness reports to develop an offender description. The description can include psychological variables such as personality traits, psychopathologies and behavior patterns, as well as demographic variables such as age, race or geographic location. Investigators might use profiling to narrow down a field of suspects or figure out how to interrogate a suspect already in custody.”

This type of data is perfect for feeding into an AI, which uses neural networks and predictive algorithms to draw conclusions and recommend decisions. Of course, AI can do it in seconds, whereas an FBI unit may take days, months, or even years. The way AI works, as I have reported many times before, is based on tremendous amounts of data. “With the advent of big data, the information going in only amplifies the veracity of the recommendations coming out.” In this way, machines can learn, which is the whole idea behind autonomous vehicles making split-second decisions about what to do next based on billions of possibilities and only one right answer.

In the sci-fi episode mentioned above, Detective Guren describes a perpetrator profile produced by the AI known as HAPP-E. Keiji-T, forever the devil’s advocate, counters with this comment: “Data is just data. Someone who knows how a probability engine works could have adopted the characteristics necessary to produce this deduction.” In other words, if you know what the engine is trying to do, theoretically you could ‘teach’ the AI using false data to produce a false deduction.
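
Keiji-T’s objection has a name in machine learning: data poisoning. Here is a toy illustration with a made-up nearest-neighbor “profiling engine” and invented features; nothing below comes from the comic or from any real profiling system, but it shows how someone who knows what the engine is doing can steer its deduction.

```python
from collections import Counter
from math import dist

# Toy "profiling engine": nearest-neighbor over made-up crime-scene features
# (say, time of day, distance from transit, level of planning), scaled 0-1.
training = [
    ((0.9, 0.2, 0.8), "organized offender"),
    ((0.8, 0.3, 0.9), "organized offender"),
    ((0.2, 0.9, 0.1), "opportunistic offender"),
    ((0.3, 0.8, 0.2), "opportunistic offender"),
]

def profile(evidence, k=3):
    """Vote among the k training cases closest to the evidence vector."""
    nearest = sorted(training, key=lambda item: dist(item[0], evidence))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

evidence = (0.85, 0.25, 0.85)
print(profile(evidence))          # "organized offender"

# Someone who knows how the engine works plants mislabeled cases near the
# real evidence, and the same query now yields a false deduction.
training += [((0.86, 0.24, 0.84), "opportunistic offender"),
             ((0.84, 0.26, 0.86), "opportunistic offender")]
print(profile(evidence))          # "opportunistic offender"
```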

Episode 159. It seems fairly obvious.

I published Episode 159 on March 18, 2016. Then an interesting thing happened in the tech world. A few days later, Microsoft launched an AI chatbot called Tay (a millennial nickname for Taylor) designed to have conversations with — millennials. The idea was that Tay would become as successful as its Chinese counterpart, XiaoIce, which has been around for four years and engages millions of young Chinese in discussions of millennial angst. Tay used three platforms: Twitter, Kik and GroupMe.

Then something went wrong. In less than 24 hours, Tay went from tweeting that “humans are super cool” to full-blown Nazi. Soon after Tay launched, the super-sketchy enclaves of 4chan and 8chan decided to get malicious and manipulate the Tay engine by feeding it racist and sexist invective. If you feed an AI enough garbage, it will ‘learn’ that garbage is the norm and begin to repeat it. Before Tay’s first day was over, Microsoft took it down, removed the offensive tweets and apologized.

Crazy talk.

Apparently, Microsoft had considered that such a thing was possible but decided not to use filters (conversations to avoid or canned answers to volatile subjects). Experts in the chatbot field were quick to criticize: “‘You absolutely do NOT let an algorithm mindlessly devour a whole bunch of data that you haven’t vetted even a little bit.’ In other words, Microsoft should have known better than to let Tay loose on the raw uncensored torrent of what Twitter could direct her way.”
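
For what it’s worth, the guardrails the critics describe, a blocklist plus canned answers for volatile topics, are not exotic engineering. A minimal sketch, with invented topic lists and placeholder terms, might be as simple as this:

```python
# Hypothetical guardrails of the sort Tay reportedly shipped without:
# refuse some inputs outright, answer volatile topics with canned text,
# and only let the rest reach (and train) the learning model.
BLOCKED_TERMS = {"slur1", "slur2"}           # placeholder entries
CANNED_ANSWERS = {
    "politics": "I'd rather talk about something else. Seen any good movies?",
    "religion": "That's a big topic for a chatbot. What music are you into?",
}

def respond(user_message, model_reply):
    text = user_message.lower()
    if any(term in text for term in BLOCKED_TERMS):
        return "I'm not going to engage with that."
    for topic, answer in CANNED_ANSWERS.items():
        if topic in text:
            return answer
    # Only now does the message reach the underlying model.
    return model_reply(user_message)

print(respond("what do you think about politics?", lambda m: "(model output)"))
```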

The tech site Ars Technica also probed the question of “…why Tay turned nasty when XiaoIce didn’t?” The assessment thus far is that China’s highly restrictive measures keep social media “ideologically appropriate” and under control. The censors will close your account for bad behavior.

So, what did we learn from this? AI, at least as it exists today, has no understanding. It has no morals or ethics unless you give it some. The next questions are: Who decides what is moral and ethical? Will it be the people (we saw what happened with that) or some other financial or political power? Maybe the problem is with the premise itself. What do you think?


Logical succession, the final installment.

For the past couple of weeks, I have been discussing the idea posited by Ray Kurzweil, that we will have linked our neocortex to the Cloud by 2030. That’s less than 15 years, so I have been asking how that could come to pass with so many technological obstacles in the way. When you make a prediction of that sort, I believe you need a bit more than faith in the exponential curve of “accelerating returns.”

This week I’m not going to take issue with the enormous leap forward in nanobot technology required to accomplish such a feat. Nor am I going to question the vastly complicated tasks of connecting to the neocortex, extracting anything coherent, assembling memories and consciousness, and, in turn, beaming it all to the Cloud. Instead, I’m going to pose the question: Why would we want to do this in the first place?

According to Kurzweil, in a talk last year at Singularity University,

“We’re going to be funnier. We’re going to be sexier. We’re going to be better at expressing loving sentiment…” 1

Another brilliant futurist, and friend of Ray, Peter Diamandis includes these additional benefits:

• Brain to Brain Communication – aka Telepathy
• Instant Knowledge – download anything, complex math, how to fly a plane, or speak another language
• Access More Powerful Computing – through the Cloud
• Tap Into Any Virtual World – no visor, no controls. Your neocortex thinks you are there.
• And more, including an extended immune system, expandable and searchable memories, and “higher-order existence.”2

As Kurzweil explains,

“So as we evolve, we become closer to God. Evolution is a spiritual process. There is beauty and love and creativity and intelligence in the world — it all comes from the neocortex. So we’re going to expand the brain’s neocortex and become more godlike.”1

The future sounds quite remarkable. My issue lies with Koestler’s “ghost in the machine,” or what I call humankind’s uncanny ability to foul things up. Diamandis’ list could easily spin this way:

  • Brain-To-Brain hacking – reading others thoughts
  • Instant Knowledge – to deceive, to steal, to subvert, or hijack.
  • Access to More Powerful Computing – to gain the advantage or any of the previous list.
  • Tap Into Any Virtual World – experience the criminal, the evil, the debauched and not go to jail for it.

You get the idea. Diamandis concludes, “If this future becomes reality, connected humans are going to change everything. We need to discuss the implications in order to make the right decisions now so that we are prepared for the future.”

Nevertheless, we race forward. We discovered this week that “A British researcher has received permission to use a powerful new genome-editing technique on human embryos, even though researchers throughout the world are observing a voluntary moratorium on making changes to DNA that could be passed down to subsequent generations.”3 That would be Crispr-Cas9.

It was way back in 1968 that Stewart Brand introduced The Whole Earth Catalog with, “We are as gods and might as well get good at it.”

Which lab is working on that?

 

1. http://www.huffingtonpost.com/entry/ray-kurzweil-nanobots-brain-godlike_us_560555a0e4b0af3706dbe1e2
2. http://singularityhub.com/2015/10/12/ray-kurzweils-wildest-prediction-nanobots-will-plug-our-brains-into-the-web-by-the-2030s/
3. http://www.nytimes.com/2016/02/02/health/crispr-gene-editing-human-embryos-kathy-niakan-britain.html?_r=0

Logical succession, Part 2.

Last week the topic was Ray Kurzweil’s prediction that by 2030, not only would we send nanobots into our bloodstream by way of the capillaries, but they would target the neocortex, set up shop, connect to our brains and beam our thoughts and other contents into the Cloud (somewhere). Kurzweil is no crackpot. He is a brilliant scientist, inventor and futurist with an 86 percent accuracy rate on his predictions. Nevertheless, and perhaps presumptuously, I took issue with his prediction, but only because there was an absence of a logical succession. According to Coates,

“…the single most important way in which one comes to an understanding of the future, whether that is working alone, in a team, or drawing on other people… is through plausible reasoning, that is, putting together what you know to create a path leading to one or several new states or conditions, at a distance in time” (Coates 2010, p. 1436).1

Kurzweil’s argument is based heavily on his Law of Accelerating Returns, which says (essentially), “We won’t experience 100 years of progress in the 21st century; it will be more like 20,000 years of progress (at today’s rate).” The rest, in the absence of more detail, must be based on faith. Faith, perhaps, in the fact that we are making considerable progress in architecting nanobots, or that we see promising breakthroughs in mind-to-computer communication. But what seems to be missing is the connection part. Not so much connecting to the brain, but beaming the contents somewhere. Another question, why, also comes to mind, but I’ll get to that later.
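
For what it’s worth, you can get within range of Kurzweil’s figure with back-of-the-envelope compounding. This is my reconstruction of the arithmetic, not his published derivation, and it assumes the rate of progress doubles every decade:

```python
# If each decade of the 21st century delivers twice the progress of the decade
# before it, measured in "years of progress at today's rate" (decade 1 at twice
# today's rate, decade 10 at 1,024 times it):
progress = sum(10 * 2 ** decade for decade in range(1, 11))
print(progress)  # 20460 -- roughly the "20,000 years" Kurzweil cites
```

Start the doublings one step earlier or later and the total shifts by a factor of two, which says something about the precision of this kind of speculation.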

There is something about all of this technological optimism that furrows my brow. A recent article in WIRED helped me to articulate this skepticism. The rather lengthy article chronicled the story of neurologist Phil Kennedy, who, like Kurzweil, believes that the day is soon approaching when we will connect or transfer our brains to other things. I can’t help but call to mind what onetime Fed chairman Alan Greenspan called “irrational exuberance.” The WIRED article tells of how Kennedy nearly lost his mind by experimenting on himself (including rogue brain surgery in Belize) to implant a host of hardware that would transmit his thoughts. This highly invasive method, the article says, is going out of style, but the promise seems to be the same for both scientists: our brains will be infinitely more powerful than they are today.

Writing in WIRED, columnist Daniel Engber makes an astute statement. During an interview with Dr. Kennedy, they attempted to watch a DVD of Kennedy’s Belize brain surgery. The DVD player and laptop choked for some reason, and only after repeated attempts were they able to view Dr. Kennedy’s naked brain undergoing surgery. Reflecting on the mundane struggles with technology that preceded the movie, Engber notes, “It seems like technology always finds new and better ways to disappoint us, even as it grows more advanced every year.”

Dr. Kennedy’s saga was all about getting thoughts into text, or even synthetic speech. Today, the invasive method of sticking electrodes into your cerebral putty has been replaced by a kind of electrode mesh that lies on top of the cortex, underneath the skull. They call this less invasive. Researchers have managed to get some results from this, albeit snippets with numerous inaccuracies. They say it will be decades, and one of them points out that even Siri still gets it wrong more than 30 years after the debut of speech recognition technology.

So, then, it must be Kurzweil’s exponential law that still provides near-term hope for these scientists. As I often quote Tobias Revell, “Someone somewhere in a lab is playing with your future.”

There remain a few more nagging questions for me. What is so feeble about our brains that we need them to be infinitely more powerful? When is enough, enough? And, what could possibly go wrong with this scenario?

Next week.

 

1. Coates, J.F., 2010. The future of foresight—A US perspective. Technological Forecasting & Social Change 77, 1428–1437.