
Watching and listening.


Pay no attention to Alexa, she’s an AI.

There was a flurry of reports from dozens of news sources (including CNN) last week that an Amazon Echo (Alexa) called the police during a domestic violence incident in New Mexico. The alleged call began a SWAT standoff, and the victim’s boyfriend was eventually arrested. Interesting story, but after a fact-check, that could not be what happened. Several sources, including the New York Times and WIRED, debunked the story with details on why Alexa calling 911 is technologically impossible, at least for now. And although the Bernalillo County, New Mexico, Sheriff’s Department swears to it, according to WIRED,

“Someone called the police that day. It just wasn’t Alexa.”

Even Amazon agrees; a spokesperson explained by email,

“The receiving end would also need to have an Echo device or the Alexa app connected to Wi-Fi or mobile data, and they would need to have Alexa calling/messaging set up,”1

So it didn’t happen. But most agree that while it may be technologically impossible today, it probably won’t be for long. The provocative side of the WIRED article proposed this thought:

“The Bernalillo County incident almost certainly had nothing to do with Alexa. But it presents an opportunity to think about issues and abilities that will become real sooner than you might think.”

On the upside, some see benefits in Alexa’s ability to intervene in a domestic dispute that could turn lethal, but they fear something called “false positives.” Could an offhand comment prompt Alexa to call the police? And if it did, would you feel as though Alexa had overstepped her bounds?

Others see potential in suicide prevention: Alexa could calm you down or suggest ways to move beyond the urge to die.

But as we contemplate opening this door, we need to acknowledge that we’re letting these devices listen to us 24/7 and giving them permission to make decisions on our behalf, whether we want them to or not. The WIRED article also included a comment from Evan Selinger of RIT (whom I’ve quoted before).

“Cyberservants will exhibit mission creep over time. They’ll take on more and more functions. And they’ll habituate us to become increasingly comfortable with always-on environments listening to our intimate spaces.”

These technologies start out as warm and fuzzy (see the video below) but as they become part of our lives, they can change us and not always for the good. This idea is something I contemplated a couple of years ago with my Ubiquitous Surveillance future. In this case, the invasion was not as a listening device but with a camera (already part of Amazon’s Echo Look). You can check that out and do your own provocation by visiting the link.

I’m glad that there are people like Susan Liautaud (who I wrote about last week) and Evan Selinger who are thinking about the effects of technology on society, but I still fear most of us take the stance of Dan Reidenberg, who is also quoted in the WIRED piece.

“I don’t think we can avoid this. This is where it is going to go. It is really about us adapting to that,” he says.


Nonsense! That’s like getting in the car with a drunk driver and then doing your best to adapt. Nobody is putting a gun to your head to get into the car. There are decisions to be made here, and they don’t have to be made after the technology has created seemingly insurmountable problems or intrusions in our lives. The companies that make them should be having these discussions now, and we should be invited to share our opinions.

What do you think?


1. http://wccftech.com/alexa-echo-calling-911/

Ethical tech.

Though I tinge most of my blogs with ethical questions, the last time I brought up this topic specifically was back in 2015. I guess I am ready to give it another go. Ethics is a tough topic. Dealt with purely superficially, ethics seems natural, like common sense, or simply the right thing to do. But if that’s the case, why do so many people do the wrong thing? Things get even more complicated when we move into institutionally complex issues like banking, governing, technology, genetics, health care, or national defense, just to name a few.

The last time I wrote about this, I highlighted Michael Sandel, Professor of Government at Harvard University, where he teaches a wildly popular course called “Justice.” Then, I was glad to see that the big questions were still being addressed in places like Harvard. Some of his questions, which came from a FastCo article, were:

“Is it right to take from the rich and give to the poor? Is it right to legislate personal safety? Can torture ever be justified? Should we try to live forever? Buy our way to the head of the line? Create perfect children?”

These are undoubtedly important and prescient questions to ask, especially as we begin to confront technologies that make things that were formerly inconceivable or plainly impossible not only possible but likely.

So I was pleased to see, last month, an op-ed piece in WIRED by Susan Liautaud, founder of The Ethics Incubator. Susan is about as closely aligned with my tech concerns as anyone I have read. And she brings solid thinking to the issues.

“Technology is approaching the man-machine and man-animal boundaries. And with this, society may be leaping into humanity-defining innovation without the equivalent of a constitutional convention to decide who should have the authority to decide whether, when, and how these innovations are released into society. What are the ethical ramifications? What checks and balances might be important?”

Her comments are right in line with my research and co-research into Humane Technologies. Liautaud continues:

“Increasingly, the people and companies with the technological or scientific ability to create new products or innovations are de facto making policy decisions that affect human safety and society. But these decisions are often based on the creator’s intent for the product, and they don’t always take into account its potential risks and unforeseen uses. What if gene-editing is diverted for terrorist ends? What if human-pig chimeras mate? What if citizens prefer to see birds rather than flying cars when they look out a window? (Apparently, this is a real risk. Uber plans to offer flight-hailing apps by 2020.) What if Echo Look leads to mental health issues for teenagers? Who bears responsibility for the consequences?”

For me, the answer to that last question is all of us. We should not rely on business and industry to make these decisions, nor expect our government to do it. We have to become involved in these issues at the public level.

Michael Sandel believes that the public is hungry for these issues, but we tend to shy away from them. They can be confrontational and divisive, and no one wants to make waves or be politically incorrect. That’s a mistake.

An image from the future. A student design fiction project that examined ubiquitous AR.

So while the last thing I want is a politician or CEO making these decisions, these two constituencies could do the responsible thing and create forums for these discussions so that the public can weigh in on them. To do anything less borders on arrogance.

Ultimately we will have to demand this level of thought, beginning with ourselves. This responsibility should start with anticipatory methodologies that examine the social, cultural and behavioral ramifications, and unintended consequences of what we create.

But we should not fight this alone. Corporations and governments concerned with appearing sensitive and proactive toward the environment and social justice need to add a new pillar to their edifice as responsible global citizens: humane technology.



An example of impending convergence.


The IBM Research Alliance and partners have announced this week that they have developed “…an industry-first process to build silicon nanosheet transistors that will enable 5 nanometer (nm) chips – achieving a scale of 30 billion switches on a fingernail-sized chip that will deliver significant power and performance enhancements over today’s state-of-the-art 10nm chips.”

Silicon nanosheet transistors at 5nm

Along with this new development come, of course, promises that the technology

“…can deliver 40 percent performance enhancement at fixed power, or 75 percent power savings at matched performance. This improvement enables a significant boost to meeting the future demands of artificial intelligence (AI) systems, virtual reality and mobile devices.”

That’s a lot of tech-speak, but essentially it means your computing will happen faster, and your devices will be more powerful while using less battery.

In a previous blog, I discussed the nanometer idea.

“A nanometer is very small. Nanotech concerns itself with creations that exist in the 100nm range and below, roughly 7,500 times smaller than a human hair. In the Moore’s Law race, nanothings are the next frontier in cramming data onto a computer chip, or implanting them into our brains or living cells.”
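
To put those scales in perspective, here is some back-of-the-envelope arithmetic. The hair width and the assumption that a “fingernail-sized” chip is about a square centimeter are my own rough figures, purely for illustration:

```python
# Back-of-the-envelope scale check (assumed figures, for illustration only).
HAIR_WIDTH_NM = 75_000         # a human hair is roughly 75 micrometers wide
CHIP_AREA_NM2 = 1e7 * 1e7      # assume "fingernail-sized" means ~1 cm x 1 cm
TRANSISTORS = 30e9             # IBM's claimed switch count

print(f"5 nm features are ~{HAIR_WIDTH_NM / 5:,.0f}x smaller than a hair")
print(f"Area per switch: ~{CHIP_AREA_NM2 / TRANSISTORS:,.0f} nm^2")
print(f"Implied pitch: ~{(CHIP_AREA_NM2 / TRANSISTORS) ** 0.5:,.0f} nm per side")
```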

Right now, IBM and their partners see this new development as a big plus for the future of their cognitive systems. What are cognitive systems?

IBM can answer that:

“Humans are on the cusp of augmenting their lives in extraordinary ways with AI. At IBM Research Labs around the globe, we envision and develop next-generation systems that work side-by-side with humans, accelerating our ability to create, learn, make decisions and think. We also architect the future of Watson, which has evolved from an IBM Research project to the world’s first and most-advanced AI platform.”

So it’s Watson and lots of other AI that may see the biggest benefits as a result of this new tech. With smaller, faster, more efficient chips, AI can live a more robust life inside your phone or another device. But thinking phone is probably thinking way too big. Think of something much smaller but just as powerful.

Of course, every new technology comes with promises.

“Whether exploring new technical capabilities, collaborating on ethical practices or applying Watson technology to cancer research, financial decision-making, oil exploration or educational toys, IBM Research is shaping the future of AI.”

It’s all about AI and how we can augment “our lives in extraordinary ways.” Assuming that everyone plays nice, this is another example of technology poised to do great things for humankind. Undoubtedly, micro-sized AI could also be used for all sorts of nefarious purposes, so let’s hope that the “ethical practices” part of their research is getting equal weight.

The question we have yet to ask is whether a faster, smaller, more powerful, all-knowing, steadily accelerating AI is something we truly need. This is a debate worth having. In the meantime, the 5 nm chip is an excellent example of how a breakthrough technology awaits application by others for a myriad of purposes, advancing them all, in particular ways, by leaps and bounds. Who are these others? And what will they do next?

The right thing to do. Remember that idea?


Of autonomous machines.


Last week we talked about how converging technologies can sometimes yield unpredictable results. One of the most influential players in the development of new technology is DARPA and the defense industry. There is a lot of technological convergence going on in the world of defense. Let’s combine robotics, artificial intelligence, machine learning, bio-engineering, ubiquitous surveillance, social media, and predictive algorithms for starters. All of these technologies are advancing at an exponential pace. It’s difficult to take a snapshot of any one of them at a moment in time and predict where it might be tomorrow. When you start blending them, the possibilities become downright chaotic. With each step, it is prudent to ask if there is any meaningful review. What are the ramifications for error as well as success? What are the possibilities for misuse? Who is minding the store? We can hope that there are answers to these questions that go beyond platitudes like “Don’t stand in the way of progress,” “Time is of the essence,” or “We’ll cross that bridge when we come to it.”

No comment.

I bring this up after having seen some unclassified documents on Human Systems and Autonomous Defense Systems (AKA autonomous weapons). (See a previous blog on this topic.) Links to these documents came from the crowd-funded “investigative journalist” Nafeez Ahmed, publishing on a website called INSURGE intelligence.

One of the documents, entitled Human Systems Roadmap, is a slide presentation given to the National Defense Industry Association (NDIA) conference last year. The list of agencies involved in that conference and the rest of the documents cited reads like an alphabet soup of military and defense organizations which most of us have never heard of. There are multiple components to the pitch, but one that stands out is “Autonomous Weapons Systems that can take action when needed.” Autonomous weapons are those that are capable of making the kill decision without human intervention. There is also, apparently, some focused inquiry into “Social Network Research on New Threats… Text Analytics for Context and Event Prediction…” and “full spectrum social media analysis.” We could get all up in arms about this last feature, but recent incidents in places such as Benghazi, Egypt, and Turkey had a social networking component that enabled extreme behavior to be quickly mobilized. In most cases, the result was a tragic loss of life. In addition to sharing photos of puppies, social media, it seems, is also good at organizing lynch mobs. We shouldn’t be surprised that governments would want to know how to predict such events in advance. The bigger question is how we should intercede and whether that decision should be made by a human being or a machine.
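
What might that kind of analysis look like in its crudest form? Here is a deliberately naive sketch of my own (nothing from the documents themselves): flag any hour in which mentions of hypothetical “mobilizing” keywords spike far above the recent baseline. Real systems would be vastly more sophisticated, but the principle, prediction from patterns in the stream, is the same.

```python
from collections import Counter

MOBILIZING_TERMS = {"gather", "march", "tonight"}   # hypothetical watch list

def mentions_per_hour(posts):
    """posts: iterable of (hour, text) pairs -> Counter of hourly mentions."""
    counts = Counter()
    for hour, text in posts:
        if MOBILIZING_TERMS & set(text.lower().split()):
            counts[hour] += 1
    return counts

def flag_spikes(counts, window=3, threshold=5.0):
    """Flag hours whose count exceeds threshold x the trailing-window mean."""
    hours = sorted(counts)
    flagged = []
    for i, h in enumerate(hours[window:], start=window):
        baseline = sum(counts[hours[j]] for j in range(i - window, i)) / window
        if baseline and counts[h] > threshold * baseline:
            flagged.append(h)
    return flagged

# A quiet baseline, then a sudden burst of mobilizing chatter in hour 6:
posts = [(h, "we gather at noon") for h in range(6)] + [(6, "everyone gather now")] * 40
print(flag_spikes(mentions_per_hour(posts)))        # -> [6]
```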

There are lots of other aspects and lots more documents cited in Ahmed’s lengthy, albeit activistic, report, but the idea here is that rapidly advancing technology is enabling considerations that were previously held to be science fiction or just impossible. Will we reach the point where these systems are fully operational before we reach the point where we know they are totally safe? It’s a problem when technology grows faster than policy, ethics, or meaningful review. And it seems to me that it is always a problem when the race to make something work is more important than understanding the ramifications if it does.

To be clear, I’m not one of those people who think that anything and everything the military can conceive of is automatically wrong. We will never know how many catastrophes our national defense services have averted by their vigilance and technological prowess. It should go without saying that the bad guys will get more sophisticated in their methods and tactics, and if we are unable to stay ahead of the game, then we will need to get used to the idea of catastrophe. When push comes to shove, I want the government to be there to protect me. That being said, I’m not convinced that the defense infrastructure (or any part of the tech sector, for that matter) is as diligent about anticipating the repercussions of its creations as it is about getting them functioning. Only individuals can insist on meaningful review.

Thoughts?



Artificial intelligence isn’t really intelligence—yet. I hate to say I told you so.


Last week, we discovered that there is a new side to AI. And I don’t mean to gloat, but I saw this potential pitfall as fairly obvious. It is interesting that the real-world event that triggered all the talk occurred within days of episode 159 of The Lightstream Chronicles. In my story, Keiji-T, a synthetic police investigator virtually indistinguishable from a human, questions the conclusions of an artificial intelligence engine called HAPP-E. The High Accuracy Perpetrator Profiling Engine is designed to assimilate all of the minutiae surrounding a criminal act and spit out a description of the perpetrator. In today’s society, profiling is a human endeavor and is especially useful in identifying difficult-to-catch offenders. Though the procedure is relatively new and goes by many different names, the American Psychological Association says,

“…these tactics share a common goal: to help investigators examine evidence from crime scenes and victim and witness reports to develop an offender description. The description can include psychological variables such as personality traits, psychopathologies and behavior patterns, as well as demographic variables such as age, race or geographic location. Investigators might use profiling to narrow down a field of suspects or figure out how to interrogate a suspect already in custody.”

This type of data is perfect for feeding into an AI, which uses neural networks and predictive algorithms to draw conclusions and recommend decisions. Of course, AI can do it in seconds, whereas an FBI unit may take days, months, or even years. The way AI works, as I have reported many times before, is based on tremendous amounts of data. “With the advent of big data, the information going in only amplifies the veracity of the recommendations coming out.” In this way, machines can learn, which is the whole idea behind autonomous vehicles making split-second decisions about what to do next based on billions of possibilities and only one right answer.

In my sci-fi episode mentioned above, Detective Guren describes a perpetrator produced by the AI known as HAPP-E. Keiji-T, forever the devil’s advocate, counters with this comment: “Data is just data. Someone who knows how a probability engine works could have adopted the characteristics necessary to produce this deduction.” In other words, if you know what the engine is trying to do, theoretically you could ‘teach’ the AI using false data to produce a false deduction.
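
To make Keiji-T’s point concrete, here is a minimal sketch of a frequency-based “probability engine.” It is a stand-in of my own invention, not how HAPP-E is described in the story: it deduces whichever profile it has most often seen paired with a piece of evidence, so anyone who can plant enough false records can flip the deduction.

```python
from collections import Counter

engine = Counter()   # counts of (evidence, profile) pairs seen in training

def train(evidence, profile):
    engine[(evidence, profile)] += 1

def deduce(evidence):
    """Return the profile most frequently associated with the evidence."""
    candidates = {p: n for (e, p), n in engine.items() if e == evidence}
    return max(candidates, key=candidates.get)

for _ in range(50):
    train("forced entry", "profile A")     # honest historical records
print(deduce("forced entry"))              # -> profile A

for _ in range(200):
    train("forced entry", "profile B")     # planted, false records
print(deduce("forced entry"))              # -> profile B: the deduction flips
```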

Episode 159. It seems fairly obvious.

I published Episode 159 on March 18, 2016. Then an interesting thing happened in the tech world. A few days later Microsoft launched an AI chatbot called Tay (a millennial nickname for Taylor) designed to have conversations with — millennials. The idea was that Tay would become as successful as their Chinese version named XiaoIce, which has been around for four years and engages millions of young Chinese in discussions of millennial angst with a chatbot. Tay used three platforms: Twitter, Kik and GroupMe.

Then something went wrong. In less than 24 hours, Tay went from tweeting that “humans are super cool” to full-blown Nazi. Soon after Tay launched, the super-sketchy enclaves of 4chan and 8chan decided to get malicious and manipulate the Tay engine, feeding it racist and sexist invective. If you feed an AI enough garbage, it will ‘learn’ that garbage is the norm and begin to repeat it. Before Tay’s first day was over, Microsoft took it down, removed the offensive tweets, and apologized.

Crazy talk.

Apparently, Microsoft had considered that such a thing was possible but decided not to use filters (conversations to avoid or canned answers to volatile subjects). Experts in the chatbot field were quick to criticize: “‘You absolutely do NOT let an algorithm mindlessly devour a whole bunch of data that you haven’t vetted even a little bit.’ In other words, Microsoft should have known better than to let Tay loose on the raw uncensored torrent of what Twitter could direct her way.”
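
For what it’s worth, the kind of filter the critics describe need not be exotic. Here is a minimal sketch with hypothetical blocked terms; real moderation is far harder than a word list, but even this would have kept the worst input out of the learning loop:

```python
# Vet input before it reaches the learning loop; answer volatile topics
# with a canned reply instead of learned text. Terms are hypothetical.
BLOCKED_TOPICS = {"nazi", "genocide"}
CANNED_REPLY = "I'd rather not talk about that."

def respond(user_text, model_reply_fn, learn_fn):
    if set(user_text.lower().split()) & BLOCKED_TOPICS:
        return CANNED_REPLY            # deflect, and never learn from this
    learn_fn(user_text)                # only vetted text trains the bot
    return model_reply_fn(user_text)

# Trivial stand-ins for the model, just to show the wiring:
learned = []
print(respond("humans are super cool", lambda t: "you too!", learned.append))
print(respond("tell me about nazi ideas", lambda t: "", learned.append))
print(learned)                         # the volatile input was never learned
```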

The tech site Ars Technica also probed the question of “…why Tay turned nasty when XiaoIce didn’t?” The assessment thus far is that China’s highly restrictive measures keep social media “ideologically appropriate” and under control. The censors will close your account for bad behavior.

So, what did we learn from this? AI, at least as it exists today, has no understanding. It has no morals or ethics unless you give it some. The next questions are: Who decides what is moral and ethical? Will it be the people (we saw what happened with that), or some other financial or political power? Maybe the problem is with the premise itself. What do you think?


Logical succession, the final installment.

For the past couple of weeks, I have been discussing the idea posited by Ray Kurzweil, that we will have linked our neocortex to the Cloud by 2030. That’s less than 15 years, so I have been asking how that could come to pass with so many technological obstacles in the way. When you make a prediction of that sort, I believe you need a bit more than faith in the exponential curve of “accelerating returns.”

This week I’m not going to take issue with the enormous leap forward in nanobot technology required to accomplish such a feat. Nor am I going to question the vastly complicated tasks of connecting to the neocortex, extracting anything coherent, assembling memories and consciousness, and, in turn, beaming it all to the Cloud. Instead, I’m going to pose the question: Why would we want to do this in the first place?

According to Kurzweil, in a talk last year at Singularity University,

“We’re going to be funnier. We’re going to be sexier. We’re going to be better at expressing loving sentiment…”1

Another brilliant futurist and friend of Ray’s, Peter Diamandis, includes these additional benefits:

• Brain to Brain Communication – aka Telepathy
• Instant Knowledge – download anything, complex math, how to fly a plane, or speak another language
• Access More Powerful Computing – through the Cloud
• Tap Into Any Virtual World – no visor, no controls. Your neocortex thinks you are there.
• And more, including an extended immune system, expandable and searchable memories, and “higher-order existence.”2

As Kurzweil explains,

“So as we evolve, we become closer to God. Evolution is a spiritual process. There is beauty and love and creativity and intelligence in the world — it all comes from the neocortex. So we’re going to expand the brain’s neocortex and become more godlike.”1

The future sounds quite remarkable. My issue lies with Koestler’s “ghost in the machine,” or what I call humankind’s uncanny ability to foul things up. Diamandis’ list could easily spin this way:

  • Brain-To-Brain hacking – reading others’ thoughts
  • Instant Knowledge – to deceive, to steal, to subvert, or hijack.
  • Access to More Powerful Computing – to gain the advantage or any of the previous list.
  • Tap Into Any Virtual World – experience the criminal, the evil, the debauched and not go to jail for it.

You get the idea. Diamandis concludes, “If this future becomes reality, connected humans are going to change everything. We need to discuss the implications in order to make the right decisions now so that we are prepared for the future.”

Nevertheless, we race forward. We discovered this week that “A British researcher has received permission to use a powerful new genome-editing technique on human embryos, even though researchers throughout the world are observing a voluntary moratorium on making changes to DNA that could be passed down to subsequent generations.”3 That would be CRISPR-Cas9.

It was way back in 1968 that Stewart Brand introduced The Whole Earth Catalog with, “We are as gods and might as well get good at it.”

Which lab is working on that?


1. http://www.huffingtonpost.com/entry/ray-kurzweil-nanobots-brain-godlike_us_560555a0e4b0af3706dbe1e2
2. http://singularityhub.com/2015/10/12/ray-kurzweils-wildest-prediction-nanobots-will-plug-our-brains-into-the-web-by-the-2030s/
3. http://www.nytimes.com/2016/02/02/health/crispr-gene-editing-human-embryos-kathy-niakan-britain.html?_r=0

Logical succession, Part 2.

Last week the topic was Ray Kurzweil’s prediction that by 2030, not only would we send nanobots into our bloodstream by way of the capillaries, but they would target the neocortex, set up shop, connect to our brains and beam our thoughts and other contents into the Cloud (somewhere). Kurzweil is no crackpot. He is a brilliant scientist, inventor and futurist with an 86 percent accuracy rate on his predictions. Nevertheless, and perhaps presumptuously, I took issue with his prediction, but only because there was an absence of a logical succession. According to Coates,

“…the single most important way in which one comes to an understanding of the future, whether that is working alone, in a team, or drawing on other people… is through plausible reasoning, that is, putting together what you know to create a path leading to one or several new states or conditions, at a distance in time” (Coates 2010, p. 1436).1

Kurzweil’s argument is based heavily on his Law of Accelerating Returns, which says (essentially), “We won’t experience 100 years of progress in the 21st century — it will be more like 20,000 years of progress (at today’s rate).” The rest, in the absence of more detail, must be based on faith. Faith, perhaps, in the fact that we are making considerable progress in architecting nanobots, or that we see promising breakthroughs in mind-to-computer communication. But what seems to be missing is the connection part. Not so much connecting to the brain, but beaming the contents somewhere. Another question, why, also comes to mind, but I’ll get to that later.
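
For the curious, the arithmetic behind that claim is easy to sketch. The assumption that the rate of progress doubles every decade is mine, a simplification of Kurzweil’s model, used only to show the order of magnitude:

```python
# Progress accumulated over the 21st century if the rate of progress
# doubles every decade, measured in "today-years" of progress.
# (A simplification of Kurzweil's model, for order-of-magnitude only.)
total = 0.0
rate = 1.0                    # years of progress per calendar year, today
for decade in range(10):      # ten decades in the century
    total += rate * 10        # progress made during this decade
    rate *= 2                 # the rate itself doubles each decade
print(f"~{total:,.0f} 'today-years' of progress in one century")
# -> ~10,230: the same order of magnitude as Kurzweil's 20,000 figure.
```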

There is something about all of this technological optimism that furrows my brow. A recent article in WIRED helped me to articulate this skepticism. The rather lengthy article chronicled the story of neurologist Phil Kennedy, who, like Kurzweil, believes that the day is soon approaching when we will connect or transfer our brains to other things. I can’t help but call to mind what onetime Fed chairman Alan Greenspan called “irrational exuberance.” The WIRED article tells of how Kennedy nearly lost his mind by experimenting on himself (including rogue brain surgery in Belize) to implant a host of hardware that would transmit his thoughts. This highly invasive method, the article says, is going out of style, but the promise seems to be the same for both scientists: our brains will be infinitely more powerful than they are today.

Writing in WIRED, columnist Daniel Engber makes an astute statement. During an interview with Dr. Kennedy, they attempted to watch a DVD of Kennedy’s Belize brain surgery. The DVD player and laptop choked for some reason, and only after repeated attempts were they able to view Dr. Kennedy’s naked brain undergoing surgery. Reflecting on the mundane struggles with technology that preceded the movie, Engber notes, “It seems like technology always finds new and better ways to disappoint us, even as it grows more advanced every year.”

Dr. Kennedy’s saga was all about getting thoughts into text, or even synthetic speech. Today, the invasive method of sticking electrodes into your cerebral putty has been replaced by a kind of electrode mesh that lies on top of the cortex underneath the skull. They call this less invasive. Researchers have managed to get some results from this, albeit snippets with numerous inaccuracies. They say it will be decades, and one of them points out that even Siri still gets it wrong more than 30 years after the debut of speech recognition technology.

So, then, it must be Kurzweil’s exponential law that still provides near-term hope for these scientists. As I often quote Tobias Revell, “Someone somewhere in a lab is playing with your future.”

There remain a few more nagging questions for me. What is so feeble about our brains that we need them to be infinitely more powerful? When is enough, enough? And, what could possibly go wrong with this scenario?

Next week.


1. Coates, J.F., 2010. The future of foresight—A US perspective. Technological Forecasting & Social Change 77, 1428–1437.

Logical succession, please.

In this blog, I wouldn’t be surprised to discover that the person I talk (or rant) about most is Ray Kurzweil. That is not all that surprising to me, since he is possibly the most visible, vociferous, and visionary proponent of the future. Let me say in advance that I have great respect for Ray. A Big Think article three years ago claimed that
“… of the 147 predictions that Kurzweil has made since the 1990’s, fully 115 of them have turned out to be correct, and another 12 have turned out to be “essentially correct” (off by a year or two), giving his predictions a stunning 86% accuracy rate.”

Last year Kurzweil predicted that
“In the 2030s… we are going to send nano-robots into the brain (via capillaries) that will provide full immersion virtual reality from within the nervous system and will connect our neocortex to the cloud. Just like how we can wirelessly expand the power of our smartphones 10,000-fold in the cloud today, we’ll be able to expand our neocortex in the cloud.”1

This prediction caught my attention as not only quite unusual but, considering that it is only 15 years away, incredibly ambitious. Since 2030 is right around the corner, I wanted to see if anyone has been able to connect to the neocortex yet. Before I could do that, however, I needed to find out what exactly the neocortex is. According to Science Daily, it is the top layer of the brain (which is made up of six layers). “It is involved in higher functions such as sensory perception, generation of motor commands, spatial reasoning, conscious thought, and in humans, language.”2 According to Kurzweil, “There is beauty, love and creativity and intelligence in the world, and it all comes from the neocortex.”3

OK, so on to how we connect. Kurzweil predicts nanobots will do this, though he doesn’t say how. Nanobots, however, are a reality. Scientists have designed nanorobotic origami, which can fold itself into shapes at the molecular level, and molecular vehicles that are drivable. Without additional detail, I can only surmise that once our nano-vehicles have assembled themselves, they will drive to the highest point and set up an antenna and, voilà, we will be linked.


Neurons of the neocortex stained with Golgi’s method – Photograph: Benjamin Bollmann

I don’t let my students get away with predictions like that, so why should Kurzweil? Predictions should engage more than just existing technologies (such as nanotech and brain mapping); they need to demonstrate plausible breadcrumbs that make such a prediction legitimate. Ray gives a great TED talk, but it still didn’t answer those questions. I’m a big believer that technological convergence can foster all kinds of unpredictable possibilities, but the fact that scientists are working on a dozen different technological breakthroughs in nanoscience, bioengineering, genetics, and even mapping the connections of the neocortex4 doesn’t explain how we will tap into it or transmit it.

If anyone has a theory on this, please join the discussion.

1. http://bigthink.com/endless-innovation/why-ray-kurzweils-predictions-are-right-86-of-the-time
2. http://www.sciencedaily.com/terms/neocortex.htm
3. http://www.dailymail.co.uk/sciencetech/article-3257517/Human-2-0-Nanobot-implants-soon-connect-brains-internet-make-super-intelligent-scientist-claims.html#ixzz3xtrHUFKP
4. http://www.neuroscienceblueprint.nih.gov/connectome/

Photo from: http://connectomethebook.com/?portfolio=neurons-of-the-neocortex


A paralyzing electromagnetic laser: future possibility or sheer fantasy?

In episode 134, the Techman is paralyzed, lifted off the ground and thumped back to the floor. Whether it’s electrostatic, electromagnetic or superconductor electricity reduced to a hand-held device, the concept seems valid, especially 144 years from now. Part of my challenge is to make this design fiction logical by pulling threads of current research and technology to extrapolate possible futures. Mind you, it’s not a prediction, but a possibility. Here is my thinking:

Keiji’s weapon assumes that at least four technologies come together sometime in the next 14 decades. Safe bet? To start with, the beam has to penetrate the door and significantly stun the subject. This idea is not that far-fetched. Weapons like this are already on the drawing board. For instance, the military is currently working on something called laser-guided directed-energy weapons. They work like “artificial lightning” to disable human targets. According to Defense Update,

“Laser-Induced Plasma Channel (LIPC) technology was developed by Ionatron to channel electrical energy through the air at the target. The interaction of the air and laser light at specific wavelength, causes light to break into filaments, which form a plasma channel that conducts the energy like a virtual wire. This technology can be adjusted for non-lethal or lethal use.”

The imaginative leap here is that the beam can penetrate the wall to find its target. Given the other advancements, I feel reasonably safe stretching on this one.

LIPC at work.

Next, you have to get the subject off the ground. Lifting a 200-pound human would require at least two technologies assisted by a third. First is a levitating superconductor, which uses electric current to produce magnetic forces that could counter the force of gravity. According to physics.org:

“Like frogs, humans are about two-thirds water, so if you had a big enough Bitter electromagnet, there’s no reason why a human couldn’t be levitated diamagnetically. None of the frogs that have taken part in the diamagnetic levitation experiments have experienced any adverse effects, which bodes well for any future human guinea pigs.”

The other ingredient is a highly powerful magnet. If we had a superconductor with a few decades of refinement and miniaturization, it’s conceivable that it could produce magnetic forces counter to the force of gravity.1
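
That physics.org claim can be sanity-checked with the standard condition for diamagnetic levitation: the magnetic force per unit volume has to balance gravity, which works out to B × dB/dz = μ0ρg/|χ|. A quick sketch using textbook values for water (the figures are approximations):

```python
import math

# Condition for diamagnetic levitation: magnetic force per unit volume,
# (chi / mu0) * B * dB/dz, must balance gravity, rho * g.
mu0 = 4 * math.pi * 1e-7   # vacuum permeability, T*m/A
chi = 9.0e-6               # |volume susceptibility| of water (dimensionless)
rho = 1000.0               # density of water, kg/m^3
g = 9.81                   # gravitational acceleration, m/s^2

required = mu0 * rho * g / chi
print(f"B * dB/dz needed: ~{required:,.0f} T^2/m")
# -> ~1,370 T^2/m. Today that takes a room-sized Bitter magnet (the frog
#    experiments used roughly 16 T), so the hand-held version is exactly
#    the fourteen-decade stretch the scenario assumes.
```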

The final component would be a power source small enough to fit inside the weapon yet carrying enough juice to generate the plasma and magnetic field for at least fifteen seconds. Today, you can buy a million-volt stun device on Amazon.com for around $50, and thyristor semiconductor technology could help ramp up the power surge necessary to sustain the arc. Obviously, I’m not an engineer, but if you are, please feel free to chime in.
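
As a sanity check on the battery itself, some rough numbers. Every figure below is a guess, purely for illustration, but it suggests storing the energy is the easy part; delivering it in a surge is the harder one:

```python
# Rough energy budget for the weapon's power source (all figures are
# guesses for illustration only).
ASSUMED_DRAW_W = 10_000          # guessed combined draw: plasma + magnet
DURATION_S = 15                  # the fifteen seconds in the scenario
LI_ION_WH_PER_KG = 250           # rough energy density of today's cells

energy_wh = ASSUMED_DRAW_W * DURATION_S / 3600
print(f"Energy needed: ~{energy_wh:.0f} Wh")
print(f"Battery mass at today's density: ~{energy_wh / LI_ION_WH_PER_KG * 1000:.0f} g")
# -> ~42 Wh, ~167 g: plausible storage even now; the thyristor's job,
#    releasing that power in a short surge, is the real challenge.
```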

1. http://helios.gsfc.nasa.gov/qa_gp_elm.html


What could happen.

1.  about last week

I’ll be the first to acknowledge that my blog last week was a bit depressing. However, if I thought the situation was hopeless, I wouldn’t be doing this in the first place. I believe we have to acknowledge our uncanny ability to foul things up and, as best we can, design the gates and barriers into new technology to help prevent its abuse. And even though it may seem that way sometimes, I am not a technology pessimist or a purely dystopian futurist. In truth, I’m tremendously excited about a plethora of new technologies and what they promise for the future.

2.  see the future

Also last week (by way of asiaone.com), Dr. Michio Kaku, speaking in Singapore, served up this future for the next 50 years.

“Imagine buying things just by blinking. Imagine doctors making an artificial heart for you within 20 hours. Imagine a world where garbage costs more than computer chips.”

Personally, I believe he’s too conservative. I see it happening much sooner. Kaku is one of a handful of famous futurists, and his “predictions” have a lot of science behind them. So who am I to argue with him? He’s a brilliant scientist, prolific author, and educator. Most futurists or forecasters will be the first to tell you that their futures are not predictions but rather possible futures. According to forecaster Paul Saffo, “The goal of forecasting is not to predict the future but to tell you what you need to know to take meaningful action in the present.”1

According to Saffo, “…little is certain, nothing is preordained, and what we do in the present affects how events unfold, often in significant, unexpected ways.”

Though my work is design fiction, I agree with Saffo. We both look at the future the same way. The objective behind my fictions is to jar us into thinking about the future so that it doesn’t surprise us. The more our global citizenry thinks about the future and how it may impact them, the more likely they are to get involved. At least that is my hope. It is why I look for design fictions that will break out of the academy or the gallery show and seep into popular culture. The future needs to be an inclusive conversation.

Of course, the future is a broad topic: it impacts everything and everyone. So much of what we take for granted today could be entirely different—possibly even unrecognizable—tomorrow. Food, medicine, commerce, communication, privacy, security, entertainment, transportation, education, and jobs are just a few of the enormously important areas for potentially radical change. Saffo and Kaku don’t know what the future will bring any more than I do. We just look at what it could bring. I tend to approach it from the perspective of “What could go wrong?” Others take a more balanced view, and some look only at the positives. It is these perspectives that create the dialog and debate, which is what they are supposed to do. We also have to be careful that we don’t see these opinions as fact. Ray Kurzweil sees the equivalent of 20,000 years of change packed into the 21st century. Kaku (from the article mentioned above) sees computers being relegated to the

“‘dull, dangerous and dirty’ jobs that are repetitive, such as punching in data, assembling cars and any activity involving middlemen who do not contribute insights, analyses or gossip. To be employable, he stresses, you now have to excel in two areas: common sense and pattern recognition. Professionals such as doctors, lawyers and engineers who make value judgments will continue to thrive, as will gardeners, policemen, construction workers and garbage collectors.”

Looks like Michio and I disagree again. The whole idea behind artificial intelligence is in the area of predictive algorithms that use big data to learn. Machine learning programs detect patterns in data and adjust program actions accordingly.2 Diagnosing illnesses, advising humans on potential behaviors, analyzing soil and site conditions and limitations, or even collecting trash is well within the realm of artificial intelligence. I see these jobs as every bit as vulnerable as those of assembly-line workers.
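
That quoted definition fits in a few lines of code. A toy sketch of my own, with made-up readings (real diagnostic systems are enormously more complex, but the detect-pattern-then-act loop is the same):

```python
# The quoted definition in miniature: detect a pattern in labeled data,
# then adjust the program's action for a new case. A one-nearest-neighbor
# "diagnosis" over made-up (temperature C, heart rate) readings.
CASES = [((37.0, 70), "healthy"), ((39.5, 110), "infection"),
         ((36.8, 65), "healthy"), ((40.1, 120), "infection")]

def diagnose(reading):
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    _, label = min(CASES, key=lambda case: dist(case[0], reading))
    return label

print(diagnose((39.9, 118)))   # -> 'infection'
```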

That, of course, is all part of the discussion that we need to have.


1. Harvard Business Review | July–August 2007 | hbr.org
2. http://www.machinelearningalgorithms.com