
Disruption. Part 2.


Last week I discussed the idea of technological disruption. Essentially, these are innovations that make fundamental changes in the way we work or live. In turn, those changes affect culture and behavior. Issues of design and culture are the stuff that interests me and drives my research: how easily and quickly our practices change as a result of the way we enfold technology. The advent of the railroad, mass-produced automobiles, radio, then television, the Internet, and the smartphone all qualify as disruptions.

Today, technology advances more quickly. Technological development was never linear, but because most of the tech advances of the last century sat at the bottom of the exponential curve, we didn’t notice them. New technologies under development right now are going to be realized more quickly (especially the ones with big funding), and because of convergence (the intermixing of unrelated technologies), their consequences will be less predictable.

One of my favorite futurists is Amy Webb whom I have written about before. In her most recent newsletter, Amy reminds us that the Internet was clunky and vague long before it was disruptive. She states,

“However, our modern Internet was being built without the benefit of some vital voices: journalists, ethicists, economists, philosophers, social scientists. These outside voices would have undoubtedly warned of the probable rise of botnets, Internet trolls and Twitter diplomacy––would the architects of our modern internet have done anything differently if they’d confronted those scenarios?”

Amy inadvertently left out the design profession, though I’m sure she will reconsider after we chat. Indeed, the design profession is a key contributor to transformative tech, and design thinkers, along with the ethicists and economists, can help to visualize and reframe future visions.

Amy thinks that the next transformation will be our voice,

“From here forward, you can be expected to talk to machines for the rest of your life.”

Amy is referring to technologies like Alexa, Siri, Google, Cortana, and something coming soon called Bixby. The voices of these technologies are, of course, only the window dressing for artificial intelligence. But she astutely points out that,

“…we also know from our existing research that humans have a few bad habits. We continue to encode bias into our algorithms. And we like to talk smack to our machines. These machines are being trained not just to listen to us, but to learn from what we’re telling them.”

Such a merger might just be the mix of any technology (name one) with human nature or the human condition: AI meets Mike who lives across the hall. AI becoming acquainted with Mike may have been inevitable, but the fact that Mike happens to be a jerk was less predictable, and so the outcome is less predictable too. The most significant disruptions of the future are going to come from the convergence of seemingly unrelated technologies. Sometimes innovation depends on convergence, like building an artificial human that will have to master a lot of different functions. Other times, convergence is accidental or at least unplanned. The engineers over at Boston Dynamics who are building those intimidating walking robots are focused on a narrower set of criteria than someone creating an artificial human. Perhaps power and agility are their primary concerns. Then, in another lab, there are technologists working on voice stress analysis, and in another setting, researchers are looking to create an AI that can choose your wardrobe. Somewhere else we are working on facial recognition, Augmented Reality, Virtual Reality, bio-engineering, medical procedures, autonomous vehicles, or autonomous weapons. So it’s a lot like Harry meets Sally: you’re not sure what you’re going to get or how it’s going to work.

Digital visionary Kevin Kelly thinks that AI will be at the core of the next industrial revolution. Place the prefix “smart” in front of anything, and you have a new application for AI: a smart car, a smart house, a smart pump. These seem like universally useful additions, so far. But now let’s add the same prefix to the jobs you and I do, like a doctor, lawyer, judge, designer, teacher, or policeman. (Here’s a possible use for that ominous walking robot.) And what happens when AI writes better code than coders and decides to rewrite itself?

Hopefully, you’re getting the picture. All of this underscores Amy Webb’s earlier concerns. The ‘journalists, ethicists, economists, philosophers, social scientists’ and designers are rarely in the labs where the future is taking place. Should we be doing something fundamentally differently in our plans for innovative futures?

Side note: Convergence can happen in a lot of ways. The parent corporation of Boston Dynamics is X. I’ll use Wikipedia’s definition of X: “X, an American semi-secret research-and-development facility founded by Google in January 2010 as Google X, operates as a subsidiary of Alphabet Inc.”


Disruption. Part 1.


We often associate the term disruption with a snag in our phone, internet, or other infrastructure service, but there is a larger sense of the expression. Technological disruption refers to the phenomenon that occurs when innovation “…significantly alters the way that businesses operate. A disruptive technology may force companies to alter the way that they approach their business, risk losing market share or risk becoming irrelevant.”1

Some track the idea as far back as Karl Marx, who influenced economist Joseph Schumpeter to coin the term “creative destruction” in 1942.2 Schumpeter described it as the “process of industrial mutation that incessantly revolutionizes the economic structure from within, incessantly destroying the old one, incessantly creating a new one.” But it was Clayton M. Christensen, a Harvard Business School professor, who described its current framework: “…a disruptive technology is a new emerging technology that unexpectedly displaces an established one.”3

OK, so much for the history lesson. How does this affect us? Historical examples of technological disruption go back to the railroads and the mass-produced automobile, technologies that changed the world. Today we can point to the Internet as possibly this century’s most transformative technology to date. However, we can’t ignore the smartphone, barely ten years old, which has brought together a host of converging technologies, substantially eliminating the need for the calculator, the dictaphone, landlines, the GPS box that you used to put on your dashboard, still and video cameras, and possibly your privacy. With the proliferation of apps within the smartphone platform, there are hundreds if not thousands of other “services” that now do work that we had previously done by other means.

But hold on to your hat. Technological disruption is just getting started. For the next round, we will see an increasingly pervasive Internet of Things (IoT), advanced robotics, exponential growth in Artificial Intelligence (AI) and machine learning, ubiquitous Augmented Reality (AR), Virtual Reality (VR), blockchain systems, precise genetic engineering, and advanced renewable energy systems. Some of these, such as blockchain systems, will have potentially cataclysmic effects on business. Widespread adoption of blockchain systems that enable digital money would eliminate the need for banks, credit card companies, and currency of all forms. How’s that for disruptive? Other innovations will just continue to transform us and our behaviors. Over the next few weeks, I will discuss some of these potential disruptions and their unique characteristics.

Do you have any you would like to add?

1 http://www.investopedia.com/terms/d/disruptive-technology.asp#ixzz4ZKwSDIbm

2 http://www.investopedia.com/terms/c/creativedestruction.asp

3 http://www.intelligenthq.com/technology/12-disruptive-technologies/

See also: Disruptive technologies: Catching the wave, Journal of Product Innovation Management, Volume 13, Issue 1, 1996, Pages 75-76, ISSN 0737-6782, http://dx.doi.org/10.1016/0737-6782(96)81091-5.


Thought leaders and followers.


Next week, the World Future Society is having its annual conference. As a member, I really should be going, but I can’t make it this year. The future is a dicey place. There are people convinced that we can create a utopia, some are warning of dystopia, and the rest are settled somewhere in between. Based on promotional emails that I have received, one of the topics is “The Future of Evolution and Human Nature.” According to the promo,

“The mixed emotions and cognitive dissonance that occur inside each of us also scale upward into our social fabric: implicit bias against new perspectives, disdain for people who represent “other”, the fear of a new world that is not the same as it has always been, and the hopelessness that we cannot solve our problems. We know from experience that this negativity, hatred, fear, and hopelessness is not what it seems like on the surface: it is a reaction to change. And indeed we are experiencing a period of profound change. There is a larger story of our evolution that extends well beyond the negativity and despair that feels so real to us today. It’s a story of redefining and building infrastructure around trust, hope and empathy. It’s a story of accelerating human imagination and leveraging it to create new and wondrous things.

It is a story of technological magic that will free us from scarcity and ensure a prosperous lifestyle for everyone, regardless of where they come from.”

Whoa. I have to admit, this kind of talk makes me uncomfortable. Are negativity, hatred, and fear of a new world merely reactions to change? Will technosocial magic solve all our problems? This type of rhetoric sounds more like a movement than a conference that examines differing views on an important topic. It would seem to frame caution as fear and negativity, and then throw in that hyperbole, hatred. Does it sound like the beginning of an agenda with a framework that characterizes those who disagree as haters? I think it does. It’s a popular tactic.

These views do not by any means reflect the opinions of the entire WFS membership, but there is a significant contingent, such as the folks from Humanity+, who hold the belief that we can fix human evolution—even human nature—with technology. For me, this is treading into thorny territory.

What is human nature? Merriam-Webster online provides this definition:

“[…]the nature of humans; especially: the fundamental dispositions and traits of humans.” Presumably, we include good traits and bad traits. Will our discussions center on which features to fix and which to keep or enhance? Who will decide?

What about the human condition? Can we change this? Should we? According to Wikipedia,

“The human condition is ‘the characteristics, key events, and situations which compose the essentials of human existence, such as birth, growth, emotionality, aspiration, conflict, and mortality.’ This is a very broad topic which has been and continues to be pondered and analyzed from many perspectives, including those of religion, philosophy, history, art, literature, anthropology, sociology, psychology, and biology.”

Clearly, there are a lot of different perspectives to be represented here. Do we honestly believe that technology will answer them all sufficiently? The theme of the upcoming WFS conference is “A Brighter Future IS Possible.” No doubt there will be a flurry of technosocial proposals presented there, and we should not put them aside as a bunch of fringe futurists. These voices are thought-leaders. They lead thinking. Are we thinking? Are we paying attention? If so, then it’s time to discuss and debate these issues, or others will decide without us.


Future Shock


As you no doubt have heard, Alvin Toffler died on June 27, 2016, at the age of 87. Mr. Toffler was a futurist. The book for which he is best known, Future Shock, was a best seller in 1970 and was considered required college reading at the time. In essence, Mr. Toffler said that the future would be a disorienting place if we just let it happen. He said we need to pay attention.

Credit: Susan Wood/Getty Images from The New York Times 2016

This week, The New York Times published an article entitled Why We Need to Pick Up Alvin Toffler’s Torch by Farhad Manjoo. As Manjoo observes, at one time (the 1950s, 1960s, and 1970s), the study of foresight and forecasting was important stuff that governments and corporations took seriously. Though I’m not sure I agree with Manjoo’s assessment of why that is no longer the case, I do agree that it is no longer the case.

“In many large ways, it’s almost as if we have collectively stopped planning for the future. Instead, we all just sort of bounce along in the present, caught in the headlights of a tomorrow pushed by a few large corporations and shaped by the inescapable logic of hyper-efficiency — a future heading straight for us. It’s not just future shock; we now have future blindness.”

At one time, this was required reading.

When I attended the First International Conference on Anticipation in 2015, I was pleased to discover that the blindness was not everywhere. In fact, many of the people deeply rooted in the latest innovations in science and technology, architecture, social science, medicine, and a hundred other fields are very interested in the future. They see an urgency. But most governments don’t, and I fear that most corporations, even the tech giants, are more interested in being first with the next zillion-dollar technology than in asking whether that technology is the right thing to do. Even fewer are asking what repercussions might flow from these advancements and what the ramifications of today’s decision-making are. We just don’t think that way.

I don’t believe that has to be the case. The World Future Society, for example, at its upcoming conference in Washington, DC, will address the idea of futures studies as a requirement for high school education. They ask,

“Isn’t it surprising that mainstream education offers so little teaching on foresight? Were you exposed to futures thinking when you were in high school or college? Are your children or grandchildren taught how decisions can be made using scenario planning, for example? Or take part in discussions about what alternative futures might look like? In a complex, uncertain world, what more might higher education do to promote a Futurist Mindset?”

It certainly needs to be part of design education, and it is one of the things I vigorously promote at my university.

As Manjoo sums up in his NYT article,

“Of course, the future doesn’t stop coming just because you stop planning for it. Technological change has only sped up since the 1990s. Notwithstanding questions about its impact on the economy, there seems no debate that advances in hardware, software and biomedicine have led to seismic changes in how most of the world lives and works — and will continue to do so.

Yet, without soliciting advice from a class of professionals charged with thinking systematically about the future, we risk rushing into tomorrow headlong, without a plan.”

And if that isn’t just crazy, at the very least it’s dangerous.




Logical succession, the final installment.

For the past couple of weeks, I have been discussing the idea posited by Ray Kurzweil, that we will have linked our neocortex to the Cloud by 2030. That’s less than 15 years, so I have been asking how that could come to pass with so many technological obstacles in the way. When you make a prediction of that sort, I believe you need a bit more than faith in the exponential curve of “accelerating returns.”

This week I’m not going to take issue with the enormous leap forward in nanobot technology required to accomplish such a feat. Nor am I going to question the vastly complicated tasks of connecting to the neocortex and extracting anything coherent, let alone assembling memories and consciousness and, in turn, beaming them to the Cloud. Instead, I’m going to pose the question: why would we want to do this in the first place?

According to Kurzweil, in a talk last year at Singularity University,

“We’re going to be funnier. We’re going to be sexier. We’re going to be better at expressing loving sentiment…” 1

Another brilliant futurist, and friend of Ray, Peter Diamandis includes these additional benefits:

• Brain to Brain Communication – aka Telepathy
• Instant Knowledge – download anything, complex math, how to fly a plane, or speak another language
• Access More Powerful Computing – through the Cloud
• Tap Into Any Virtual World – no visor, no controls. Your neocortex thinks you are there.
• And more, including an extended immune system, expandable and searchable memories, and “higher-order existence.”2

As Kurzweil explains,

“So as we evolve, we become closer to God. Evolution is a spiritual process. There is beauty and love and creativity and intelligence in the world — it all comes from the neocortex. So we’re going to expand the brain’s neocortex and become more godlike.”1

The future sounds quite remarkable. My issue lies with Koestler’s “ghost in the machine,” or what I call humankind’s uncanny ability to foul things up. Diamandis’ list could easily spin this way:

• Brain-to-Brain Hacking – reading others’ thoughts
• Instant Knowledge – to deceive, to steal, to subvert, or hijack.
• Access to More Powerful Computing – to gain the advantage or any of the previous list.
• Tap Into Any Virtual World – experience the criminal, the evil, the debauched and not go to jail for it.

You get the idea. Diamandis concludes, “If this future becomes reality, connected humans are going to change everything. We need to discuss the implications in order to make the right decisions now so that we are prepared for the future.”

Nevertheless, we race forward. We discovered this week that “A British researcher has received permission to use a powerful new genome-editing technique on human embryos, even though researchers throughout the world are observing a voluntary moratorium on making changes to DNA that could be passed down to subsequent generations.”3 That would be CRISPR-Cas9.

It was way back in 1968 that Stewart Brand introduced The Whole Earth Catalog with, “We are as gods and might as well get good at it.”

Which lab is working on that?


1. http://www.huffingtonpost.com/entry/ray-kurzweil-nanobots-brain-godlike_us_560555a0e4b0af3706dbe1e2
2. http://singularityhub.com/2015/10/12/ray-kurzweils-wildest-prediction-nanobots-will-plug-our-brains-into-the-web-by-the-2030s/
3. http://www.nytimes.com/2016/02/02/health/crispr-gene-editing-human-embryos-kathy-niakan-britain.html?_r=0

Logical succession, Part 2.

Last week the topic was Ray Kurzweil’s prediction that by 2030, not only would we send nanobots into our bloodstream by way of the capillaries, but they would target the neocortex, set up shop, connect to our brains and beam our thoughts and other contents into the Cloud (somewhere). Kurzweil is no crackpot. He is a brilliant scientist, inventor and futurist with an 86 percent accuracy rate on his predictions. Nevertheless, and perhaps presumptuously, I took issue with his prediction, but only because there was an absence of a logical succession. According to Coates,

“…the single most important way in which one comes to an understanding of the future, whether that is working alone, in a team, or drawing on other people… is through plausible reasoning, that is, putting together what you know to create a path leading to one or several new states or conditions, at a distance in time” (Coates 2010, p. 1436).1

Kurzweil’s argument is based heavily on his Law of Accelerating Returns, which says (essentially), “We won’t experience 100 years of progress in the 21st century; it will be more like 20,000 years of progress (at today’s rate).” The rest, in the absence of more detail, must be based on faith. Faith, perhaps, in the fact that we are making considerable progress in architecting nanobots, or that we see promising breakthroughs in mind-to-computer communication. But what seems to be missing is the connection part. Not so much connecting to the brain, but beaming the contents somewhere. Another question, why, also comes to mind, but I’ll get to that later.

There is something about all of this technological optimism that furrows my brow. A recent article in WIRED helped me to articulate this skepticism. The rather lengthy article chronicled the story of neurologist Phil Kennedy, who, like Kurzweil, believes that the day is soon approaching when we will connect or transfer our brains to other things. I can’t help but call to mind what onetime Fed chairman Alan Greenspan called “irrational exuberance.” The WIRED article tells of how Kennedy nearly lost his mind by experimenting on himself (including rogue brain surgery in Belize) to implant a host of hardware that would transmit his thoughts. This highly invasive method, the article says, is going out of style, but the promise seems to be the same for both scientists: our brains will be infinitely more powerful than they are today.

Writing in WIRED, columnist Daniel Engber makes an astute statement. During an interview with Dr. Kennedy, they attempted to watch a DVD of Kennedy’s Belize brain surgery. The DVD player and laptop choked for some reason, and only after repeated attempts were they able to view Dr. Kennedy’s naked brain undergoing surgery. Reflecting on the mundane struggles with technology that preceded the movie, Engber notes, “It seems like technology always finds new and better ways to disappoint us, even as it grows more advanced every year.”

Dr. Kennedy’s saga was all about getting thoughts into text, or even synthetic speech. Today, the invasive method of sticking electrodes into your cerebral putty has been replaced by a kind of electrode mesh that lays on top of the cortex underneath the skull. They call this less invasive. Researchers have managed to get some results from this, albeit snippets with numerous inaccuracies. They say it will be decades, and one of them points out that even Siri still gets it wrong more than 30 years after the debut of speech recognition technology.

So, then it must be Kurzweil’s exponential law that still provides near-term hope for these scientists. As I often quote Tobias Revell, “Someone somewhere in a lab is playing with your future.”

There remain a few more nagging questions for me. What is so feeble about our brains that we need them to be infinitely more powerful? When is enough, enough? And, what could possibly go wrong with this scenario?

Next week.


1. Coates, J.F., 2010. The future of foresight—A US perspective. Technological Forecasting & Social Change 77, 1428–1437.

Logical succession, please.

Of all the people I talk (or rant) about in this blog, I wouldn’t be surprised to discover that Ray Kurzweil tops the list. That is not all that surprising to me, since he is possibly the most visible, vociferous, and visionary proponent of the future. Let me say in advance that I have great respect for Ray. A Big Think article three years ago claimed that
“… of the 147 predictions that Kurzweil has made since the 1990’s, fully 115 of them have turned out to be correct, and another 12 have turned out to be “essentially correct” (off by a year or two), giving his predictions a stunning 86% accuracy rate.”

Last year Kurzweil predicted that
“In the 2030s… we are going to send nano-robots into the brain (via capillaries) that will provide full immersion virtual reality from within the nervous system and will connect our neocortex to the cloud. Just like how we can wirelessly expand the power of our smartphones 10,000-fold in the cloud today, we’ll be able to expand our neocortex in the cloud.”1

This prediction caught my attention as not only quite unusual but, considering that it is only 15 years away, incredibly ambitious. Since 2030 is right around the corner, I wanted to see if anyone has been able to connect to the neocortex yet. Before I could do that, however, I needed to find out what exactly the neocortex is. According to Science Daily, it is the top layer of the brain (which is made up of six layers). “It is involved in higher functions such as sensory perception, generation of motor commands, spatial reasoning, conscious thought, and in humans, language.”2 According to Kurzweil, “There is beauty, love and creativity and intelligence in the world, and it all comes from the neocortex.”3

OK, so on to how we connect. Kurzweil predicts nanobots will do this, though he doesn’t say how. Nanobots, however, are a reality. Scientists have designed nanorobotic origami, which can fold itself into shapes at the molecular level, and molecular vehicles that are drivable. Without additional detail, I can only surmise that once our nano-vehicles have assembled themselves, they will drive to the highest point and set up an antenna and, voilà, we will be linked.


Neurons of the neocortex stained with Golgi’s method. Photograph: Benjamin Bollmann

I don’t let my students get away with predictions like that, so why should Kurzweil? Predictions should engage more than just existing technologies (such as nanotech and brain mapping); they need to demonstrate plausible breadcrumbs that make such a prediction legitimate. Despite the fact that Ray gives a great TED talk, it still didn’t answer those questions. I’m a big believer that technological convergence can foster all kinds of unpredictable possibilities, but the fact that scientists are working on a dozen different technological breakthroughs in nanoscience, bioengineering, genetics, and even mapping the connections of the neocortex4 doesn’t explain how we will tap into it or transmit it.

If anyone has a theory on this, please join the discussion.

1. http://bigthink.com/endless-innovation/why-ray-kurzweils-predictions-are-right-86-of-the-time
2. http://www.sciencedaily.com/terms/neocortex.htm
3. http://www.dailymail.co.uk/sciencetech/article-3257517/Human-2-0-Nanobot-implants-soon-connect-brains-internet-make-super-intelligent-scientist-claims.html#ixzz3xtrHUFKP
4. http://www.neuroscienceblueprint.nih.gov/connectome/

Photo from: http://connectomethebook.com/?portfolio=neurons-of-the-neocortex


Enter the flaw.


I promised a drone update this week, but by now it is probably already old news. It is a safe bet that there are a few thousand more drones than last week. Hence, I’m going to shift to a topic that I think is moving even faster than our clogged airspace.

And now for an AI update. I’ve blogged previously about Kurzweil’s Law of Accelerating Returns, and the evidence is mounting every day that he’s probably right. The rate at which artificial intelligence is advancing is beginning to match his curve nicely. A recent article on the Txchnologist website demonstrates how an AI system called Kulitta is composing jazz, classical, new age, and eclectic mixes that are difficult to tell from human compositions. You can listen to an example here. Not bad, actually. Sophisticated AI creations like this underscore the realization that we can no longer think of robotics as clunky mechanized brutes. AI can create. Even though it studies an archive of man-made creations, the resulting work is unique.

First it learns from a corpus of existing compositions. Then it generates an abstract musical structure. Next it populates this structure with chords. Finally, it massages the structure and notes into a specific musical framework. In just a few seconds, out pops a musical piece that nobody has ever heard before.
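To make the learn-then-generate pattern above concrete, here is a deliberately tiny sketch. Note the hedge: Kulitta itself is written in Haskell and uses far more sophisticated grammar- and chord-based techniques; this toy simply learns note-to-note transitions from a corpus and random-walks them to emit something "nobody has ever heard before," and every name and melody in it is my own illustration, not Kulitta's actual code.

```python
import random

def learn_transitions(corpus):
    """Count which note tends to follow which across a corpus of melodies."""
    table = {}
    for melody in corpus:
        for a, b in zip(melody, melody[1:]):
            table.setdefault(a, []).append(b)
    return table

def generate(table, start, length, seed=0):
    """Random-walk the learned transition table to produce a new melody."""
    rng = random.Random(seed)  # seeded for repeatability
    melody = [start]
    for _ in range(length - 1):
        successors = table.get(melody[-1])
        if not successors:  # dead end: restart from the opening note
            successors = [start]
        melody.append(rng.choice(successors))
    return melody

# A hypothetical two-melody corpus, just for illustration.
corpus = [["C", "E", "G", "E", "C"], ["C", "D", "E", "G", "C"]]
table = learn_transitions(corpus)
print(generate(table, "C", 8))
```

The output stays stylistically inside the corpus (only transitions the "composer" has heard) while the exact sequence is new, which is the essence of the process the article describes, minus the abstract structure and chord-population steps.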

The creator of Kulitta, Donya Quick, says that this will not put composers out of a job; it will help them do their jobs better. She doesn’t say how, exactly.

If even trained ears can’t always tell the difference, what does that mean for the masses? When we can load the “universal composer” app onto our phone and have a symphony written for ourselves, how will this serve the interests of musicians and authors?

The article continues:

Kulitta joins a growing list of programs that can produce artistic works. Such projects have reached a critical mass–last month Dartmouth College computational scientists announced they would hold a series of contests. They have put a call out seeking artificial intelligence algorithms that produce “human-quality” short stories, sonnets and dance music. These will be pitted against compositions made by humans to see if people can tell the difference.

The larger question to me is this: “When it all sounds wonderful or reads like poetry, will it make any difference to us who created it?”

Sadly, I think not. The sweat and blood that composers and artists pour into their work could be a thing of the past. If we see this in the fine arts, then it seems an inevitable consequence for design as well. Once the AI learns the behaviors and personalities of the characters in The Lightstream Chronicles, it can create new episodes without me. Taking characters and settings that already exist as CG constructs, it’s not a stretch that it will be able to generate the wireframes, render the images, and lay out the panels.

Would this app help me in my work? It could probably do it in a fraction of the time that it would take me, but could I honestly say it’s mine?

When art and music are all so easily reconstructed and perfect, I wonder if we will miss the flaw. Will we miss that human scratch on the surface of perfection, the thing that reminds us that we are human?

There is probably an algorithm for that, too. Just go to settings > humanness and use the slider.


The killer feature for every app.

I have often asked the question: If we could visit the future “in person,” how would it affect us upon our return? How vigorously would we engage our redefined present? Part of the idea behind design fiction, for me, is making the future seem real enough that we want to discuss it and ask ourselves whether this is the future we want. If not, what can we do about it? How might it be changed, refined, or avoided altogether? In a more pessimistic light, I also wonder whether anything could be real enough to rouse us from our media-induced stupor. And the potion is getting stronger.

After Monday and Tuesday this week, I was beginning to think it would be a slow news week in the future-tech sector. Not so. (At least I didn’t stumble onto these until Wednesday.)

1. Be afraid.

A scary new novel is out called Ghost Fleet. It sounds immensely entertaining, but also ominously possible. It harkens back to some of my previous blogs on autonomous weapons and the harbinger of ubiquitous hacking. How am I going to get time to read this? That’s another issue.

2. Play it again.

Google applied for this years ago, but their patent on storing “memories” was approved this week. It appears as though it would have been a feature for the ill-fated Google Glass but could easily be embedded in any visual recording function from networked cameras to a user’s contact lens. Essentially it lets you “play-back” whatever you saw, assuming you are wearing or integrating the appropriate recording device, or software. “Siri, replay my vacation!” I must admit it sounds cool.

Ghost Fleet, Google memories, uber hacking, and Thync.

3. Hack-a-mania.

How’s this for a teaser? RESEARCHERS HACKED THE BRAKES OF A CORVETTE WITH TEXT MESSAGES. That’s what Fast Company threw out there on Wednesday, but it originated with WIRED magazine. It’s the latest since the Jeep-jacking incident just weeks ago. See how fast technology moves? In that episode, the hackers (or jackers, whatever) used their laptops to control just about every technology the Jeep had available. However, according to WIRED,

“…a new piece of research suggests there may be an even easier way for hackers to wirelessly access those critical driving functions: Through an entire industry of potentially insecure, internet-enabled gadgets plugged directly into cars’ most sensitive guts.”

In this instance,

“A 2-inch-square gadget that’s designed to be plugged into cars’ and trucks’ dashboards and used by insurance firms and trucking fleets to monitor vehicles’ location, speed and efficiency.”

The article clearly demonstrates that these devices are vulnerable to attack, even in government vehicles and, I presume, the White House limo as well. You guys better get to work on that.

4. Think about this.

A new $300 device called Thync is now available to stick on your forehead to either relax or energize you through neurosignaling, AKA electricity, that zaps your brain “safely.” It’s not unrelated to the less sexy shock therapy of ages past. Reports suggest that this technology is anything but fully figured out, but just like the items above, it’s only a matter of time until it escalates to the next level.

So what ties all these together? If we look at the historical track of technology, the overarching theme is convergence. Things that once were separate have now converged: movies, texts, phone calls, games, GPS, bar-code scanning, cameras, and about a thousand other technologies have all converged into your phone, laptop, or tablet. It is a safe bet that this trend will continue, with devices getting smaller and eventually implanted. Isn’t technology wonderful?

The only problem is that we have yet to figure out the security issues. Do we, for one moment, think that hacking will go away? We rush new apps and devices to market with a “We’ll fix that later” mentality. It’s just a matter of time until your energy, mood, “memories,” or national security are up for grabs. Security ought to be on the feature list of every new gadget, especially the ones that access our bodies, our safety, or our information. That’s pretty much everything, by the way. The point is especially important because, let’s face it, everything we think is secure, isn’t.


The robo-apocalypse. Part 1.

Talk of robot takeovers is all the rage right now.

I’m good with this because the evidence is out there that robots will continue to get smarter and smarter, but the human condition being what it is, we will continue to do stupid s**t. Here are some examples from the news this week.

1. The BBC reported this week that South Korea has deployed something called The Super aEgis II, a 50-caliber robotic machine gun that knows who is an enemy and who isn’t. At least that’s the plan. The company that built and sells the Super aEgis is DoDAAM. Maybe that is short for do damage. The BBC astutely notes,

“Science fiction writer Isaac Asimov’s First Law of Robotics, that ‘a robot may not injure a human being or, through inaction, allow a human being to come to harm’, looks like it will soon be broken.”

Asimov was more than a great science-fiction writer; he was a Class A futurist. He clearly saw the potential for us to create robots smarter and more powerful than we are, and he figured there should be some rules. Asimov used the kind of foresight that responsible scientists, technologists, and designers should be using for everything we create. As the article continues, Simon Parkin of the BBC quotes Yangchan Song, DoDAAM’s managing director of strategy planning.

“Automated weapons will be the future. We were right. The evolution has been quick. We’ve already moved from remote control combat devices, to what we are approaching now: smart devices that are able to make their own decisions.”

Or in the words of songwriter Donald Fagen,

“A just machine to make big decisions
Programmed by fellows with compassion and vision…”1

Relax. The world is full of these fellows. Right now the weapon/robot is linked to a human who gives the OK to fire, and all customers who have purchased the 30 units sold thus far have opted for the human/robot interface. But the company admits,

“If someone came to us wanting a turret that did not have the current safeguards we would, of course, advise them otherwise, and highlight the potential issues,” says Park. “But they will ultimately decide what they want. And we develop to customer specification.”

A 50 caliber round. Heavy damage.

They are currently working on the technology that will help their machine make the right decision on its own, but the article cites several academics and researchers who see red flags waving. Most concur that teaching a robot right from wrong is no easy task. The complexity is compounded because the fellows doing the programming don’t always agree on these issues.

Last week I wrote about Google’s self-driving car. Of course, this robot has to make tough decisions too. It may one day have to decide whether to hit the suddenly appearing baby carriage, the kid on the bike, or just crash the vehicle. In fact, Parkin’s article brings Google into the picture as well, quoting Colin Allen,

“Google admits that one of the hardest problems for their programming is how an automated car should behave at a four-way stop sign…”

Humans don’t do such a good job at that either. And therein lies my problem with all of this: if the humans who are programming these machines are still wrestling with what is ethically right or wrong, can a robot be expected to do better? Some think so. Over at DoDAAM,

“Ultimately, we would probably like a machine with a very sound basis to be able to learn for itself, and maybe even exceed our abilities to reason morally.”

Based on what?

Next week: Drones with pistols.


1. Donald Fagen, “I.G.Y.,” from the album The Nightfly, 1982.