Tag Archives: systems

Heady stuff.

Last week we talked about how some researchers and scientists on the cutting edge are devising guidelines to try to ensure that potentially transformative technologies (like AI) remain safe and beneficial, rather than becoming a threat to humanity. And then there are industries (like nanotech) that have already blown past any attempt at a meaningful review: their materials now exist in thousands of consumer products, nobody knows whether they're safe, and the companies that use them don't even have to tell us they're part of the composition.

This week I'm going to talk about why I look askance at transformative technologies. Maybe it is because I am a writer at heart. Fiction, specifically science fiction, has captured my attention since childhood; it is my genre of choice. Now that nearly all of the science-based science fiction is no longer fiction, our tendency is to think that the only thing left to do is react or adapt. I can understand this, since you can't isolate a single technology as a thing: you can't identify precisely where it started or how it morphed into what it is. Technologies converge, they become systems, and systems are dauntingly complex. As humans, we create things that become systems. Even in non-digital times, the railroad ushered in a system so vastly complex that we had to invent other things just to deal with it, like standardized clock time. What good was a train if it wasn't on time? And what good was your time if it wasn't the same as my time?

Fast forward. Does the clock have any behavioral effect on your life?

My oft-quoted scholars at ASU, Allenby and Sarewitz, see things like trains as level one technologies. They spawn systems in the level two realm that are often far more intricate than figuring out how to get this train contraption to run on rails across the United States.

So the nature of convergence and the resulting complexity of systems is one reason for my wariness of transformative tech, especially now that we are building things without understanding how they work. We are inventing things that don't need us to teach them, which means we can't be sure what they are learning, or how. If we can barely understand the complexity of the system that has grown up around the airline industry (which we at one time inherently grasped), how are we going to understand the systems that spring up around these inventions when, at the core, we know what they do but not how?

The second reason is human nature. Your basic web dictionary defines the sociology of human nature as: "[…]the character of human conduct, generally regarded as produced by living in primary groups." An appreciation of things like love and compassion, music and art, consciousness, thought, language, and memory is characteristic of human nature. So are evil and vice, violence and hatred, greed, and the quest for power. The latter have a tendency to undermine our inventions for good. Sometimes they are our downfall.

With history as our teacher, if we go blindly forward paying little attention to reason one (the complexity of systems), to reason two (the potential for bad actors), or to both, that does not bode well.

I've been rambling a bit, so I have to wrap this up. I've taken the long way around to say that if you are among those who look at all this tech and the unimaginable scope of the systems we have created, and conclude that the only thing left to do is react or adapt, that is not the case.

I may see the dark cloud behind every silver lining, but that enables me to bring an umbrella on occasion.

Paying attention to the seemingly benign and insisting on a meaningful review of that which we don’t fully understand is the first step. It may seem as though it will be easier to adapt, but I don’t think so.

I guess that’s the reason behind this blog, behind my graphic novel, and my ongoing research and activism through design fiction. If you’re not paying attention, then I’ll remind you.


Future proof.


There is no such thing as future-proof anything, of course, so I use the term to refer to evidence that a current idea is becoming more and more probable as something we will see in the future. The evidence I am talking about surfaced in a FastCo article this week about biohacking and the new frontier of digital implants. Biohacking has a loose definition: it can refer to using genetic material without regard for ethical procedures, to DIY biology, to pseudo-bioluminescent tattoos, to body modification for functional enhancement (see transhumanism). Last year, my students investigated this and determined that a society willing to accept internal implants was not a near-future scenario. Nevertheless, according to FastCo author Steven Melendez,

“a survey released by Visa last year that found that 25% of Australians are ‘at least slightly interested’ in paying for purchases through a chip implanted in their bodies.”

Melendez goes on to describe a wide variety of implants already in use for medical, artistic, and personal-efficiency purposes, and interviews Tim Shank, president of a futurist group called TwinCities+. Shank says,

“[For] people with Android phones, I can just tap their phone with my hand, right over the chip, and it will send that information to their phone.”

Amal Graafstra’s hands [Photo: courtesy of Amal Graafstra, via WIRED]
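Implants like the one Shank describes are typically passive NFC tags, the same technology as contactless cards. Purely as an illustration (none of this comes from the article), here is a minimal sketch using the open-source nfcpy Python library to read the data off such a tag with a USB reader; a phone would do the equivalent through its platform NFC APIs.

```python
# A minimal sketch, assuming a USB NFC reader and the open-source
# nfcpy library (pip install nfcpy). Implanted chips of this kind are
# typically passive NFC tags carrying a short NDEF message.
import nfc

def on_connect(tag):
    # Fires when a tag (or a chip in someone's hand) touches the reader.
    if tag.ndef is not None:
        for record in tag.ndef.records:
            print(record)  # e.g., a contact card or URL stored on the chip
    else:
        print("Tag present, but it carries no NDEF data.")
    return True  # hold the connection until the tag is removed

with nfc.ContactlessFrontend('usb') as clf:
    clf.connect(rdwr={'on-connect': on_connect})
```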
The popularity of body piercings and tattoos, also once considered invasive procedures, has skyrocketed. Implantable technology, especially as it becomes more functionally relevant, could follow a similar curve.

I saw this coming some years ago when writing The Lightstream Chronicles. The story, as many of you know, takes place in the far future, where implantable technology is mundane and part of everyday life. People regulate their body chemistry, access the Lightstream (the evolved Internet), and make “calls” using Luminous Implants embedded in their fingertips. These future implants talk directly to implants in the brain and other systemic body centers to make adjustments or provide information.

An ad for Luminous Implants, and the “tap” numbers for local attractions.

When the stakes are low, mistakes are beneficial. In more weighty pursuits, not so much.


I'm from the old school. I suppose that sentence alone makes me seem like a codger. Let's call it the eighties. Part of the art of problem solving was to work toward a solution and get it as tight as we possibly could before we committed to implementation. It was called the design process, and today it's called "design thinking." So it was heresy to me when I found myself, some years ago now, in a high-tech corporation where this doctrine was ignored. I recall a top-secret new-product meeting in which the owner and chief technology officer said, "We're going to make some mistakes on this, so let's hurry up and make them." He was not speaking about iterative design, which is part and parcel of the design process; he was talking about going to market with the product and letting the users illuminate what we should fix. Of course, the product was safe and met all the legal standards, but it was far from polished. The idea was that mass consumer trial-by-fire would provide us with an exponentially higher data return than if we tested all the possible permutations in a lab at headquarters. He was, apparently, ahead of his time.

In a recent FastCo article on Facebook’s race to be the leader in AI, author Daniel Terdiman cites some of Mark Zuckerberg’s mantras: “‘Move fast and break things,’ or ‘Done is better than perfect.’” We can debate this philosophically or maybe even ethically, but it is clearly today’s standard procedure for new technologies, new science and the incessant race to be first. Here is a quote from that article:

“Artificial intelligence has become a vital part of scaling Facebook. It’s already being used to recognize the faces of your friends in photographs, and curate your newsfeed. DeepText, an engine for reading text that was unveiled last week, can understand “with near-human accuracy” the content in thousands of posts per second, in more than 20 different languages. Soon, the text will be translated into a dozen different languages, automatically. Facebook is working on recognizing your voice and identifying people inside of videos so that you can fast forward to the moment when your friend walks into view.”
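DeepText itself isn't public, but Facebook's open-source fastText library gives a feel for the same class of multilingual text processing. A minimal sketch, assuming you've installed fastText (pip install fasttext) and downloaded its pretrained language-identification model, lid.176.ftz, from fasttext.cc:

```python
# A minimal sketch, not DeepText: fastText is Facebook's open-source
# text-classification library, and lid.176.ftz is its pretrained model
# that identifies 176 languages (downloaded from fasttext.cc).
import fasttext

model = fasttext.load_model("lid.176.ftz")

posts = [
    "Move fast and break things",
    "Hecho es mejor que perfecto",  # "Done is better than perfect"
]
for post in posts:
    labels, scores = model.predict(post)
    lang = labels[0].replace("__label__", "")  # e.g., 'en' or 'es'
    print(f"{post!r} -> {lang} ({float(scores[0]):.2f})")
```

The same predict-a-label API is what you would use for fastText classifiers trained on your own categories, which is roughly the shape of task the quote describes, at vastly smaller scale.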

The story goes on to say that Facebook, though it is pouring tons of money into AI, is behind the curve, having begun only three or so years ago. Aside from the fact that FB's accomplishments seem fairly impressive (at least to me), companies like Google and Microsoft are apparently way ahead. In the case of Microsoft, the effort began more than twenty years ago.

Today, the hurry-up is accelerated by open sourcing. Wikipedia explains the benefits of open sourcing as:

“The open-source model, or collaborative development from multiple independent sources, generates an increasingly more diverse scope of design perspective than any one company is capable of developing and sustaining long term.”

The idea behind open sourcing is that the mistakes will happen even faster, along with the advancements. It is becoming the de facto approach to breakthrough technologies. If fast is the primary, maybe even the only, goal, it is a smart strategy. Or is it a touch shortsighted? As we know, not everyone who can play with the code that a company has given them has that company's best interests in mind. As for the best interests of society, I'm not sure those are even on the list.

To examine our motivations and the ripples that emanate from them is, of course, my mission with design fiction and speculative futures. Whether we like it or not, a by-product of technological development, aside from utopia, is human behavior. There are repercussions from the things we make and the systems that evolve from them. When your mantra is "Move fast and break things," that's what you'll get. But there is certainly no time in the move-fast loop to consider the repercussions of your actions, or the unexpected consequences. Consequences will appear all by themselves.

The technologists tell us that when we reach the holy grail of AI (whatever that is), we will be better people and solve the world's most challenging problems. But in reality, it's not that simple. With the nuances of AI come potential problems, or mistakes, that could be difficult to fix: new predicaments that humans might not be able to solve and that AI may not be inclined to resolve on our behalf.

In the rush to make mistakes, how grave will they be? And, who is responsible?
