Tag Archives: systems

Because we can.


It has happened to me more than once. I come up with what I think is a brilliant and seemingly original idea, do some preliminary research to make sure there aren’t already a hundred other ideas (at least published ones) just like it, and then I set to work sketching it out. Then (it could be a matter of days or weeks) BAM, there is my idea fully fleshed out, rendered, and published—by someone else. I usually end up kicking myself for not having thought of it sooner, or at least for not bringing it to fruition faster. The reality, however, is that for that fully rendered version to get published, the creators would have had to come up with the idea before me. Perhaps this amplifies the notion that there are no original ideas left in the world. Or, as an old friend used to argue, these concepts are floating around in a kind of ever-changing, cosmic psychosphere from which creative minds serendipitously siphon their ideas. So of course we’re going to have the same thoughts; we drink the same water. I think it’s probably the former.

Using this as a backdrop, however, I examine the idea of the so-called white hat hacker. There are hackers out there (good guys, reportedly) who are always looking for new possible threats and vulnerabilities in the world of code, systems, software, and platforms. Sometimes their pursuits begin as purely imaginary “What if?” scenarios; then they roll up their sleeves to see if they can infect or penetrate the system or software in question. Then, in their benevolence, they share it with the world to make code and systems safer for all of us. Hmm. Okay, I’ll play along.

Recently, a team like this encoded some malware into physical strands of DNA. Huh? The story was reported by WIRED’s (man, I wish they’d stick to technology reporting) Andy Greenberg last week. Because DNA can maintain its structure for hundreds of years or more, you could theoretically store data within its indelible strands. (Remember the mosquitoes frozen in amber in Jurassic Park?) And even though DNA is electron-microscope-small, it is still a physical thing, full of code all its own. So a University of Washington computer science professor decided to slip some malware code into a strand of physical DNA, so that when the strand is sequenced and its code uploaded, so to speak, the malware is in the system.

“We know that if an adversary has control over the data a computer is processing, it can potentially take over that computer,” says Tadayoshi Kohno, the University of Washington computer science professor who led the project, comparing the technique to traditional hacker attacks that package malicious code in web pages or an email attachment. In this case, it is “…the information stored in the DNA they’re sequencing.”
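For what it’s worth, the storage half of the trick is easy to picture. DNA has a four-letter alphabet, so each base can carry two bits, and any string of bytes, benign or malicious, can be written out as a strand of A, C, G, and T. Here is a minimal Python sketch of that idea; the mapping and the function names are my own illustration, not the UW team’s actual scheme, which, per Greenberg’s reporting, delivered bytes that triggered a buffer-overflow flaw the researchers had built into a DNA-processing program:

```python
# Illustrative only: two bits per base, so four bases encode one byte.
# Real DNA-storage schemes add error correction and avoid long runs of
# a single base; this sketch skips all of that.

BASE_FOR_BITS = {0b00: "A", 0b01: "C", 0b10: "G", 0b11: "T"}
BITS_FOR_BASE = {base: bits for bits, base in BASE_FOR_BITS.items()}

def bytes_to_dna(data: bytes) -> str:
    """Encode raw bytes as a DNA sequence, four bases per byte."""
    bases = []
    for byte in data:
        for shift in (6, 4, 2, 0):  # most-significant bit pair first
            bases.append(BASE_FOR_BITS[(byte >> shift) & 0b11])
    return "".join(bases)

def dna_to_bytes(strand: str) -> bytes:
    """Decode a sequenced strand back into bytes (the 'upload' step)."""
    out = bytearray()
    for i in range(0, len(strand), 4):
        byte = 0
        for base in strand[i:i + 4]:
            byte = (byte << 2) | BITS_FOR_BASE[base]
        out.append(byte)
    return bytes(out)

payload = b"\xde\xad\xbe\xef"  # stand-in for whatever bytes an attacker wants delivered
strand = bytes_to_dna(payload)
assert dna_to_bytes(strand) == payload
print(strand)  # TCTGGGTCGTTGTGTT
```

The unnerving part is the last step: once the strand is sequenced, those bases come back as ordinary data inside the sequencing pipeline, and if the software reading them is buggy enough, data becomes code.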

I don’t know. I’m not sure hackers should be messing with this stuff.

PHOTO: wallpaperup


So hacking into some DNA sequencing software gets you what? There is apparently the opportunity (if you make rival DNA sequencing software) to steal some intellectual property; a malcontent could screw with somebody’s DNA analysis; you could plant some malware in your GMO tomatoes to keep prying eyes off your secret formula. But these sound like remote scenarios at best.

“Regardless of any practical reason for the research, however, the notion of building a computer attack—known as an “exploit”—with nothing but the information stored in a strand of DNA represented an epic hacker challenge for the University of Washington team.” [emphasis mine]

Here’s an ethical conundrum for me: no practical reason for the research. Do these guys have too much time on their hands (and too much funding)? Are they genuinely hoping to do some good? Or are they doing stuff like this because they can, and if it happens to open a can of worms in the process, well, at least they can publish a paper on it? Or maybe it’s just an epic hacker challenge.

So, as radically out there as all this tinkering is, it is safe to say (back to my original point) that someone else has thought of it, or is thinking of it, too. Could someone CRISPR a slice of DNA malware into the human genome to screw with someone’s pacemaker? Or could it just linger and wreak havoc at some later date? Maybe I’m not smart enough to think of all the horrific or diabolical downsides, but after all, it is DNA. I can only imagine that, in light of this new research, someone will come up with one. Therein lies the dilemma.

For me, if you’re tinkering with DNA and you haven’t thought about the diabolical downsides, you’re as reckless as a couple of kids skateboarding through speeding traffic. Someone’s going to get hurt. And there’s that word again. Reckless. Why is research money going toward things that have no practical reason behind them? Maybe so that someone not so kind will come up with one.

Reckless.

Harmless? What do you think?

https://www.wired.com/story/malware-dna-hack/

Heady stuff.

Last week we talked about how some researchers and scientists on the cutting edge are devising guidelines to attempt to ensure that potentially transformative technologies (like AI) remain safe and beneficial, rather than becoming a threat to humanity. And then there are industries (like nanotech) that have already blown past any attempt at a meaningful review and now exist in thousands of consumer products; nobody knows whether they’re safe, and the companies that produce them don’t even have to tell us they’re part of the composition.

This week I’m going to talk about why I look askance at transformative technologies. Maybe it is because I am a writer at heart. Fiction, specifically science fiction, has captured my attention since childhood. It is my genre of choice. Now that nearly all of the science-based science fiction is no longer fiction, our tendency is to think that the only thing left to do is react or adapt. I can understand this, since you can’t isolate a single technology as a thing; you can’t identify precisely where it started or how it morphed into what it is. Technologies converge, they become systems, and systems are dauntingly complex. As humans, we create things that become systems. Even in non-digital times, the railroad ushered in a system so vastly complex that we had to invent other things just to deal with it, like standardized clock time. What good was a train if it wasn’t on time? And what good was your time if it wasn’t the same as my time?

Fast forward. Does the clock have any behavioral effect on your life?

My oft-quoted scholars at ASU, Allenby and Sarewitz, see things like trains as level one technologies. They spawn systems in the level two realm that are often far more intricate than figuring out how to get this train contraption to run on rails across the United States.

So the nature of convergence and the resulting complexity of systems is one reason for my wariness of transformative tech. Especially now that we are building things whose workings we don’t understand. We are inventing things that don’t need us to teach them, and that means we can’t be sure what they are learning or how. If we can barely understand the complexity of the system that has grown up around the airline industry (which we at one time inherently grasped), how are we going to understand the systems that spring up around inventions where, at the core, we know what they do but not how they do it?

The second reason is human nature. Your basic web dictionary defines the sociology of human nature as “[…]the character of human conduct, generally regarded as produced by living in primary groups.” Things like love and compassion, music and art, consciousness, thought, language, and memory are characteristics of human nature. So are evil and vice, violence and hatred, the quest for power, and greed. The latter have a tendency to undermine our inventions for good. Sometimes they are our downfall.

With history as our teacher, going blindly forward while paying little attention to reason one (the complexity of systems), reason two (the potential for bad actors), or both does not bode well.

I’ve been rambling a bit, so I have to wrap this up. I’ve taken the long way around to say that if you are among those who look at all this tech and the unimaginable scope of the systems we have created, and conclude that the only thing left to do is react or adapt, that is not the case.

While I can see the dark cloud behind every silver lining, that also means I know to bring an umbrella on occasion.

Paying attention to the seemingly benign and insisting on a meaningful review of that which we don’t fully understand is the first step. It may seem as though it will be easier to adapt, but I don’t think so.

I guess that’s the reason behind this blog, behind my graphic novel, and my ongoing research and activism through design fiction. If you’re not paying attention, then I’ll remind you.


Future proof.


There is no such thing as future proof anything, of course, so I use the term to refer to evidence that a current idea is looking more and more like something we will see in the future. The evidence I am talking about surfaced in a FastCo article this week about biohacking and the new frontier of digital implants. Biohacking has a loose definition; it can refer to anything from using genetic material without regard to ethical procedures, to DIY biology, to pseudo-bioluminescent tattoos, to body modification for functional enhancement (see transhumanism). Last year, my students investigated this and determined that a society willing to accept internal implants was not a near-future scenario. Nevertheless, FastCo author Steven Melendez points to

“a survey released by Visa last year that found that 25% of Australians are ‘at least slightly interested’ in paying for purchases through a chip implanted in their bodies.”

Melendez goes on to describe a wide variety of implants already in use for medical, artistic, and personal-efficiency purposes, and he interviews Tim Shank, president of a futurist group called TwinCities+. Shank says,

“[For] people with Android phones, I can just tap their phone with my hand, right over the chip, and it will send that information to their phone.”

Amal Graafstra’s Hands [Photo: courtesy of Amal Graafstra] c/o WIRED
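Shank’s party trick is less exotic than it sounds. As I understand it, chips like Graafstra’s are ordinary NFC tags under the skin, typically holding a small NDEF message that a stock Android phone parses automatically on a tap. Here is a rough Python sketch of building the kind of single URI record such a tag might carry; the helper name and the URL are hypothetical, and a real tag would also need its capability container and memory layout handled, which I am skipping:

```python
# A minimal sketch, assuming the implant behaves like a standard NFC tag
# holding one NDEF "URI" record (the thing a phone acts on when tapped).
# Short-record layout: header, type length, payload length, type, payload.

URI_PREFIXES = {  # NDEF URI abbreviation codes (partial list)
    "http://www.": 0x01,
    "https://www.": 0x02,
    "http://": 0x03,
    "https://": 0x04,
}

def ndef_uri_record(uri: str) -> bytes:
    """Build a single short NDEF record carrying a URI."""
    code = 0x00  # 0x00 means no abbreviation; the URI is stored verbatim
    for prefix, prefix_code in URI_PREFIXES.items():
        if uri.startswith(prefix):
            code, uri = prefix_code, uri[len(prefix):]
            break
    payload = bytes([code]) + uri.encode("utf-8")
    # 0xD1 = message begin | message end | short record | TNF "well-known"
    return bytes([0xD1, 1, len(payload), ord("U")]) + payload

# e.g., a record pointing a tapped phone at a (made-up) contact page
record = ndef_uri_record("https://example.com/contact")
print(record.hex())
```

Write those few dozen bytes to the tag, and the “tap my hand” demo is just the phone’s NFC stack doing what it already does with any sticker or key fob.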
The popularity of body piercings and tattoos, also once considered invasive procedures, has skyrocketed. Implantable technology, especially as it becomes more functionally relevant, could follow a similar curve.

I saw this coming some years ago when writing The Lightstream Chronicles. The story, as many of you know, takes place in the far future, where implantable technology is mundane and part of everyday life. People regulate their body chemistry, access the Lightstream (the evolved Internet), and make “calls” using fingertips embedded with Luminous Implants. These future implants talk directly to implants in the brain and other systemic body centers to make adjustments or provide information.

An ad for Luminous Implants, and the “tap” numbers for local attractions.

When the stakes are low, mistakes are beneficial. In more weighty pursuits, not so much.


I’m from the old school. I suppose that sentence alone makes me seem like a codger. Let’s call it the eighties. Part of the art of problem solving was to work toward a solution and get it as tight as we possibly could before we committed to implementation. It was called the design process, and today it’s called “design thinking.” So it was heresy to me when I found myself, some years ago now, in a high-tech corporation where this doctrine was ignored. I recall a top-secret new-product meeting in which the owner and chief technology officer said, “We’re going to make some mistakes on this, so let’s hurry up and make them.” He was not speaking about iterative design, which is part and parcel of the design process; he was talking about going to market with the product and letting the users illuminate what we should fix. Of course, the product was safe and met all the legal standards, but it was far from polished. The idea was that mass consumer trial-by-fire would provide us with an exponentially higher data return than if we tested all the possible permutations in a lab at headquarters. He was, apparently, ahead of his time.

In a recent FastCo article on Facebook’s race to be the leader in AI, author Daniel Terdiman cites some of Mark Zuckerberg’s mantras: “‘Move fast and break things,’ or ‘Done is better than perfect.’” We can debate this philosophically or maybe even ethically, but it is clearly today’s standard procedure for new technologies, new science and the incessant race to be first. Here is a quote from that article:

“Artificial intelligence has become a vital part of scaling Facebook. It’s already being used to recognize the faces of your friends in photographs, and curate your newsfeed. DeepText, an engine for reading text that was unveiled last week, can understand “with near-human accuracy” the content in thousands of posts per second, in more than 20 different languages. Soon, the text will be translated into a dozen different languages, automatically. Facebook is working on recognizing your voice and identifying people inside of videos so that you can fast forward to the moment when your friend walks into view.”

The story goes on to say that Facebook, though it is pouring tons of money into AI, is behind the curve, having begun only three or so years ago. Aside from the fact that FB’s accomplishments seem fairly impressive (at least to me), companies like Google and Microsoft are apparently way ahead. In the case of Microsoft, the effort began more than twenty years ago.

Today, the hurry-up is accelerated by open sourcing. Wikipedia explains the benefits of open sourcing as:

“The open-source model, or collaborative development from multiple independent sources, generates an increasingly more diverse scope of design perspective than any one company is capable of developing and sustaining long term.”

The idea behind open sourcing is that the mistakes will happen even faster, along with the advancements. It is becoming the de facto approach to breakthrough technologies. If fast is the primary, maybe even the only, goal, it is a smart strategy. Or is it a touch short-sighted? As we know, not everyone who can play with the code that a company has given them has that company’s best interests in mind. As for the best interests of society, I’m not sure those are even on the list.

To examine our motivations and the ripples that emanate from them is, of course, my mission with design fiction and speculative futures. Whether we like it or not, a by-product of technological development—aside from utopia—is human behavior. There are repercussions from the things we make and the systems that evolve from them. When your mantra is “Move fast and break things,” that’s what you’ll get. But there is certainly no time in the move-fast loop to consider the repercussions of your actions or the unexpected consequences. Consequences will appear all by themselves.

The technologists tell us that when we reach the holy grail of AI (whatever that is), we will be better people and solve the world’s most challenging problems. But in reality, it’s not that simple. With the nuances of AI come potential problems, or mistakes, that could be difficult to fix: new predicaments that humans might not be able to solve and that AI may not be inclined to resolve on our behalf.

In the rush to make mistakes, how grave will they be? And, who is responsible?
