Tag Archives: privacy

Surveillance. Are we defenseless?

Recent, exponentially accelerating advances in AI (in areas such as facial recognition) demonstrate a level of sophistication in surveillance that leaves most of us defenseless. There is a new transparency, and virtually every global citizen is a potential specimen under the microscope. I was blogging about this before I ever set eyes on the CBS drama Person of Interest, but the premise that surveillance could be ubiquitous is very real. The series depicts a master computer that sees everything, but the idea of gathering a networked feed of the world’s cameras and a host of other accessible devices into a central data facility, where AI sorts, analyzes, and learns what kind of behavior is potentially threatening, is well within reach. It isn’t even a stretch to think that something like it already exists.

As with most technologies, however, these systems do not exist in a vacuum. Technologies converge. Take, for example, a recent article in WIRED about how accurate facial recognition is becoming even when the subject is pixelated or blurred. A common tactic to obscure the identity of a video witness or an innocent bystander is to blur or pixelate the face; it is a favored technique of Google Maps. Go to any big-city street view and you’ll see that Google has systematically obscured license plates and faces. Today these methods are no match for state-of-the-art facial recognition systems.

The next flag is the escalating sophistication of hacker technology. One of the most common methods is malware. Through an email or website, malware can infect a computer and wreak havoc. Criminals often use it to hold a victim’s computer for ransom before removing the infection. But not all hackers are criminals, per se. The FBI is pushing for the ability to use malware to digitally wiretap or otherwise infiltrate potentially thousands of computers using only a single warrant. Ironically, FBI Director James Comey recently admitted that he puts tape over the camera on his personal laptop; I wrote about this a few weeks back. What does that say about the security of our laptops and devices?

Is the potential for destructive attacks on our devices so pervasive that the only defense we have is duct tape? The idea that the NSA can listen in on your phone even when it’s off traces back at least to Edward Snowden, and since 2014, experts have confirmed that the technology exists. In fact, some apps, albeit sketchy ones, purport to do exactly that. You won’t find them in the app store (for obvious reasons), but there are websites where you can click the “buy” button. According to the site Stalkertools.com, which doesn’t pass the legit-news-site test (note the use of “awesome” below), one of these apps promises that you can:

• Record all phone calls made and received, hear everything being said, and even listen to the recordings at a later date
• Track the phone by GPS and see its exact location on a map on your computer
• See all sites opened in the phone’s web browser
• Read all messages sent and received on IM apps like Skype, WhatsApp, and the rest
• See all the passwords and logins to sites the person uses, thanks to the keylogger feature
• Open and close apps with the awesome “remote control” feature
• Read all SMS messages and see all photos sent and received by text message
• See all photos taken with the phone’s camera

“How it work” “ The best monitoring for protect family” — Yeah. Sketchy.

I visited one of these sites (above) and, frankly, I would never click a button on a website that can’t form a sentence in English, and I would not recommend that you do either. Earlier this year, the UK Independent published an article in which Kelli Burns, a mass communication professor at the University of South Florida, alleged that Facebook regularly listens to users’ phone conversations to see what people are talking about. Of course, she said she can’t be certain of that.

Nevertheless, it’s out there, and if it has not happened already, eventually some organization or government will find a way to network the access points and begin collecting information across a comprehensive matrix of data points. It would seem that we will have to find new forms of duct tape to protect whatever privacy we have left. I found a site that gives some helpful advice for determining whether someone is tapping your phone.

Good luck.

 


Invalid?

In a scene from the 1997 movie Gattaca, co-star Uma Thurman steals a follicle of hair from love interest Ethan Hawke and takes it to the local DNA sequencing booth (presumably they’re everywhere, like McDonald’s) to find out if Hawke’s DNA is worthy of her affections. She passes the follicle in a paper-thin wrapper through a pass-through window as if she were buying a ticket for a movie. The attendant asks, “You want a full sequence?” Thurman confirms, and then waits anxiously. Meanwhile, others step up to windows to submit their samples. A woman who just kissed her boyfriend has her lips swabbed and assures the attendant that the sample is only a couple of minutes old. In about a minute, Thurman receives a plastic tube with the results rolled up inside. Behind the glass, a voice says, “Nine point three. Quite a catch!”

 

In the futuristic society depicted in the movie, humans are either “valid” or “invalid.” Though discrimination based on your genetic profile is illegal and referred to as “genoism,” it is widely known to be a distinguishing factor in employment, promotion, and finding the right soul-mate.

Enter the story of Illumina, which I discovered by way of a FastCompany article earlier this week. Illumina is a hardware/software company; one might imagine them as the folks who make the fictitious machines behind the DNA booths of a science-fiction future, except that they are already making them now. The company, which few of us have ever heard of, has 5,000 employees and more than $2 billion in annual revenues. Illumina’s products are selling like hotcakes in both the clinical and consumer spheres.

“Startups have already entered the clinical market with applications for everything from “liquid biopsy” tests to monitor late-stage cancers (an estimated $1 billion market by 2020, according to the business consulting firm Research and Markets), to non-invasive pregnancy screenings for genetic disorders like Down Syndrome ($2.4 billion by the end of 2022).”

According to FastCo,

“Illumina has captured more than 70% of the sequencing market with these machines that it sells to academics, pharmaceutical companies, biotech companies, and more.”

You and I can do this right now. Companies like Ancestry.com and 23andMe will work up a profile of your DNA from a bit of saliva sent through the mail. A few weeks after submitting your sample, these companies will send you a plethora of reports: your carrier status (for passing on inherited conditions), ancestry reports that trace your origins, wellness reports, such as your propensity to be fat or thin, and traits like blue eyes or a unibrow. All of this costs about $200. Considering that sequencing DNA on this scale was a pipe dream ten years ago, it’s kind of a big deal. They don’t sequence everything; that requires one of Illumina’s more sophisticated machines and costs about $3,000.

If you put this technology in the context of my last post about exponential technological growth, it is easy to see that the price of machines, the speed of analysis, and the cost of a report are only going to come down, and faster than we think. At this point, everything will be arriving faster than we think. Here, if only to get your attention, I ring the bell. Illumina is investing in companies that bring this technology to your smartphone. With one company, Helix, “A customer might check how quickly they metabolize caffeine via an app developed by a nutrition company. Helix will sequence the customers’ genomic data and store it centrally, but the nutrition company delivers the report back to the user.” The team from Helix predicts “[…]that the number of people who have been sequenced will drastically increase […]that it will be 90% of people within 20 years.” (So, probably ten years is a better guess.)

According to the article, the frontier for genomics is expanding.

“What comes next is writing DNA, and not just reading it. Gene-editing tools like CRISPR-Cas9 are making it cheaper and faster to move genes around, which has untold consequences for changing the environment and treating disease.”

CRISPR can do a lot more than that.

But, as usual, all of these developments focus on the bright side, the side that saves lives, and not on the uncomfortable or unforeseen. There is the potential that your DNA will determine your insurance rates, or even whether you get insurance at all. Toying around in these realms, it is not difficult to imagine that you could “find anyone’s DNA” the way you can find anybody’s address or phone number. Maybe we’ll see this feature incorporated into dating sites. You won’t have to steal a hair follicle from your date; it will already be online, and if they don’t publish it, certainly people will ask, “What do you have to hide?”

And then there’s the possibility that your offspring might inherit an unfavorable trait, like that unibrow or maybe Down Syndrome. So maybe those babies will never be born, or we’ll use CRISPR to make sure the nose is straight, the eyes are green, the skin is tan, and the IQ is way up there. CRISPR gene editing and splicing will be expensive, of course. Some will be able to afford it. The rest? Well, they’ll have to find a way to love their children, flaws and all. So here are my questions: Will this make us more human or less human? Will our DNA become just another way to judge each other on how smart, thin, good-looking, or talented we are? Is it just another way to distinguish between the haves and have-nots?

If the apps are already in design, Uma Thurman may not have long to wait.

 


Transcendent Plan

 

One of my oft-quoted sources for future technology is Ray Kurzweil. A brilliant technologist, inventor, and futurist, Kurzweil seems to see it all very clearly, almost as though he were at the helm personally. Some of Kurzweil’s theses are crystal clear for me, such as an imminent approach toward the Singularity in a series of innocuous, ‘seemingly benign,’ steps. I also agree with his Law of Accelerating Returns1 which posits that technology advances exponentially. In a recent interview with the Silicon Valley Business Journal, he nicely illustrated that idea.

“Exponentials are quite seductive because they start out sub-linear. We sequenced one ten-thousandth of the human genome in 1990 and two ten-thousandths in 1991. Halfway through the genome project, 7 ½ years into it, we had sequenced 1 percent. People said, “This is a failure. Seven years, 1 percent. It’s going to take 700 years, just like we said.” Seven years later it was done, because 1 percent is only seven doublings from 100 percent — and it had been doubling every year. We don’t think in these exponential terms. And that exponential growth has continued since the end of the genome project. These technologies are now thousands of times more powerful than they were 13 years ago, when the genome project was completed.”
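You can check Kurzweil’s arithmetic yourself. A quick Python sketch, using only the figures in the quote (1 percent sequenced, doubling every year):

```python
import math

# Start at 1% of the genome sequenced, doubling every year,
# and count the years until we pass 100%.
fraction = 0.01
years = 0
while fraction < 1.0:
    fraction *= 2
    years += 1

print(years)                       # → 7 doublings from 1% to 100%
print(math.ceil(math.log2(100)))   # → 7, the same answer computed directly
```

Seven doublings, just as he says: the project that looked like a failure at the halfway mark was, in exponential terms, nearly done.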

Kurzweil says the same kinds of leaps are approaching for solar power, resources, disease, and longevity. Our tendency to think linearly instead of exponentially means that we can deceive ourselves into believing that technologies that ‘just aren’t there yet’ are ‘a long way off.’ In reality, they may be right around the corner.

I’m not as solid in my affirmation of Kurzweil (and others) when it comes to some of his other predictions. Without reading too much between the lines, you can see that there is a philosophy helping to drive Kurzweil: namely, he doesn’t want to die. Of course, who does? But his is a quest to deny death on a techno-transcendental level. Christianity holds that eternal life awaits the believer in Jesus Christ; other religions are satisfied that our atoms return to the greater cosmos, or that reincarnation is the next step. It would appear that Kurzweil has no time for faith. His bet is on science and technology. He states,

“I think we’re very much on track to have human-level AI by 2029, which has been my consistent prediction for 20 years, and then to be able to send nanobots into the brain in the 2030s and connect our biological neocortex to synthetic neocortex in the cloud.”

In the article mentioned above, Kurzweil states that his quest to live forever is not just about the 200-plus supplements that he takes daily. He refers to this regimen as “Bridge One.” Bridge One buys us time until technology catches up. Then “Bridge Two,” the “biotechnology revolution,” takes over and radically extends our lives. If all else fails, our minds will be uploaded to the Cloud (which will have evolved into a synthetic neocortex), though it remains to be seen whether the sum total of a mind also equals consciousness in some form.

For many who struggle with the idea of death, religious or not, I wonder if when we dissect it, it is not the fear of physical decrepitude that scares us, but the loss of consciousness; that unique ability of humans to comprehend their world, share language and emotions, to create and contemplate?

I would posit that it is indeed that consciousness that makes us human (along with the sense of injustice we feel at the thought that we might lose it). It would seem that transcendence is in order. In one scenario this transcendence comes from God; in another, ‘we are as Gods.’2

So finally, I wonder whether all of these small, exponentially replicating innovations—culminating in our accessing Cloud data only by thinking, communicating via telepathy, or writing symphonies for eternity—will make us more or less human. If we decide that we are no happier, no more content or fulfilled, is there any going back?

Seeing as it might be right around the corner, we might want to think about these things now rather than later.

 

1. Kurzweil, R. (2001) The Law of Accelerating Returns. KurzweilAI. Available at: http://www.kurzweilai.net/the-law-of-accelerating-returns (Accessed: October 10, 2015).
2. Brand, Stewart. “We Are as Gods.” The Whole Earth Catalog, September 1968, 1–58. Accessed May 4, 2015. http://www.wholeearth.com/issue/1010/article/195/we.are.as.gods.

Did we design this future?

 

I have held this discussion before, but a recent video from FastCompany reinvigorates a provocative aspect of our design future and raises the question: Is this a future we’ve designed? It centers on the smartphone. There are a lot of cool things about our smartphones: convenience, access, connectivity, and entertainment, just to name a few. It’s hard to believe that Steve Jobs introduced the very first iPhone just nine years ago; it went on sale June 29, 2007. It was an amazing device, and it’s no shocker that it took off like wildfire. According to the stats site Statista, “For 2016, the number of smartphone users is forecast to reach 2.08 billion.” Indeed, we can say they are everywhere. In the world of design futures, the smartphone becomes Exhibit A of how an evolutionary design change can spawn a complex system.

Most notably, there are the millions of apps available to users that promise a better way to calculate tips, listen to music, sleep, drive, search, exercise, meditate, or create. Hence, there is a gigantic network of people who make their living supplying user services. These are direct benefits to society and commerce. No doubt, our devices have also saved us countless hours of analog work, enabled us to manage our arrivals and departures, and kept us in contact (however tenuous) with our friends and acquaintances. Smartphones have helped us find people in distress and locate persons with evil intent. But there are also unintended consequences, like legislation to keep us from texting and driving, because these actions have taken lives. There are issues with dependency and links to sleep disorders. Some lament the deterioration of human, one-on-one, face-to-face dialog and the distracted conversations at dinner or lunch. There are behavioral disorders, too. Since 2010 there have been a Smartphone Addiction Rating Scale (SARS) and a Young Internet Addiction Scale (YIAS). Overuse of mobile phones has prompted dozens of studies of adolescents as well as adults, and there are links to increased levels of ADHD and a variety of psychological disorders, including stress and depression.

So, while we rely on our phones for all the cool things they enable us to do, we are, in less than ten years, experiencing a host of unintended consequences. One of these concerns privacy. Whether Apple or another brand, the intricacies of smartphone technology are substantially the same. This video shows how easy it is to hack your phone: to activate its microphone or camera, access your contact list, or track your location. With the right tools, it is frighteningly simple. What struck me most after watching the video was not how much we are at risk of being hacked, eavesdropped on, or perniciously viewed, but the comments from a woman on the street. She said, “I don’t have anything to hide.” She is not the first millennial I have heard say this. And that is what, perhaps, bothers me most: our adaptability in the face of the slow, incremental erosion of what used to be our private space.

We can’t pin the responsibility entirely on the smartphone. We have to include social media, going back to the days of (amusingly) MySpace. Sharing yourself with a group of close friends gradually gave way to the knowledge that a photo or bit of info might also get passed along to complete strangers. That wasn’t, perhaps, your original intention, but, oh well, it’s too late now. Maybe that’s when we decided that we had better get used to sharing our space, our photos (compromising or otherwise), our preferences, our adventures and misadventures with outsiders, even if they were creeps trolling for juicy tidbits. As we chalked up that seemingly benign modification of our behavior to adaptability, the first curtain fell. If someone is going to watch me, and there’s nothing I can do about it, then I may as well get used to it. We adjusted as a defense mechanism. Paranoia was the alternative, and no one wants to think of themselves as paranoid.

A few weeks ago, I posted an image of Mark Zuckerberg’s laptop with tape over the camera and microphone. Maybe he’s more concerned with privacy since his world is full of proprietary information. But, as we become more accustomed to being both constantly connected and potentially tracked or watched, when will the next curtain fall? If design is about planning, directing or focusing, then the absence of design would be ignoring, neglecting or turning away. I return to the first question in this post: Did we design this future? If not, what did we expect?


Privacy or paranoia?

 

If you’ve been a follower of this blog for a while, then you know that I am something of a privacy wonk. I’ve written about it before (about a dozen times), and I’ve even built a research project (that you can enact yourself) around it. A couple of things transpired this week to remind me that privacy is tenuous. (It could also be all the back episodes of Person of Interest that I’ve been watching lately, or David Staley’s post last April about the Future of Privacy.) First, I received an email from a friend this week alerting me to a little presumption that my software is spying on me. I’m old enough to remember when you purchased software as a set of CDs (or even diskettes). You loaded it on your computer, and it seemed to last for years before you needed to upgrade. Let’s face it, most of us use only a small subset of the features in our favorite applications. I remember using Photoshop 5 for quite a while before upgrading, and the same with the rest of what is now called the Adobe Creative Suite. I still use the primary features of Photoshop 5, Illustrator 10, and InDesign (ver. whatever) 90% of the time. In my opinion, the add-ons to those apps have just slowed things down, and of course, the expense has skyrocketed. Gone are the days when you could upgrade your software every couple of years. Now you have to subscribe at a clip of about $300 a year for the Adobe Creative Suite. Apparently, the old business model was not profitable enough. But then came the Adobe Creative Cloud. (Sound of an angelic chorus.) Now it takes my laptop about 8 minutes to load into the cloud and boot up my software. Plus, it stores stuff. I don’t need it to store stuff for me. I have backup drives and archive software to do that.

Back to the privacy discussion. My friend’s email alerted me to this little tidbit hidden inside the Creative Cloud Account Manager.

Learn elsewhere, please.

Under the Security and Privacy tab, there are a couple of options. The first is Desktop App Usage. Here, you can turn this on or off. If it’s on, one of the things it tracks is,

“Adobe feature usage information, such as menu options or buttons selected.”

That means it tracks your clicks and menu selections. Possibly this only occurs when you are using that particular app, but uh-uh, no thanks. Switch that off. Next up is a more pernicious option; it’s called Machine Learning. Hmm. We all know what that is, and I’ve written about that before, too. Just do a search. Here, Adobe says,

“Adobe uses machine learning technologies, such as content analysis and pattern recognition, to improve our products and services. If you prefer that Adobe not analyze your files to improve our products and services, you can opt-out of machine learning at any time.”

Hey, Adobe, if you want to know how to improve your products and services, how about you ask me, or better yet, pay me to consult. A deeper dive into ‘machine learning’ tells me more. Here are a couple of quotes:

“Adobe uses machine learning technologies… For example, features such as Content-Aware Fill in Photoshop and facial recognition in Lightroom could be refined using machine learning.”

“For example, we may use pattern recognition on your photographs to identify all images of dogs and auto-tag them for you. If you select one of those photographs and indicate that it does not include a dog, we use that information to get better at identifying images of dogs.”

Facial recognition? Nope. Help me find dog pictures? Thanks, but I think I can find them myself.

I know how this works: the more data the machine can feed on, the better it becomes at learning. I would just rather Adobe seek out its data itself. I’m sure they’ll be okay. (After all, there are a few million people who never look at their account settings.) Also, keep in mind, it’s their machine, not mine.

The last item on my privacy rant just validated my paranoia. I ran across this picture of Facebook billionaire Mark Zuckerberg hamming it up for his FB page.


 

In the background is his personal laptop. Upon closer inspection, we see that Zuck has a piece of duct tape covering his laptop’s camera and the dual microphones on the side. He knows.


 

Go get a piece of tape.


News from a speculative future.

I thought I’d take a creative leap today with a bit of future fiction. Here is a short, speculative news article based on a plausible trajectory of science, tech, and social policy.

 
18 November 2018
The Senate passed a bill today giving the green light to Omzin Corp. and the nation’s largest insurer, the federal government, to require digital monitoring of health care recipients. Omzin Corp. is the manufacturer of a microscopic additive slated to become part of all prescription drugs. When ingested, the additive reacts with stomach acids to transmit a signal to the insurer verifying that the patient has taken their medication; the patient then naturally eliminates the tiny additive. A spokesperson for the federal government’s Universal Enrollment Plan (UEP) said that the Omzin additive was approved by the FDA as completely safe and would be a significant benefit to patients who have difficulty keeping track of complicated lists of medications and prescription requirements. A new ACAapp allows health care recipients to track on their mobile phones which drugs they have taken and when. The information is also transmitted to the health care provider. Opponents of the bill state that the legislation is a violation of privacy and tantamount to forced medication of the population. “Some people can avoid drugs through improved diet and exercise, but this bill pre-empts patients who would like to pursue that option,” according to Hub Garner, a representative of the citizens group MedChoice. “Patients who opt in to the program will pay less for their insurance than those who defy the order,” said the Senate minority leader.

Some definite possibilities here for future artifacts, i.e., design fictions. Remember, design fictions are provocations of possible futures for the purposes of discussion and debate. Plausible or outrageous? What do you think?


Who are you?

 

A few articles circulating recently in the datasphere have centered on the pervasive tracking of our online activity, from the benign to the borderline unethical. One, from FastCompany, highlighted some practices that web marketers use to track the folks who visit their sites. The article by Steve Melendez lists a handful of these. They range from basics like first-party cookies and A/B testing to more invasive methods such as psychological testing (thanks, Facebook), third-party tracking cookies, and differential pricing. The cookie is, of course, the most basic. I use them on this site and on The Lightstream Chronicles to see if anyone is visiting, where they’re coming from, and a bunch of other minutiae. Using Google Analytics, I can, for example, see what city or country my readers are coming from, their age and sex, whether they are regulars or new visitors, whether they visit via mobile or desktop, Apple or Windows, and, if they came to my site by way of referral, where they originated. Then I know if my ads for the graphic novel are working. I find this harmless. I have no interest in knowing your sexual preference or where you shop, and above all, I’m not selling anything (at least not yet). I’m just looking for more eyeballs. More viewers mean that I’m not wasting my time and that somebody is paying attention. It’s interesting that a couple of months ago the EU internet authorities sent me a snippet of code that I was “required” to post on the LSC site alerting my visitors that I use cookies. Aside from the U.S., my highest viewership is from the UK. It’s interesting that they are aware that their citizens are visiting. Hmm.
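The bookkeeping behind a first-party cookie is simple enough to sketch. Here is a simplified, hypothetical Python version of the idea (an anonymous ID assigned on first visit, recognized on return); it is my own toy illustration, not Google Analytics’ actual mechanism:

```python
import uuid

# Toy first-party cookie scheme: the site assigns each new browser a
# random client ID; seeing the same ID again marks a returning visitor.
def handle_request(cookies: dict) -> tuple[dict, bool]:
    """Return (cookies to send back, whether this is a returning visitor)."""
    if "client_id" in cookies:
        return cookies, True           # ID recognized: returning visitor
    new_id = str(uuid.uuid4())         # first visit: mint an anonymous ID
    return {"client_id": new_id}, False

cookies, returning = handle_request({})        # first visit
cookies, returning2 = handle_request(cookies)  # same browser comes back
print(returning, returning2)  # → False True
```

That anonymous ID, combined with browser and referrer headers, is all an analytics dashboard needs to distinguish new visitors from regulars.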

I have software that allows me to A/B test, which means I could change something on the graphic novel homepage and see if it gets more reaction than a previous version. But I barely have the time to publish a new blog or episode, much less create different versions and test them. A one-man show has its limitations.
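The mechanics of basic A/B testing are also easy to sketch. A hypothetical Python version (the bucketing scheme and names are my own, not any particular tool’s): each visitor is deterministically hashed into one of two groups, so the same person always sees the same variant while traffic splits roughly in half.

```python
import hashlib

# Minimal A/B assignment: hash a stable visitor ID into one of two
# buckets. Real A/B tools add experiment IDs, weighting, and reporting.
def assign_variant(visitor_id: str) -> str:
    digest = hashlib.sha256(visitor_id.encode()).digest()
    return "A" if digest[0] % 2 == 0 else "B"

# Deterministic: the same visitor always gets the same homepage variant.
print(assign_variant("visitor-42") == assign_variant("visitor-42"))  # → True
```

Whichever variant earns more clicks over a few thousand visits wins.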

The rest of the tracking methods highlighted in the above article require a lot of devious programming. Since I have my hands full with the basics, this stuff is way above my pay grade. Even if it wasn’t, I think it all goes a bit too far.

Personally, I deplore most internet advertising. I know that makes me a hypocrite since I use it from time to time to drive traffic to my site. I also realize that it is probably a necessary evil. Sites need revenue, or they can’t pump out the content on which we have come to rely. Unfortunately, the landscape often turns into a melee. Tumblr is a good example. Initially, they integrated their ads into the format of their posts. So as you are scrolling through the content, you see an ad within their signature brand presentation. Cool. Then they started doing separate in-line ads. These looked entirely different from their brand content, and the ads were those annoying things like “Grandma discovers the fountain of youth.” Not cool. Then they introduced this floating ad box that tracks you all the way down the page as you scroll through content. You get no break from it. It’s distracting, and based on the content, it can be horrifying, like Hillary Clinton staring at you for seven minutes. How much can a person take?

And it won't go away.

Since my blog is future-oriented, the question arises: what does this have to do with the future? Plenty. These marketing techniques will only become more sophisticated. Many of them already incorporate artificial intelligence to map your activity and predict your every want and need, maybe even the ones you didn’t think anyone knew you had. Is this an invasion of privacy? If it is, it’s going to get more invasive. And as I’m fond of saying, we need to pay attention to these technologies and practices now, or we won’t have a say in where they end up. As a society, we have to do better than just adapt to whatever comes along. We need to help point these technologies in the right direction from the beginning.

 


“At a certain point…”

 

A few weeks ago, Brian Barrett of WIRED magazine filed a report headlined “NEW SURVEILLANCE SYSTEM MAY LET COPS USE ALL OF THE CAMERAS.” According to the article,

“Computer scientists have created a way of letting law enforcement tap any camera that isn’t password protected so they can determine where to send help or how to respond to a crime.”

Barrett suggests that America has 30 million surveillance cameras out there. The sentence above, for me, is loaded. First of all, as with most technological advancements, it is couched in the most benevolent terms: these scientists are going to help law enforcement send help or respond to crimes. This is also the argument the FBI used to try to force Apple to provide a backdoor to the iPhone. It was for the common good.

If you are like me, you immediately see a giant red flag waving to warn us of the gaping possibility for abuse. However, we can take heart to some extent. The sentence mentioned above also limits law enforcement access to, “any camera that isn’t password protected.” Now the question is: What percentage of the 30 million cameras are password protected? Does it include, for example, more than kennel cams or random weather cams? Does it include the local ATM, traffic, and other security cameras? The system is called CAM2.

“…CAM2 reveals the location and orientation of public network cameras, like the one outside your apartment.”

It can aggregate the cameras in a given area and allow law enforcement to access them. Hmm.

Last week I teased that some of the developments I had reserved for 25, 50, or even more years into the future, through my graphic novel The Lightstream Chronicles, are showing signs of life in the next two or three years. A universal “cam” system like this is one of them; the idea of ubiquitous surveillance, or the mesh, only gets stronger with more cameras. Hence the idea behind my ubiquitous surveillance blog. If there is a system that can identify all of the “public network” cams, how far are we from identifying all of the “private network” cams? How long before these systems are hacked? Or, in the name of national security, how might these systems be appropriated? You may think this is the stuff of sci-fi, but it is also the stuff of design-fi, and design-fi, as I explained last week, is intended to make us think about how these things play out.

In closing, WIRED’s Barrett raised the issue of the potential for abusing systems such as CAM2 with Gautam Hans, policy counsel at the Center for Democracy & Technology. And, of course, we got the standard response:

“It’s not the best use of our time to rail against its existence. At a certain point, we need to figure out how to use it effectively, or at least with extensive oversight.”

Unfortunately, history has shown that that “certain point” usually arrives after something goes egregiously wrong. Then someone asks, “How could something like this happen?”


Adapt or plan? Where do we go from here?

I just returned from Nottingham, UK, where I presented a paper for Cumulus 16, In This Place. The paper was entitled Design Fiction: A Countermeasure For Technology Surprise. An Undergraduate Proposal. My argument hinged on the idea that students need to start thinking about our technosocial future. Design fiction is my area of research, but if you were so inclined, you could probably choose a variant methodology to provoke discussion and debate about the future of design, what designers do, and their responsibility as creators of culture. In January, I had the opportunity to take an initial pass at such a class. The experiment was a different twist on a collaborative studio, where students from the three traditional design specialties worked together on a defined problem. The emphasis was on collaboration rather than the outcome. Some students embraced this while others pushed back. The push-back came from students fixated on building a portfolio of “things” or “spaces” or “visual communications” so that they could impress prospective employers. I can’t blame them for that. As educators, we have hammered home the old paradigm of getting a job at Apple or Google, or (fill in the blank), as the ultimate goal of undergraduate education. But the paradigm is changing, and the model of the designer as the maker of “stuff” is wearing thin.

A great little polemic from Cameron Tonkinwise recently appeared that helped to articulate this issue. He points the finger at interaction design scholars and asks why they are not writing about or critiquing “the current developments in the world of tech.” He wonders whether anyone is paying attention. As designers and computer scientists, we are feeding a pipeline of minimally viable apps, with seemingly no regard for the consequences for social systems or (one of my personal favorites) the behaviors we engender through our designs.

I tell my students that it is important to think about the future. The usual response is, “We do!” When I drill deeper, I find that their thoughts revolve around getting a job, making a living, finding a home, and a partner. They rarely include global warming, economic upheaval, feeding the world, or natural disasters. Why? They view these issues as beyond their control. We do not choose these things; they happen to us. Nevertheless, these are precisely the predicaments that need designers. I would argue that these concerns are far more important than another app to count my calories or pick the location of my next sandwich.

There is a host of others like Tonkinwise who see that design needs to refocus, but often it seems there is a greater number who blindly plod forward, unaware of the futures they are creating. I’m not talking about refocusing designers to be better at business or programming languages; I’m talking about making designers more responsible for what they design. And like Tonkinwise, I agree that it needs to start with design educators.


The nature of the unpredictable.

Following up on last week’s post, I confessed some concern about technologies that progress too quickly and combine unpredictably.

Stewart Brand introduced the 1968 Whole Earth Catalog with, “We are as gods and might as well get good at it.”1 Thirty-two years later, he wrote that new technologies such as computers, biotechnology, and nanotechnology are self-accelerating, and that they differ from older, “stable, predictable and reliable” technologies such as television and the automobile. Brand states that new technologies “…create conditions that are unstable, unpredictable and unreliable…. We can understand natural biology, subtle as it is, because it holds still. But how will we ever be able to understand quantum computing or nanotechnology if its subtlety keeps accelerating away from us?”2 If we combine Brand’s concern with Kurzweil’s Law of Accelerating Returns, and the evidence supports that these technologies are indeed advancing exponentially, will the result be, as Brand suggests, unpredictable?
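The arithmetic behind that concern is simple to state. A “stable” technology improves by roughly a fixed amount each year; a self-accelerating one doubles on a fixed schedule, so the gap between what we have and what we can anticipate widens every cycle. A toy calculation, using an assumed two-year doubling period purely for illustration:

```python
# Illustrative arithmetic only: exponential (self-accelerating) growth
# versus linear (stable, predictable) growth, relative to today's
# capability = 1. The two-year doubling period is an assumption.

def exponential(years, doubling_period=2):
    """Capability that doubles every `doubling_period` years."""
    return 2 ** (years / doubling_period)

def linear(years, rate=1.0):
    """Capability that grows by a fixed amount per year."""
    return 1 + rate * years

for y in (10, 20, 30):
    print(f"year {y}: linear = {linear(y):g}, exponential = {exponential(y):g}")
```

Under these toy assumptions, thirty years of linear growth yields a 31-fold improvement, while thirty years of doubling yields a 32,768-fold improvement. It is that divergence, not any single invention, that makes the future hard to hold still long enough to understand.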

Last week I discussed an article from WIRED Magazine on the VR/MR company Magic Leap. The author writes,

“Even if you’ve never tried virtual reality, you probably possess a vivid expectation of what it will be like. It’s the Matrix, a reality of such convincing verisimilitude that you can’t tell if it’s fake. It will be the Metaverse in Neal Stephenson’s rollicking 1992 novel, Snow Crash, an urban reality so enticing that some people never leave it.”

And it will be. It is, as I said last week, entirely logical to expect it.

We race toward these technologies with visions of mind-blowing experiences or life-changing cures, and usually, we imagine only the upside. We all too often forget the human factor. Let’s look at some other inevitable technological developments.
• Affordable DNA testing will tell you your risk of inheriting a disease or debilitating condition.
• You can ingest a pill that tells your doctor (or you, in case you forgot) that you took your medicine.
• Soon we will have life-like robotic companions.
• Virtual reality is affordable, amazingly real and completely user-friendly.

These are simple scenarios; the realities will likely have aspects that make them even more impressive, more accessible, and more profoundly useful. And like most technological developments, they will also become mundane and expected. But along with them comes the possibility of a whole host of unintended consequences. Here are a few.
• The government’s universal healthcare requires that citizens have a DNA test before they qualify.
• It monitors whether you’ve taken your medication and issues a fine if you don’t, even if you don’t want the medicine.
• A robotic, life-like companion can provide support and encouragement, but it could also be your outlet for violent behavior or abuse.
• The virtual world is so captivating and pleasurable that you don’t want to leave, to the point where it becomes addictive.

It seems as though whenever we involve human nature, we set ourselves up for unintended consequences. Perhaps it is not the nature of technology to be unpredictable; it is us.

1. Brand, Stewart. “WE ARE AS GODS.” The Whole Earth Catalog, September 1968, 1-58. Accessed May 04, 2015. http://www.wholeearth.com/issue/1010/article/195/we.are.as.gods.
2. Brand, Stewart. “Is Technology Moving Too Fast? Self-Accelerating Technologies-Computers That Make Faster Computers, For Example-May Have a Destabilizing Effect on Society.” TIME, 2000.