Tag Archives: mind-reading

Did we design this future?

 

I have held this discussion before, but a recent video from FastCompany reinvigorates a provocative aspect of our design future and raises the question: Is this a future we’ve designed? It centers on the smartphone. There are a lot of cool things about our smartphones, like convenience, access, connectivity, and entertainment, just to name a few. It’s hard to believe that Steve Jobs introduced the very first iPhone just nine years ago, and that it went on sale on June 29, 2007. It was an amazing device, and it’s no shocker that it took off like wildfire. According to the stats site Statista, “For 2016, the number of smartphone users is forecast to reach 2.08 billion.” Indeed, we can say they are everywhere. In the world of design futures, the smartphone becomes Exhibit A of how an evolutionary design change can spawn a complex system.

Most notably, there are the millions of apps available to users that promise a better way to calculate tips, listen to music, sleep, drive, search, exercise, meditate, or create. Hence, there is a gigantic network of people who make their living supplying user services. These are direct benefits to society and commerce. No doubt, our devices have also saved us countless hours of analog work, enabled us to manage our arrivals and departures, and kept us in contact (however tenuous) with our friends and acquaintances. Smartphones have helped us find people in distress and helped us locate persons with evil intent. But there are also unintended consequences, like legislation to keep us from texting and driving, because these actions have taken lives. There are issues with dependency and links to sleep disorders. Some lament the deterioration of human, one-on-one, face-to-face dialog and the distracted conversations at dinner or lunch. There are behavioral disorders, too. Since 2010 there have been a Smartphone Addiction Rating Scale (SARS) and the Young Internet Addiction Scale (YIAS). Overuse of mobile phones has prompted dozens of studies of adolescents as well as adults, and there are links to increased levels of ADHD and a variety of psychological disorders, including stress and depression.

So, while we rely on our phones for all the cool things they enable us to do, we are—in less than ten years—experiencing a host of unintended consequences. One of these concerns privacy. Whether Apple or another brand, the intricacies of smartphone technology are substantially the same. This video shows why your phone is so easy to hack: with the right tools, it is frighteningly simple to activate your phone’s microphone or camera, access your contact list, or track your location. What struck me most after watching the video was not how much we are at risk of being hacked, eavesdropped on, or perniciously viewed, but the comment from a woman on the street. She said, “I don’t have anything to hide.” She is not the first millennial I have heard say this. And that is what, perhaps, bothers me most: our adaptability in the face of the slow, incremental erosion of what used to be our private space.

We can’t pin responsibility entirely on the smartphone. We have to include the idea of social media, going back to the days of (amusingly) MySpace. Sharing yourself with a group of close friends gradually gave way to the knowledge that the photo or info might also get passed along to complete strangers. It wasn’t, perhaps, your original intention, but, oh well, it’s too late now. Maybe that’s when we decided that we had better get used to sharing our space, our photos (compromising or otherwise), our preferences, our adventures and misadventures with outsiders, even if they were creeps trolling for juicy tidbits. As we chalked up that seemingly benign modification of our behavior to adaptability, the first curtain fell. If someone is going to watch me, and there’s nothing I can do about it, then I may as well get used to it. We adjusted as a defense mechanism. Paranoia was the alternative, and no one wants to think of themselves as paranoid.

A few weeks ago, I posted an image of Mark Zuckerberg’s laptop with tape over the camera and microphone. Maybe he’s more concerned with privacy since his world is full of proprietary information. But, as we become more accustomed to being both constantly connected and potentially tracked or watched, when will the next curtain fall? If design is about planning, directing or focusing, then the absence of design would be ignoring, neglecting or turning away. I return to the first question in this post: Did we design this future? If not, what did we expect?


Privacy or paranoia?

 

If you’ve been a follower of this blog for a while, then you know that I am something of a privacy wonk. I’ve written about it before (about a dozen times), and I’ve even built a research project (that you can enact yourself) around it. A couple of things transpired this week to remind me that privacy is tenuous. (It could also be all the back episodes of Person of Interest that I’ve been watching lately, or David Staley’s post last April about the Future of Privacy.) First, I received an email from a friend this week alerting me to a little presumption that my software is spying on me. I’m old enough to remember when you purchased software as a set of CDs (or even diskettes). You loaded it onto your computer, and it seemed to last for years before you needed to upgrade. Let’s face it, most of us use only a small subset of the features in our favorite applications. I remember using Photoshop 5 for quite a while before upgrading, and the same with the rest of what is now called the Adobe Creative Suite. I still use the primary features of Photoshop 5, Illustrator 10, and InDesign (version whatever) 90% of the time. In my opinion, the add-ons to those apps have just slowed things down, and of course, the expense has skyrocketed. Gone are the days when you could upgrade your software every couple of years. Now you have to subscribe at a clip of about $300 a year for the Adobe Creative Suite. Apparently, the old business model was not profitable enough. But then came the Adobe Creative Cloud. (Sound of an angelic chorus.) Now it takes my laptop about 8 minutes to load into the cloud and boot up my software. Plus, it stores stuff. I don’t need it to store stuff for me. I have backup drives and archive software to do that.

Back to the privacy discussion. My friend’s email alerted me to this little tidbit hidden inside the Creative Cloud Account Manager.

Learn elsewhere, please.

Under the Security and Privacy tab, there are a couple of options. The first is Desktop App Usage, which you can turn on or off. If it’s on, one of the things it tracks is,

“Adobe feature usage information, such as menu options or buttons selected.”

That means it tracks your clicks and menu selections. Possibly this only occurs when you are using that particular app, but uh-uh, no thanks. Switch that off. Next up is a more pernicious option; it’s called Machine Learning. Hmm. We all know what that is, and I’ve written about that before, too. Just do a search. Here, Adobe says,

“Adobe uses machine learning technologies, such as content analysis and pattern recognition, to improve our products and services. If you prefer that Adobe not analyze your files to improve our products and services, you can opt-out of machine learning at any time.”

Hey, Adobe, if you want to know how to improve your products and services, how about you ask me, or better yet, pay me to consult. A deeper dive into ‘machine learning’ tells me more. Here are a couple of quotes:

“Adobe uses machine learning technologies… For example, features such as Content-Aware Fill in Photoshop and facial recognition in Lightroom could be refined using machine learning.”

“For example, we may use pattern recognition on your photographs to identify all images of dogs and auto-tag them for you. If you select one of those photographs and indicate that it does not include a dog, we use that information to get better at identifying images of dogs.”

Facial recognition? Nope. Help me find dog pictures? Thanks, but I think I can find them myself.
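The tag-and-correct loop Adobe describes can be sketched as a toy nearest-centroid classifier. To be clear, this is a hypothetical illustration, not Adobe’s pipeline: the 2-D “feature” points and labels below are invented, where a real system would operate on learned image embeddings.

```python
# Toy nearest-centroid "dog tagger" illustrating the auto-tag /
# user-correction loop. All points and labels are hypothetical.

def centroid(points):
    """Mean of a list of 2-D feature points."""
    xs, ys = zip(*points)
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def dist2(a, b):
    """Squared Euclidean distance between two 2-D points."""
    return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

class DogTagger:
    def __init__(self, dog_examples, other_examples):
        self.dogs = list(dog_examples)      # features of known dog photos
        self.others = list(other_examples)  # features of known non-dog photos

    def predict(self, photo):
        """Auto-tag: 'dog' if the photo sits closer to the dog centroid."""
        d = dist2(photo, centroid(self.dogs))
        o = dist2(photo, centroid(self.others))
        return "dog" if d < o else "not-dog"

    def correct(self, photo, true_label):
        """A user fixes a wrong tag; the example joins the right pile."""
        (self.dogs if true_label == "dog" else self.others).append(photo)

tagger = DogTagger(dog_examples=[(1.0, 1.0)], other_examples=[(9.0, 9.0)])
photo = (6.0, 6.0)
print(tagger.predict(photo))   # -> not-dog (closer to the non-dog centroid)
tagger.correct(photo, "dog")   # the user insists this is a dog
print(tagger.predict(photo))   # -> dog (the dog centroid moved toward it)
```

Every correction shifts a centroid, which is why each opted-in user makes the tagger a little better, and why the data is worth more to Adobe than to you.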

I know how this works: the more data the machine can feed on, the better it becomes at learning. I would just rather Adobe get their data by searching it out themselves. I’m sure they’ll be okay. (After all, there are a few million people who never look at their account settings.) Also, keep in mind, it’s their machine, not mine.

The last item on my privacy rant just validated my paranoia. I ran across this picture of Facebook billionaire Mark Zuckerberg hamming it up for his FB page.


 

In the background is his personal laptop. Upon closer inspection, we see that Zuck has a piece of tape covering his laptop cam and his dual microphones on the side. He knows.


 

Go get a piece of tape.


Vision comes from looking to the future.

 

I was away last week, but I left off with a post about proving that some of the things we currently think of as sci-fi or fantasy are not only plausible, but may even be on their way to reality. In that post, I traced the logical succession toward implantable technology, or biohacking.

The latest is a robot toy from a company called Anki. Once again, WIRED provided the background on this product, and it is an excellent example of the technological convergence I have discussed many times before. Essentially, “technovergence” is when multiple cutting-edge technologies come together in unexpected and sometimes unpredictable ways. In this case, the toy brings together AI, machine learning, computer vision, robotics, deep character development, facial recognition, and a few more. According to the video below,

“There have been very few applications where a robot has felt like a character that connects with humans around it. For that, you really need artificial intelligence and robotics. That’s been the missing key.”

According to David Pierce, with WIRED,

“Cozmo is a cheeky gamer; the little scamp tried to fake me into tapping my block when they didn’t match, and stormed off when I won. And it’s those little tics, the banging of its lift-like arm and spinning in circles and squawking in its Wall-E voice, that really makes you want to refer to the little guy as ‘he’ rather than ‘it.’”

What strikes me as especially interesting is that my students designed their own version of this last semester. (I’m pretty sure they knew nothing about this particular toy.) The semester was a rigorous design fiction class that took a hard look at what was possible in the next five to ten years. For some, the class was something like hell, but the robot concept my students put together is amazingly like Cozmo.

I think this is proof of more than what is possible; it’s evidence that vision comes from looking to the future.


The end of code.

 

This week WIRED Magazine released their June issue announcing the end of code. That would mean that the ability to write code, so cherished in the job world right now, is on the way out. They attribute this tectonic shift to artificial intelligence, machine learning, neural networks, and the like. In the future (which is taking place now) we won’t have to write code to tell computers what to do; we will just have to teach them. I have been over this before in a number of previous writings. An example: Facebook uses a form of machine learning by collecting data from the millions of pictures posted to the social network. When someone uploads a group photo and identifies the people in the shot, Facebook’s AI remembers it by logging the prime coordinates of a human face and attributing them to that name (aka facial recognition). If the same coordinates show up again in another post, Facebook identifies the face as you. People load the data (on a massive scale), and the machine learns. By naming the person or persons in the photo, you have taught the machine.
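As a toy illustration of that teach-then-identify loop, here is a minimal nearest-neighbor sketch in Python. Everything in it (the three-number “landmark” vectors, the names, the distance threshold) is invented for illustration; real facial recognition uses high-dimensional learned features, not hand-fed coordinates.

```python
# Minimal "name the face by its coordinates" sketch. The landmark
# vectors, names, and threshold are invented for illustration.
import math

known_faces = {}  # name -> list of landmark vectors taught by users

def teach(name, landmarks):
    """A user tags a face in a photo; the machine logs the coordinates."""
    known_faces.setdefault(name, []).append(landmarks)

def identify(landmarks, threshold=1.0):
    """Return the closest known name, or None if no face is close enough."""
    best_name, best_dist = None, threshold
    for name, examples in known_faces.items():
        for example in examples:
            d = math.dist(landmarks, example)  # Euclidean distance
            if d < best_dist:
                best_name, best_dist = name, d
    return best_name

teach("Alice", (0.30, 0.42, 0.55))   # tagged in one group photo
teach("Bob",   (0.70, 0.10, 0.25))
print(identify((0.31, 0.40, 0.56)))  # -> Alice (nearly identical coordinates)
print(identify((5.0, 5.0, 5.0)))     # -> None (no known face is close)
```

The point is the division of labor the post describes: people supply the labels at massive scale, and the machine only has to store and compare.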

The WIRED article makes some interesting connections about the evolution of our thinking concerning the mind, about learning, and how we have taken a circular route in our reasoning. In essence, the mind was once considered a black box; there was no way to figure it out, but you could condition responses, à la Pavlov’s dog. That logic changed with cognitive science and the idea that the brain is more like a computer. The computing analogy caught on, and researchers began to see the whole idea of thought, memory, and thinking as stuff you could code, or hack, just like a computer. Indeed, it is this reasoning that has led to the notion that DNA is, in fact, codable, hence gene splicing through CRISPR. If it’s all just code, we can make anything. That was the thinking. Now there is machine learning and neural networks. You still code, but only to set up the structure by which the “thing” learns; after that, it’s on its own. The result is fractal and not always predictable. You can’t go back in and hack the way it is learning because it has started to generate a private math that we can’t make sense of. In other words, it is a black box. We have, in effect, stymied ourselves.

There is an upside. To train a computer you used to have to learn how to code. Now you just teach it by showing or giving it repetitive information, something anyone can do, though, at this point, some do it better than others.

Always the troubleshooter, I wonder what happens when we—mystified at a “conclusion” or decision arrived at by the machine—can’t figure out how to make it stop arriving at that conclusion. You can do the math.

Do we just turn it off?


Adapt or plan? Where do we go from here?

I just returned from Nottingham, UK, where I presented a paper for Cumulus 16, In This Place. The paper was entitled Design Fiction: A Countermeasure For Technology Surprise. An Undergraduate Proposal. My argument hinged on the idea that students need to start thinking about our technosocial future. Design fiction is my area of research, but if you were inclined to do so, you could probably choose a variant methodology to provoke discussion and debate about the future of design, what designers do, and their responsibility as creators of culture. In January, I had the opportunity to take an initial pass at such a class. The experiment was a different twist on a collaborative studio where students from the three traditional design specialties worked together on a defined problem. The emphasis was on collaboration rather than the outcome. Some students embraced this while others pushed back. The push-back came from students fixated on building a portfolio of “things” or “spaces” or “visual communications” so that they could impress prospective employers. I can’t blame them for that. As educators, we have hammered the old paradigm of getting a job at Apple or Google, or (fill in the blank) as the ultimate goal of undergraduate education. But the paradigm is changing, and the model of a designer as the maker of “stuff” is wearing thin.

A great little polemic from Cameron Tonkinwise appeared recently that helped to articulate this issue. He points the finger at interaction design scholars and asks why they are not writing about or critiquing “the current developments in the world of tech.” He wonders whether anyone is paying attention. As designers and computer scientists, we are feeding a pipeline of more apps with minimal viability, with seemingly no regard for the consequences for social systems and (one of my personal favorites) the behaviors we engender through our designs.

I tell my students that it is important to think about the future. The usual response is, “We do!” When I drill deeper, I find that their thoughts revolve around getting a job, making a living, finding a home, and a partner. They rarely include global warming, economic upheavals, feeding the world, or natural disasters. Why? They view these issues as beyond their control. We do not choose these things; they happen to us. Nevertheless, these are precisely the predicaments that need designers. I would argue these concerns are far more important than another app to count my calories or select the location for my next sandwich.

There is a host of others like Tonkinwise who see that design needs to refocus, but often it seems there is a greater number who blindly plod forward, unaware of the futures they are creating. I’m not talking about refocusing designers to be better at business or programming languages; I’m talking about making designers more responsible for what they design. And like Tonkinwise, I agree that it needs to start with design educators.


The nature of the unpredictable.

 

Following up on last week’s post, I confessed some concern about technologies that progress too quickly and combine unpredictably.

Stewart Brand introduced the 1968 Whole Earth Catalog with, “We are as gods and might as well get good at it.”1 Thirty-two years later, he wrote that new technologies such as computers, biotechnology and nanotechnology are self-accelerating, that they differ from older, “stable, predictable and reliable,” technologies such as television and the automobile. Brand states that new technologies “…create conditions that are unstable, unpredictable and unreliable…. We can understand natural biology, subtle as it is, because it holds still. But how will we ever be able to understand quantum computing or nanotechnology if its subtlety keeps accelerating away from us?”2 If we combine Brand’s concern with Kurzweil’s Law of Accelerating Returns, and if these technologies keep accelerating exponentially as the current evidence suggests, will the future be, as Brand predicts, unpredictable?

Last week I discussed an article from WIRED Magazine on the VR/MR company Magic Leap. The author writes,

“Even if you’ve never tried virtual reality, you probably possess a vivid expectation of what it will be like. It’s the Matrix, a reality of such convincing verisimilitude that you can’t tell if it’s fake. It will be the Metaverse in Neal Stephenson’s rollicking 1992 novel, Snow Crash, an urban reality so enticing that some people never leave it.”

And it will be. It is, as I said last week, entirely logical to expect it.

We race toward these technologies with visions of mind-blowing experiences or life-changing cures, and usually, we imagine only the upside. We all too often forget the human factor. Let’s look at some other inevitable technological developments.
• Affordable DNA testing will tell you your risk of inheriting a disease or debilitating condition.
• You can ingest a pill that tells your doctor (or you, in case you forgot) that you took your medicine.
• Soon we will have life-like robotic companions.
• Virtual reality is affordable, amazingly real and completely user-friendly.

These are simple scenarios; the realities will likely have aspects that make them even more impressive, more accessible, and more profoundly useful. And like most technological developments, they will also become mundane and expected. But along with them comes the possibility of a whole host of unintended consequences. Here are a few.
• The government’s universal healthcare requires that citizens have a DNA test before they qualify.
• It monitors whether you’ve taken your medication and issues a fine if you don’t, even if you don’t want your medicine.
• A robotic, life-like companion can provide support and encouragement, but it could also become an outlet for violent behavior or abuse.
• The virtual world is so captivating and pleasurable that you don’t want to leave, or it becomes outright addicting.

It seems as though whenever we involve human nature, we set ourselves up for unintended consequences. Perhaps it is not the nature of technology to be unpredictable; it is us.

1. Brand, Stewart. “We Are as Gods.” The Whole Earth Catalog, September 1968, 1-58. Accessed May 4, 2015. http://www.wholeearth.com/issue/1010/article/195/we.are.as.gods.
2. Brand, Stewart. “Is Technology Moving Too Fast? Self-Accelerating Technologies - Computers That Make Faster Computers, for Example - May Have a Destabilizing Effect on Society.” TIME, 2000.

Nine years from now.

 

Today I’m on my soapbox, again, as an advocate of design thinking, of which design fiction is part of the toolbox.

In 2014, the Pew Research Center published a report on Digital Life in 2025. Therein, “The report covers experts’ views about advances in artificial intelligence (AI) and robotics, and their impact on jobs and employment.” Their nutshell conclusion was that:

“Experts envision automation and intelligent digital agents permeating vast areas of our work and personal lives by 2025 [nine years from now], but they are divided on whether these advances will displace more jobs than they create.”

On the upside, some of the “experts” believe that we will, as the brilliant humans that we are, invent new kinds of uniquely human work that we can’t replace with AI—a return to an artisanal society with more time for leisure and our loved ones. Some think we will be freed from the tedium of work and find ways to grow in some other “socially beneficial” pastime. Perhaps we will just be chillin’ with our robo-buddy.

On the downside, there are those who believe that not only will blue-collar, robotic jobs vanish, but also white-collar, thinking jobs, and that will leave a lot of people out of work, since there are only so many jobs as clerks at McDonald’s or greeters at Wal-Mart. They think that some of this is the fault of education for not better preparing us for the future.

A few weeks ago I blogged about people who are thinking about addressing these concerns with something called Universal Basic Income (UBI), a $12,000-a-year gift to everyone in the world, since everyone will be out of work. I’m guessing (though it wasn’t explicitly stated) that this money would come from all the corporations raking in the bucks by employing the AIs, the robots, and the digital agents, but who no longer have anyone on the payroll. The advocates of this idea did not address whether the executives at these companies, presumably still employed, would make more than $12,000, nor whether the advocates themselves were on the 12K list. I guess not. They also did not specify who would buy the services these corporations offer if we are all out of work. But I don’t want to repeat that rant here.

I’m not as optimistic about the unique capabilities of humankind to find new, uniquely human jobs in some new, utopian artisanal society. Music, art, and blogs are already being written by AI, by the way. I do agree, however, that we are not educating our future decision-makers to adjust adequately to whatever comes along. The process of innovative design thinking is a huge hedge against technology surprise, but few schools have ever entertained the notion and some have never even heard of it. In some cases, it has been adopted, but as a bastardized hybrid to serve business-as-usual competitive one-upmanship.

I do believe that design, in its newest and most innovative realizations, is the place for these anticipatory discussions about the future. What we need is thinking that encompasses a vast array of cross-disciplinary input, including philosophy and religion, because these issues are more than black and white; they are ethically and morally charged, and they are inseparable from our culture—the scaffolding that we as a society use to answer our most existential questions. There is a lot of work to do if we are to survive ourselves.

 

 


Writing a story that seemingly goes on forever. LSC update.

 

This week I wrapped up the rendering and text for the last episodes of Season 4 of The Lightstream Chronicles. Going back to the original publication calendar that I started in 2012, Chapter 4 was supposed to be thirty-some pages. Over the course of production, the page count grew to more than fifty. I think fifty episodes (pages) are a bit too many for a chapter, since it takes almost a year for readers to get through a “season.” If we look at the weekly episodes in a typical TV drama, there are usually fewer than twenty, which is far fewer than even ten years ago. So in retrospect, fifty episodes could have been spread across two seasons. The time that it takes to create a page, even from a pre-designed script, is one of the challenges in writing and illustrating a lengthy graphic novel. Since the story began, I have had a lot of time to think about my characters, their behavior, and my own futuristic prognostications. While this can be good, giving me extra time to refine, clarify, or embellish the story, it can also be something of a curse as I look back and wish I had not committed a phrase or image to posterity. Perhaps they call that writer’s remorse. This conundrum also keeps things exciting, as I have introduced probably a dozen new characters, scenes, and extensive backstory to the storytelling. Some people might warn that this is a recipe for disaster, but I think the upcoming changes make the story better, more suspenseful, and more engaging.

Since I have added a considerable number of pages and scenes to the beginning of Season 5, the episode count is climbing. It looks as though I’m going to have to add a Season 7, and possibly a Season 8, before the story finally wraps up.


Sitting around with my robo-muse and collecting a check.

 

Writing a weekly blog can be a daunting task especially amid teaching, research and, of course, the ongoing graphic novel. I can only imagine the challenge for those who do it daily. Thank goodness for friends who send me articles. This week the piece comes from The New York Times tech writer Farhad Manjoo. The article is entitled, “A Plan in Case Robots Take the Jobs: Give Everyone a Paycheck.” The topic follows nicely on the heels of last week’s blog about the inevitability of robot-companions. Unfortunately, both the author and the people behind this idea appear to be woefully out of touch with reality.

Here is the premise: After robots and AI have become ubiquitous and mundane, what will we do with ourselves? “How will society function after humanity has been made redundant? Technologists and economists have been grappling with this fear for decades, but in the last few years, one idea has gained widespread interest — including from some of the very technologists who are now building the bot-ruled future,” asks Manjoo.

The answer, strangely enough, seems to be coming from venture capitalists and millionaires like Albert Wenger, who is writing a book on the idea of U.B.I. (universal basic income), and Sam Altman, president of the tech incubator Y Combinator. Apparently, they think that $1,000 a month would be about right: “…about enough to cover housing, food, health care and other basic needs for many Americans.”

This equation, $12,000 per year, possibly works for the desperately poor in rural Mississippi. Perhaps it is intended for some 28-year-old citizen with no family or social life. Of course, there would be no money for that iPhone or cable service. Such a mythical person has a $300 rent-controlled apartment (utilities included), benefits covered by the government, doesn’t own a car or buy gas or insurance, and then maybe doesn’t eat either. Though these millionaires clearly have no clue what it costs the average American to eke out a living, they have identified some other fundamental questions:

“When you give everyone free money, what do people do with their time? Do they goof off, or do they try to pursue more meaningful pursuits? Do they become more entrepreneurial? How would U.B.I. affect economic inequality? How would it alter people’s psychology and mood? Do we, as a species, need to be employed to feel fulfilled, or is that merely a legacy of postindustrial capitalism?”

The Times article continues with, “Proponents say these questions will be answered by research, which in turn will prompt political change. For now, they argue the proposal is affordable if we alter tax and welfare policies to pay for it, and if we account for the ways technological progress in health care and energy will reduce the amount necessary to provide a basic cost of living.”

Often, the people that float ideas like this paint them as utopia, but I have a couple of additional questions. Why are venture capitalists interested in this notion? Will they also reduce their income to $1,000 per month? Seriously, that never happens. Instead, we see progressives in government and finance using an equation like this: “One thousand for you. One hundred thousand for me. One thousand for you. One hundred thousand for me…”

Fortunately, it is an unlikely scenario, because it would not move us toward equality but toward a permanent under-class forever dependent on those who have. Scary.


Your robot buddy is almost ready.

 

There is an impressive video out this week showing a rather adept robot going through the paces, so to speak, getting smacked down and then getting back up again. The year is 2016, and the technology is striking. The company is Boston Dynamics. One thing you know from reading this blog is that I subscribe to The Law of Accelerating Returns. So, as we watch this video, if we want to be hyper-critical, we can see that the bot still needs some shepherding, tentatively handles the bumpy terrain, and is slow to get up after a violent shove to the floor. On the other hand, if you are at all impressed with the achievement, then you know that the people working on this will only make it better, more agile, more svelte, and probably faster on the comeback. Let’s call this ingredient one.

 

Last year I devoted several blogs to the advancement of AI, about the corporations rushing to be the go-to source for the advanced version of predictive behavior and predictive decision-making. I have also discussed systems like Amazon Echo that use the advanced Alexa Voice Service. It’s something like Siri on steroids. Here’s part of Amazon’s pitch:

“• Hears you from across the room with far-field voice recognition, even while music is playing
• Answers questions, reads audiobooks and the news, reports traffic and weather, gives info on local businesses, provides sports scores and schedules, and more with Alexa, a cloud-based voice service
• Controls lights, switches, and thermostats with compatible WeMo, Philips Hue, Samsung SmartThings, Wink, Insteon, and ecobee smart home devices
• Always getting smarter and adding new features and skills…”

You’re supposed to place it in a central position in the home, where it is proficient at picking up your voice from the other room. Just don’t go too far. The problem with Echo is that it’s stationary. Call Echo ingredient two. What Echo needs is ingredient one.

Some of the biggest players in the AI race right now are Google, Amazon, Apple, Facebook, IBM, Elon Musk, and the military, though the government probably won’t give you a lot of details on that. All of these billion-dollar operations have a vested interest in machine learning and predictive algorithms, not least Google. Google already uses Voice Search (“OK, Google”) to enable type-free searching, and its AI software library TensorFlow has been open-sourced. The better a machine can learn, the more reliable the Google autonomous vehicle will be. Any one of these players could be ingredient three.

I’m going to try to close the loop on this, at least for today. Guess who owns Boston Dynamics? That would be X, formerly known as Google X.

 
