Category Archives: Futurist Topics

Election lessons. Beware who you ignore.

It was election week here in America, but unless you’ve been living under a rock for the last eight months, you already know that. Not unlike the Brexit vote from earlier this year, the outcome genuinely surprised a lot of people. Perhaps most surprising to me is that the people who seem to be the most surprised are the people who claimed to know—for certain—that the outcome would be otherwise. Why do you suppose that is? There is a lot of finger-pointing and head-scratching going on, but from what I’ve seen so far, none of these so-called experts has a clue why they were wrong.

Most of them are blaming the polls for their miscalculations. And it’s starting to look like their error came not in whom they polled but in whom they deemed irrelevant and ignored. Many in the media are in denial that their efforts to shape the election may even have fueled the fire for the underdog. What has become of American journalism is shameful. Wikileaks proves that ninety percent of the media was kissing up to the left, with pre-approved interviews, stories, and marching orders to “shape the narrative.” I don’t care who you were voting for; that kind of collusion is a disgrace to democracy. Call it Pravda. I don’t want to turn this blog into political commentary, but it was amusing to watch them all wearing the stupid hat on Wednesday morning. What I do want to talk about, however, is how we look at data to reach a conclusion.

In a morning-after article on LinkedIn, futurist Peter Diamandis posted the topic, “Here’s what election campaign marketing will look like in 2020.” It was less about the election and more about future tech, with occasional references to the election and campaign processes. He makes five predictions. First is the news flash that “Social media will have continued to explode. [and that] The single most important factor influencing your voting decision is your social network.” Diamandis says that “162 million people log onto Facebook at least once a month.” I agree with the first part of his statement, but what about the other 50%, the people who aren’t on social media, and those who don’t share their opinions on politics? A lot of pollsters are looking at the huge disparity between projections and actuals in the 2016 election. They are acknowledging that a lot of people simply weren’t forthcoming in pre-election polling. Those planning to vote for Trump, for example, knew that Trump was a polarizing figure, and they weren’t going to get into it with their friends on social media or even with a stranger taking a poll. Beyond that, I’m willing to bet that a lot of the voters who put the election over the top are in the fifty percent that isn’t on social media. Just look at the demographics for social media.
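To make the “who you ignore” point concrete, here is a minimal, hypothetical sketch of how under-sampling one group skews a poll. All the numbers are invented for illustration; they are not actual 2016 polling data.

```python
# Hypothetical illustration: how ignoring or under-sampling a group skews a poll.
# All numbers below are invented for the example, not real 2016 data.

electorate = {
    # group: (share of actual voters, support for candidate A)
    "online_and_polled": (0.50, 0.55),  # visible on social media, answers pollsters
    "offline_or_silent": (0.50, 0.42),  # rarely polled, rarely posts about politics
}

# True result: weight each group's support by its real share of voters.
true_support = sum(share * support for share, support in electorate.values())

# Polled result: the "silent" group is barely reached, so the sample over-weights
# the online group (say 85% of respondents instead of 50%).
sample_weights = {"online_and_polled": 0.85, "offline_or_silent": 0.15}
polled_support = sum(
    sample_weights[g] * support for g, (_, support) in electorate.items()
)

print(f"True support for candidate A:   {true_support:.1%}")    # about 48.5%
print(f"Polled support for candidate A: {polled_support:.1%}")  # about 53%
```

The arithmetic is trivial, but it shows the mechanism: the poll doesn’t have to be wrong about the people it reaches to be wrong about the electorate.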

Peter Diamandis is a brilliant guy, and I’m not here to pick on him. Many of his predictions are quite conceivable. Mostly he’s talking about an increase in data mining, AI that keeps getting better at learning from it, and a laser focus on the individual. If you add this to programmable avatars, improvements in facial recognition, and the Internet of Things, the future means that we are all going to be tracked in increasing levels of detail. And though our faces are probably not something we can keep secret, if it all creeps you out, remember that much of this is based on what we choose to share. Fortunately, it will take a little longer than 2020 for all of these new technologies to read our minds, so until then we still hold the cards. As long as you don’t share your most private thoughts on social media or with pollsters, you’ll keep them guessing.


Yes, you too can be replaced.

Over the past weeks, I have begun to look at the design profession and design education in new ways. It is hard to argue with the idea that all design is future-based. Everything we design is destined for some point beyond now where the thing or space, the communication, or the service will exist. If it already existed, we wouldn’t need to design it. So design is all about the future. For most of the 20th century and the first 16 years of this one, the lion’s share of our work as designers has focused primarily on very near-term, very narrow solutions: a better tool, a more efficient space, a more useful user interface, a more satisfying experience. In fact, the tighter the constraints and the narrower the problem statement, the greater the opportunity to apply design thinking to resolve it in an elegant and, hopefully, aesthetically or emotionally pleasing way. Such challenges are especially gratifying for seasoned professionals, who have developed an almost intuitive eye for framing these dilemmas in ways that yield novel and efficient solutions. Hence, over the course of years or even decades, the designer amasses a sort of micro-scale, big-data assemblage of prior experiences that helps him or her reframe problems and construct—alone or with a team—satisfactory methodologies and practices to solve them.

Coincidentally, this process of gaining experience is exactly the idea behind machine learning and artificial intelligence. But since computers can amass knowledge by analyzing millions of experiences and judgments, it is theoretically possible that an artificial intelligence could develop this “intuitive eye” to a degree far surpassing the capacity of any individual designer.

That is the idea behind a brash (and annoyingly self-conscious) article from the American Institute of Graphic Arts (AIGA) entitled Automation Threatens To Make Graphic Designers Obsolete. Titles like this are a hook, of course. Designers, deep down, assume that they can never be replaced. They believe this because, so far, artificial intelligence lacks understanding, empathy, and emotional verve at its core. We saw this earlier in 2016 when an AI chatbot went Nazi because a bunch of social media hooligans realized that Tay (the name of the Microsoft chatbot) was in learn mode. If you told “her” that Nazis were cool, she believed you. It was proof, again, that junk in is junk out.
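As a toy illustration of the “junk in is junk out” problem (a generic sketch of my own, not how Tay actually worked), imagine a bot in learn mode that simply parrots whatever it is told most often, with no filter on the input:

```python
from collections import Counter


class NaiveChatbot:
    """A toy 'learn mode' bot: it repeats whatever it hears most, with no filtering."""

    def __init__(self):
        self.heard = Counter()

    def learn(self, phrase: str) -> None:
        # No judgment, no values, no context: every input counts the same.
        self.heard[phrase] += 1

    def reply(self) -> str:
        if not self.heard:
            return "Tell me something!"
        # The bot's 'opinion' is just the most frequent thing it has been fed.
        return self.heard.most_common(1)[0][0]


bot = NaiveChatbot()
for phrase in ["puppies are great"] * 3 + ["terrible opinion"] * 10:
    bot.learn(phrase)

print(bot.reply())  # "terrible opinion" -- whoever floods the input steers the bot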

The AIGA author, Rob Peart, pointed to Autodesk’s Dreamcatcher software, which is capable of rapidly generating surprisingly creative, albeit roughly detailed, prototypes. Peart features a quote from an executive creative director at the techno-ad-agency SapientNitro: “A designer’s role will evolve to that of directing, selecting, and fine tuning, rather than making. The craft will be in having vision and skill in selecting initial machine-made concepts and pushing them further, rather than making from scratch. Designers will become conductors, rather than musicians.”

I like the way we always position new technology in the best possible light: “You’re not going to lose your job. Your job is just going to change.” But tell that to the people who used to write commercial music, for example. The Internet has become a vast clearinghouse for every possible genre of music, all available for a pittance of what it would cost to have a musician write, arrange, and produce a custom piece. It’s called stock. There are stock photographs, stock logos, stock book templates, stock music, stock house plans, and the list goes on. All of these have caused a significant disruption to old methods of commerce, and some would say that these stock versions of everything lack the kind of polish and ingenuity that used to distinguish artistic endeavors. The artists whose jobs they have obliterated refer to the work with a four-letter word.

Now, I confess I have used stock photography and stock music, but I have also used a lot of custom photography and custom music as well. Still, I can’t imagine crossing the line to a stock logo or a stock publication design. Perish the thought! Why? Because they look like four-letter words: homogenized, templated, and the world does not need more blah. It’s likely that we also introduced these new forms of stock commerce in the best possible light, as great democratizing innovations that would enable everyone to afford music, or art, or design; that anyone could make, create, or borrow the things that professionals used to do.

As artificial intelligence becomes better at composing music, writing blogs and creating iterative designs (which it already does and will continue to improve), we should perhaps prepare for the day when we are no longer musicians or composers but rather simply listeners and watchers.

But let’s put that in the best possible light: Think of how much time we’ll have to think deep thoughts.


Invalid?

In a scene from the 1997 movie Gattaca, co-star Uma Thurman steals a follicle of hair from love-interest Ethan Hawke and takes it to the local DNA sequencing booth (presumably they’re everywhere, like McDonald’s) to find out if Hawke’s DNA is worthy of her affections. She passes the follicle in a paper-thin wrapper through a pass-through window as if she were buying a ticket for a movie. The attendant asks, “You want a full sequence?” Thurman confirms and then waits anxiously. Meanwhile, others step up to windows to submit their samples. A woman who just kissed her boyfriend has her lips swabbed and assures the attendant that the sample is only a couple of minutes old. In about a minute, Thurman receives a plastic tube with the results rolled up inside. Behind the glass, a voice says, “Nine point three. Quite a catch!”

 

In the futuristic society depicted in the movie, humans are either “valid” or “invalid.” Though discrimination based on your genetic profile is illegal and referred to as “genoism,” it is widely known to be a distinguishing factor in employment, promotion, and finding the right soul-mate.

Enter the story of Illumina, which I discovered by way of a FastCompany article earlier this week. Illumina is a hardware/software company. One might imagine them as the folks who make the fictitious machines behind the DNA booths in a science fiction future. Except they are already making them now. The company, which few of us have ever heard of, has 5,000 employees and more than $2 billion in annual revenues. Illumina’s products are selling like hotcakes, in both the clinical and consumer spheres.

“Startups have already entered the clinical market with applications for everything from “liquid biopsy” tests to monitor late-stage cancers (an estimated $1 billion market by 2020, according to the business consulting firm Research and Markets), to non-invasive pregnancy screenings for genetic disorders like Down Syndrome ($2.4 billion by the end of 2022).”

According to FastCo,

“Illumina has captured more than 70% of the sequencing market with these machines that it sells to academics, pharmaceutical companies, biotech companies, and more.”

You and I can do this right now. Companies like Ancestry.com and 23andMe will work up a profile of your DNA from a little bit of saliva sent through the mail. A few weeks after you submit your sample, these companies will send you a plethora of reports: your carrier status (for passing on inherited conditions), ancestry reports that track your origins, wellness reports such as your propensity to be fat or thin, and traits like blue eyes or a unibrow. All of this costs about $200. Considering that sequencing DNA on this scale was a pipe dream ten years ago, it’s kind of a big deal. They don’t sequence everything; that requires one of Illumina’s more sophisticated machines and costs about $3,000.

If you put this technology in the context of my last post about exponential technological growth, then it is easy to see that the price of machines, the speed of analysis, and the cost of a report are only going to come down, and faster than we think. At this point, everything will be arriving faster than we think. Here, if only to get your attention, I ring the bell. Illumina is investing in companies that bring this technology to your smartphone. With one company, Helix, “A customer might check how quickly they metabolize caffeine via an app developed by a nutrition company. Helix will sequence the customers’ genomic data and store it centrally, but the nutrition company delivers the report back to the user.” The team from Helix predicts “[…]that the number of people who have been sequenced will drastically increase […]that it will be 90% of people within 20 years.” (So, probably ten years is a better guess.)
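A back-of-the-envelope sketch of why “faster than we think” is plausible: assume, purely for illustration, that the roughly $3,000 full-sequence price keeps halving every two years. The halving period is my assumption, not a measured figure; historically, sequencing costs have at times fallen even faster than this.

```python
# Illustrative only: assume the ~$3,000 full-sequence price halves every two years.
# The halving period is an assumption for the sketch, not a measured figure.

cost = 3000.0
for year in range(0, 21, 2):
    print(f"Year {year:2d}: ${cost:,.2f}")
    cost /= 2  # after ten halvings (year 20), the price is down to a few dollars
```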

According to the article, the frontier for genomics is expanding.

“What comes next is writing DNA, and not just reading it. Gene-editing tools like CRISPR-Cas9 are making it cheaper and faster to move genes around, which has untold consequences for changing the environment and treating disease.”

CRISPR can do a lot more than that.

But, as usual, all of these developments focus on the bright side, the side that saves lives, and not on the uncomfortable or unforeseen. There is the potential that your DNA will determine your insurance rates, or even whether you get insurance at all. Toying around with these realms, it is not difficult to imagine that you could “find anyone’s DNA,” the way you can find anybody’s address or phone number. Maybe we’ll see this feature incorporated into dating sites. You won’t have to steal a hair follicle from your date; it will already be online, and if they don’t publish it, certainly people will ask, “What do you have to hide?”

And then there’s the possibility that your offspring might inherit an unfavorable trait, like that unibrow or maybe Down Syndrome. So maybe those babies will never be born, or we’ll use CRISPR to make sure the nose is straight, the eyes are green, the skin is tan, and the IQ is way up there. CRISPR gene editing and splicing will be expensive, of course. Some will be able to afford it. The rest? Well, they’ll have to find a way to love their children, flaws and all. So here are my questions: Will this make us more human or less human? Will our DNA become just another way to judge each other on how smart, or thin, or good-looking, or talented we are? Is it just another way to distinguish between the haves and have-nots?

If the apps are already in design, Uma Thurman may not have long to wait.


Did we design this future?

 

I have held this discussion before, but a recent video from FastCompany reinvigorates a provocative aspect of our design future and raises the question: Is it a future we’ve designed? It centers on the smartphone. There are a lot of cool things about our smartphones, like convenience, access, connectivity, and entertainment, just to name a few. It’s hard to believe that Steve Jobs released the very first iPhone just nine years ago, on June 29, 2007. It was an amazing device, and it’s no shocker that it took off like wildfire. According to the stats site Statista, “For 2016, the number of smartphone users is forecast to reach 2.08 billion.” Indeed, we can say they are everywhere. In the world of design futures, the smartphone becomes Exhibit A of how an evolutionary design change can spawn a complex system.

Most notably, there are the millions of apps available to users that promise a better way to calculate tips, listen to music, sleep, drive, search, exercise, meditate, or create. Hence, there is a gigantic network of people who make their living supplying user services. These are direct benefits to society and commerce. No doubt, our devices have also saved us countless hours of analog work, enabled us to manage our arrivals and departures, and kept us in contact (however tenuously) with our friends and acquaintances. Smartphones have helped us find people in distress and helped us locate persons with evil intent. But there are also unintended consequences, like legislation to keep us from texting and driving, because those behaviors have taken lives. There are issues with dependency and links to sleep disorders. Some lament the deterioration of human, one-on-one, face-to-face dialog and the distracted conversations at dinner or lunch. There are behavioral disorders, too. Since 2010 there have been a Smartphone Addiction Rating Scale (SARS) and a Young Internet Addiction Scale (YIAS). Overuse of mobile phones has prompted dozens of studies of adolescents as well as adults, and there are links to increased levels of ADHD and a variety of psychological disorders, including stress and depression.

So, while we rely on our phones for all the cool things they enable us to do, we are—in less than ten years—experiencing a host of unintended consequences. One of these is privacy. Whether Apple or another brand, the intricacies of smartphone technology are substantially the same. This video shows how easy it is to hack your phone: to activate its microphone or camera, access your contact list, or track your location. And, with the right tools, it is frighteningly simple. What struck me most after watching the video was not how much we are at risk of being hacked, eavesdropped on, or perniciously viewed, but the comments from a woman on the street. She said, “I don’t have anything to hide.” She is not the first millennial I have heard say this. And that is what, perhaps, bothers me most—our adaptability in the face of the slow, incremental erosion of what used to be our private space.

We can’t pin the responsibility entirely on the smartphone. We have to include the idea of social media, going back to the days of (amusingly) MySpace. Sharing yourself with a group of close friends gradually gave way to the knowledge that the photo or info might also get passed along to complete strangers. It wasn’t, perhaps, your original intention, but, oh well, it’s too late now. Maybe that’s when we decided that we had better get used to sharing our space, our photos (compromising or otherwise), our preferences, our adventures and misadventures with outsiders, even if they were creeps trolling for juicy tidbits. As we chalked up that seemingly benign modification of our behavior to adaptability, the first curtain fell. If someone is going to watch me, and there’s nothing I can do about it, then I may as well get used to it. We adjusted as a defense mechanism. Paranoia was the alternative, and no one wants to think of themselves as paranoid.

A few weeks ago, I posted an image of Mark Zuckerberg’s laptop with tape over the camera and microphone. Maybe he’s more concerned with privacy since his world is full of proprietary information. But, as we become more accustomed to being both constantly connected and potentially tracked or watched, when will the next curtain fall? If design is about planning, directing or focusing, then the absence of design would be ignoring, neglecting or turning away. I return to the first question in this post: Did we design this future? If not, what did we expect?


Defining [my] design fiction.

 

It’s tough to define something that is still so new that, in practice, there is no prescribed method and there are dozens of interpretations. I met some designers at a recent conference in Trento, Italy, who insist they invented the term in 1995, but most authorities attribute the origin to Bruce Sterling in his 2005 book, Shaping Things. The book was not about design fiction per se. Sterling is fond of creating neologisms, and this was one of those (like the term ‘spime’) that appeared in that book. It caught on. Sometime later, Sterling sought to clarify it, and his most quoted definition is, “The deliberate use of diegetic prototypes to suspend disbelief about change.” If you rattle that off to most people, they look at you glassy-eyed. Fortunately, in 2013, Sterling went into more detail.

“‘Deliberate use’ means that design fiction is something that people do with a purpose. ‘Diegetic’ is from film and theatre studies. A movie has a story, but it also has all the commentary, scene-setting, props, sets and gizmos to support that story. Design fiction doesn’t tell stories — instead, it designs prototypes that imply a changed world. ‘Suspending disbelief’ means that design fiction has an ethics. Design fictions are fakes of a theatrical sort, but they’re not wicked frauds or hoaxes intended to rob or fool people. A design fiction is a creative act that puts the viewer into a different conceptual space — for a while. Then it lets him go. Design fiction has an audience, not victims. Finally, there’s the part about ‘change’. Awareness of change is what distinguishes design fictions from jokes about technology, such as over-complex Heath Robinson machines or Japanese chindogu (‘weird tool’) objects. Design fiction attacks the status quo and suggests clear ways in which life might become different.” (Sterling, 2013)

The above definition is the one on which I base most of my research. I’ve written before about what distinguishes it from science fiction, but I bring this up today because I frequently run into things that are labeled design fiction but are not. There are three non-negotiables for me: change, a critical eye on change, and suspending disbelief.

Change
Part of the intent of design fiction is to get you to think about change. Things are going to change. It implies a future. I suppose it doesn’t mean that the fiction itself has to take place in the future; however, since we can’t go back in time, the only kind of change we’re going to encounter is the future variety. So, if the intent is to make us think, that thinking should have some redeeming benefit in the present, making us better prepared for the future. Such as, “Wow. That future sucks. Let’s not let that happen.” Or, “Careful with that future scenario; it could easily go awry.” Like that.

A critical eye on change.
There are probably a lot of practitioners who would disagree with me on this point. The human race has a proclivity for messing things up. We often develop things before actually thinking about what they might mean for society, or an economy, or our health, our environment, or our behavior. We design way too much stuff just because we can and because it might make us rich if we do. We need to think more before we act, and that means we need to take some responsibility for what we design. Looking into the future with a critical eye on how things could go wrong, or on how wrong they might already be without our noticing, is a crucial element in my interpretation of intent.

Suspending disbelief
As Sterling says, the objective here is not to fool you but to get close enough to a realistic scenario that you accept that it could happen. If it’s off-the-wall, WTF conceptual art, absent any plausible existence, or sheer fantasy, it misses the point. I’m sure there’s a place for those, and no doubt a purpose, but call it something else, not design fiction. It’s the same reason that Star Wars is not design fiction. There’s design and there’s fiction, but the intent is different.

I didn’t intend for this to turn into a rant, and it may all seem to you like splitting hairs, but often these subtle differences are important so that we know what we’re studying and why.

The nice thing about blogs is that if you have a different opinion, you can share.

 

Sterling, B., 2013. Design Fiction: “Patently Untrue” by Bruce Sterling [WWW Document]. WIRED. URL http://www.wired.co.uk/magazine/archive/2013/10/play/patently-untrue (accessed 12.12.14).

Privacy or paranoia?

 

If you’ve been a follower of this blog for a while, then you know that I am something of a privacy wonk. I’ve written about it before (about a dozen times), and I’ve even built a research project (that you can enact yourself) around it. A couple of things transpired this week to remind me that privacy is tenuous. (It could also be all the back episodes of Person of Interest that I’ve been watching lately, or David Staley’s post last April about the Future of Privacy.) First, I received an email from a friend this week alerting me to a presumptuous little discovery: my software is spying on me. I’m old enough to remember when you purchased software as a set of CDs (or even disks). You loaded it on your computer, and it seemed to last for years before you needed to upgrade. Let’s face it, most of us use only a small subset of the features in our favorite applications. I remember using Photoshop 5 for quite a while before upgrading, and the same with the rest of what is now called the Adobe Creative Suite. I still use the primary features of Photoshop 5, Illustrator 10, and InDesign (ver. whatever) 90% of the time. In my opinion, the add-ons to those apps have just slowed things down, and of course, the expense has skyrocketed. Gone are the days when you could upgrade your software every couple of years. Now you have to subscribe at a clip of about $300 a year for the Adobe Creative Suite. Apparently, the old business model was not profitable enough. But then came the Adobe Creative Cloud. (Sound of an angelic chorus.) Now it takes my laptop about 8 minutes to load into the cloud and boot up my software. Plus, it stores stuff. I don’t need it to store stuff for me. I have backup drives and archive software to do that.

Back to the privacy discussion. My friend’s email alerted me to this little tidbit hidden inside the Creative Cloud Account Manager.

Learn elsewhere, please.

Under the Security and Privacy tab, there are a couple of options. The first is Desktop App Usage. Here, you can turn this on or off. If it’s on, one of the things it tracks is,

“Adobe feature usage information, such as menu options or buttons selected.”

That means it tracks your clicks and menu selections, in effect, how you use the software. Possibly this only occurs when you are using that particular app, but uh-uh, no thanks. Switch that off. Next up is a more pernicious option; it’s called Machine Learning. Hmm. We all know what that is, and I’ve written about that before, too. Just do a search. Here, Adobe says,

“Adobe uses machine learning technologies, such as content analysis and pattern recognition, to improve our products and services. If you prefer that Adobe not analyze your files to improve our products and services, you can opt-out of machine learning at any time.”

Hey, Adobe, if you want to know how to improve your products and services, how about you ask me, or better yet, pay me to consult. A deeper dive into ‘machine learning’ tells me more. Here are a couple of quotes:

“Adobe uses machine learning technologies… For example, features such as Content-Aware Fill in Photoshop and facial recognition in Lightroom could be refined using machine learning.”

“For example, we may use pattern recognition on your photographs to identify all images of dogs and auto-tag them for you. If you select one of those photographs and indicate that it does not include a dog, we use that information to get better at identifying images of dogs.”

Facial recognition? Nope. Help me find dog pictures? Thanks, but I think I can find them myself.

I know how this works. The more data the machine can feed on, the better it becomes at learning. I would just rather Adobe get their data by searching it out themselves. I’m sure they’ll be okay. (After all, there are a few million people who never look at their account settings.) Also, keep in mind, it’s their machine, not mine.
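Here is a minimal, hypothetical sketch of the feedback loop the Adobe text describes; the classifier and data are stand-ins of my own, not Adobe’s actual system. The model auto-tags images, and user corrections are folded back in as new labeled examples.

```python
# Hypothetical sketch of an auto-tagging feedback loop (stand-in code, not Adobe's).
from typing import Callable, List, Tuple


def auto_tag(images: List[str], classify: Callable[[str], str]) -> List[Tuple[str, str]]:
    """Tag each image with whatever the current model predicts."""
    return [(img, classify(img)) for img in images]


def collect_corrections(tags, user_feedback):
    """User says 'that's not a dog' -> the correction becomes a new labeled example."""
    corrections = []
    for (img, predicted), actual in zip(tags, user_feedback):
        if actual != predicted:
            corrections.append((img, actual))
    return corrections


# Toy 'model': guesses 'dog' for everything.
naive_classify = lambda img: "dog"

images = ["IMG_001.jpg", "IMG_002.jpg", "IMG_003.jpg"]
tags = auto_tag(images, naive_classify)
feedback = ["dog", "cat", "dog"]           # what the user says each photo really shows
new_training_data = collect_corrections(tags, feedback)
print(new_training_data)                   # [('IMG_002.jpg', 'cat')] -> used to retrain
```

The point of the sketch is simply that every correction you make is free labeled data for their machine, which is exactly why the setting defaults to on.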

The last item on my privacy rant just validated my paranoia. I ran across this picture of Facebook billionaire Mark Zuckerberg hamming it up for his FB page.


 

In the background is his personal laptop. Upon closer inspection, we see that Zuck has a piece of duct tape covering his laptop cam and his dual microphones on the side. He knows.


 

Go get a piece of tape.


Step inside The Lightstream Chronicles

Some time ago I promised to step inside one of the scenes from The Lightstream Chronicles. To commemorate the debut of Season 5, which goes live today, I’m going to deliver on that promise, at least partially.

 

Background

The notion started after I gave my students a tour of the Advanced Computing Center for Arts and Design (ACCAD) motion-capture lab. We were discussing VR, and sadly, despite all the recent hype, very few of us, including me, had ever experienced state-of-the-art virtual reality. On that tour, it occurred to me that, through the past five years of continuous work on my graphic novel, a story built entirely in CG, I have a trove of scenes and scenarios that I could, in effect, step into. Of course, it is not that simple, as I have discovered this summer working with ACCAD’s animation specialist Vita Berezina-Blackburn. It turns out that my extremely high-resolution scenes are not ideally compatible with the Oculus pipeline.

The idea was, at first, a curiosity for me, but it quickly became apparent that there was another level of synergy with my work in guerrilla futures, a flavor of design fiction.

Design fiction, my focus of study, centers on the idea that, through prototypes and future narratives, we can engage people in thinking about possible futures, get them to discuss and debate those futures, and instill the idea of individual agency in shaping them. Unfortunately, too much design fiction ends up in the theoretical realm, within the confines of the art gallery, academic conferences, or workshops. The instances are few where the general public receives a future experience to contemplate and consider. Indeed, it has been something of a lament for me that my work in future fiction through the graphic novel can be experienced as pure entertainment without acknowledging the deeper issues of its socio-techno themes. At the core of the experiential design fiction introduced by Stuart Candy (2010) is the notion that future fictions can be inserted into everyday life whether the recipient has asked for them or not. The technique is one method of making the future real enough for us to ask whether this is the future we want and, if not, what we might do about it now.

Through my recent meanderings in VR, I see that this idea of immersive futures could be an incredibly powerful way of delivering these experiences.

The scene from Season 1 that I selected for this test.

 

About the video
This video is a test. We had no idea what we would get after I stripped down a scene from Season 1, and then we had a couple of weeks of trial and error re-making my files to be compatible with the system. One of the things that separates The Lightstream Chronicles from your average graphic novel or webcomic is the fact that you can zoom in 5x to inspect every detail, so it is not uncommon, for example, for me to have more than two hundred 4K textures in any given scene. That also allows me, as the “director,” to change things up and dolly in or out to focus on a character or object within a scene without a resulting loss in resolution. To me, one of the drawbacks of many video games is getting in close to inspect a resident artifact: things usually start to “break up” into pixels the closer you get. However, in a real-time environment, you have to make concessions, at least for now, to make your textures render faster.

For this test, we didn’t apply all two hundred textures, just some essentials: for example, the cordial glasses, the liquid in the bottle, and the array of floating transparent files that hover over Techman’s desk. We did apply the key texture that defines the environment, the rusty, perforated metal wall that encloses Techman’s “safe-room” and protects it from eavesdropping. There are lots of other little glitches beyond unassigned textures, such as intersecting polygons and dozens of needed lighting tweaks, that make this far from prime time.
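For readers curious what “making concessions” on textures can look like in practice, here is a generic sketch of one common approach, batch-downscaling 4K texture files to 1K. This is my own illustration, using Pillow and made-up folder names, not the actual pipeline we used at ACCAD.

```python
# Generic example of one real-time concession: downscale 4K textures to 1K.
# Folder names are hypothetical; this is not the actual ACCAD/Oculus pipeline.
from pathlib import Path

from PIL import Image  # pip install Pillow

SRC = Path("textures_4k")      # assumed folder of 4096x4096 source textures
DST = Path("textures_1k")
DST.mkdir(exist_ok=True)

for tex in SRC.glob("*.png"):
    img = Image.open(tex)
    # Quarter the resolution in each dimension: 4096 -> 1024.
    small = img.resize((img.width // 4, img.height // 4), Image.LANCZOS)
    small.save(DST / tex.name)
    print(f"{tex.name}: {img.size} -> {small.size}")
```

The trade-off is exactly the one described above: smaller textures render much faster in real time, but they break up sooner when you walk in close.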

In the average VR game, you move your controller forward through space while you are either seated or standing. Either way, in most cases you are stationary. What distinguishes this from most VR experiences is that I can physically walk through the scene. For this test, we were in the ACCAD motion capture lab.

Wearing the Oculus in the MoCap lab while Lakshika manages the tether.

I’m sure you have seen pictures of this sort of thing before, where actors strap on sensors to “capture their motions,” which are then translated to virtual CG characters. This was the space in which I was working. It has boundaries, however, so I had to map those boundaries, in scale, to my scene to be sure that the room and the characters fit within the area of the lab. Dozens of tracking devices around the lab read sensors on the Oculus headset so that, once I strap it on, I can move freely within the limits of the physical space while my movements are related to the context of the virtual scene.
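Conceptually, the mapping works something like the sketch below. It is a simplified, made-up example, not the actual ACCAD setup: the tracked headset position in the physical lab is converted into the coordinate space of the virtual scene by a uniform scale, and a boundary check confirms that walking the whole scene stays inside the lab.

```python
# Simplified illustration of mapping a physical tracking space to a virtual scene.
# Dimensions are invented; the real lab and scene differ.

LAB_SIZE = (8.0, 6.0)     # walkable lab floor, in meters (assumed)
SCENE_SIZE = (10.0, 7.5)  # footprint of the virtual room, in scene units (assumed)

# Uniform scale so the whole scene fits inside the lab's walkable area.
scale = min(LAB_SIZE[0] / SCENE_SIZE[0], LAB_SIZE[1] / SCENE_SIZE[1])


def lab_to_scene(x_lab: float, y_lab: float) -> tuple:
    """Convert a tracked headset position (meters in the lab) to scene coordinates."""
    return (x_lab / scale, y_lab / scale)


def scene_fits_lab() -> bool:
    """Check that, at this scale, walking the full scene never leaves the lab."""
    return SCENE_SIZE[0] * scale <= LAB_SIZE[0] and SCENE_SIZE[1] * scale <= LAB_SIZE[1]


print(scale)                   # 0.8: the scene is experienced at 80% of real-world scale
print(lab_to_scene(4.0, 3.0))  # center of the lab -> (5.0, 3.75), center of the scene
print(scene_fits_lab())        # True
```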

Next week I’ll be going back into the lab with a new scene to take a look at Kristin Broulliard and Keiji in their exchange from episode 97 (page) of Season 3.

Next time.

Respond, reply, comment. Enjoy.


Future Shock

 

As you no doubt have heard, Alvin Toffler died on June 27, 2016, at the age of 87. Mr. Toffler was a futurist. The book for which he is best known, Future Shock, was a best seller in 1970 and was considered required college reading at the time. In essence, Mr. Toffler said that the future would be a disorienting place if we just let it happen. He said we need to pay attention.

Credit: Susan Wood/Getty Images from The New York Times 2016

This week, The New York Times published an article entitled Why We Need to Pick Up Alvin Toffler’s Torch by Farhad Manjoo. As Manjoo observes, at one time (the 1950s, 1960s, and 1970s), the study of foresight and forecasting was important stuff that governments and corporations took seriously. Though I’m not sure I agree with Manjoo’s assessment of why that is no longer the case, I do agree that it is no longer the case.

“In many large ways, it’s almost as if we have collectively stopped planning for the future. Instead, we all just sort of bounce along in the present, caught in the headlights of a tomorrow pushed by a few large corporations and shaped by the inescapable logic of hyper-efficiency — a future heading straight for us. It’s not just future shock; we now have future blindness.”

At one time, this was required reading.

When I attended the First International Conference on Anticipation in 2015, I was pleased to discover that the blindness was not everywhere. In fact, many of the people deeply rooted in the latest innovations in science and technology, architecture, social science, medicine, and a hundred other fields are very interested in the future. They see an urgency. But most governments don’t, and I fear that most corporations, even the tech giants, are more interested in being first with the next zillion-dollar technology than in asking whether that technology is the right thing to do. Even less often are they asking what repercussions might flow from these advancements and what the ramifications of today’s decision-making might be. We just don’t think that way.

I don’t believe that has to be the case. The World Future Society, for example, will be addressing the idea of futures studies as a requirement for high school education at its upcoming conference in Washington, DC. They ask,

“Isn’t it surprising that mainstream education offers so little teaching on foresight? Were you exposed to futures thinking when you were in high school or college? Are your children or grandchildren taught how decisions can be made using scenario planning, for example? Or take part in discussions about what alternative futures might look like? In a complex, uncertain world, what more might higher education do to promote a Futurist Mindset?”

It certainly needs to be part of design education, and it is one of the things I vigorously promote at my university.

As Manjoo sums up in his NYT article,

“Of course, the future doesn’t stop coming just because you stop planning for it. Technological change has only sped up since the 1990s. Notwithstanding questions about its impact on the economy, there seems no debate that advances in hardware, software and biomedicine have led to seismic changes in how most of the world lives and works — and will continue to do so.

Yet, without soliciting advice from a class of professionals charged with thinking systematically about the future, we risk rushing into tomorrow headlong, without a plan.”

And if that isn’t just crazy, at the very least it’s dangerous.


Vision comes from looking to the future.

 

I was away last week, but I left off with a post about proving that some of the things we currently think of as sci-fi or fantasy are not only plausible but may even be on their way to reality. In that post, I traced the logical succession toward implantable technology, or biohacking.

The latest is a robot toy from a company called Anki. Once again, WIRED provided the background on this product, and it is an excellent example of technological convergence, which I have discussed many times before. Essentially, “technovergence” is when multiple cutting-edge technologies come together in unexpected and sometimes unpredictable ways. In this case, the toy brings together AI, machine learning, computer vision, robotics, deep character development, facial recognition, and a few other technologies. According to the video below,

“There have been very few applications where a robot has felt like a character that connects with humans around it. For that, you really need artificial intelligence and robotics. That’s been the missing key.”

According to David Pierce, with WIRED,

“Cozmo is a cheeky gamer; the little scamp tried to fake me into tapping my block when they didn’t match, and stormed off when I won. And it’s those little tics, the banging of its lift-like arm and spinning in circles and squawking in its Wall-E voice, that really makes you want to refer to the little guy as ‘he’ rather than ‘it.’”

What strikes me as especially interesting is that my students designed their own version of this last semester. (I’m pretty sure they knew nothing about this particular toy.) The semester was a rigorous design fiction class that took a hard look at what would be possible in the next five to ten years. For some, the class was something like hell, but the features and possibilities my students put together for their robot are amazingly like Cozmo’s.

I think this is proof of more than what is possible; it’s evidence that vision comes from looking to the future.


“At a certain point…”

 

A few weeks ago, Brian Barrett of WIRED magazine published a story headlined “NEW SURVEILLANCE SYSTEM MAY LET COPS USE ALL OF THE CAMERAS.” According to the article,

“Computer scientists have created a way of letting law enforcement tap any camera that isn’t password protected so they can determine where to send help or how to respond to a crime.”

Barrett suggests that America has 30 million surveillance cameras out there. The sentence above, for me, is loaded. First of all, as with most technological advancements, this one is couched in the most benevolent terms: these scientists are going to help law enforcement send help or respond to crimes. This is also the argument the FBI used to try to force Apple to provide a backdoor to the iPhone. It was for the common good.

If you are like me, you immediately see a giant red flag warning of the gaping possibility for abuse. However, we can take heart to some extent. The sentence mentioned above also limits law enforcement access to “any camera that isn’t password protected.” Now the question is: What percentage of the 30 million cameras are password protected? Does the unprotected pool include, for example, more than kennel cams or random weather cams? Does it include the local ATM, traffic, and other security cameras? The system is called CAM2.

“…CAM2 reveals the location and orientation of public network cameras, like the one outside your apartment.”

It can aggregate the cameras in a given area and allow law enforcement to access them. Hmm.
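To picture what “aggregating the cameras in a given area” might involve, here is a hedged sketch that filters a list of known camera locations to those within a radius of an incident. The camera registry and coordinates are invented, and this is not CAM2’s actual method, just the general shape of the idea.

```python
# Hypothetical sketch: find registered public cameras within a radius of a point.
# The camera list and coordinates are invented; this is not CAM2's implementation.
from math import asin, cos, radians, sin, sqrt


def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))


cameras = [  # (camera_id, lat, lon) -- made-up entries
    ("traffic_cam_12", 39.9612, -82.9988),
    ("weather_cam_03", 39.9700, -83.0100),
    ("campus_cam_44",  40.0000, -83.0200),
]


def cameras_near(lat, lon, radius_km, registry):
    """Return the IDs of all registered cameras within radius_km of the given point."""
    return [cam_id for cam_id, c_lat, c_lon in registry
            if haversine_km(lat, lon, c_lat, c_lon) <= radius_km]


print(cameras_near(39.9612, -82.9988, 1.5, cameras))  # the two nearby cams
```

Nothing in that sketch is exotic, which is rather the point: once the locations of unprotected cameras are indexed, pulling up every feed around an address is a trivial query.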

Last week I teased that some of the developments I reserved for 25, 50, or even more years into the future in my graphic novel The Lightstream Chronicles are showing signs of life in the next two or three years. A universal “cam” system like this is one of them; the idea of ubiquitous surveillance, or the mesh, only gets stronger with more cameras. Hence the idea behind my ubiquitous surveillance blog. If there is a system that can identify all of the “public network” cams, how far are we from identifying all of the “private network” cams? How long before these systems are hacked? Or, in the name of national security, how might these systems be appropriated? You may think this is the stuff of sci-fi, but it is also the stuff of design-fi, and design-fi, as I explained last week, is intended to make us think about how these things play out.

In closing, WIRED’s Barrett raised the issue of the potential for abusing systems such as CAM2 with Gautam Hans, policy counsel at the Center for Democracy & Technology. And, of course, we got the standard response:

“It’s not the best use of our time to rail against its existence. At a certain point, we need to figure out how to use it effectively, or at least with extensive oversight.”

Unfortunately, history has shown that that certain point usually arrives after something goes egregiously wrong. Then someone asks, “How could something like this happen?”
