The Robo-Apocalypse. Part 2.

 

Last week I talked about how the South Koreans have developed a 50-caliber-toting, nearly autonomous weapon system and have sold a few dozen around the world. This week I feel obligated to finish up on my promise of the drone with a pistol. I discovered it in a WIRED article, a slightly tongue-in-cheek piece that analyzed a YouTube video and concluded that the pistol-packing drone is probably real. I can’t think of anyone who doesn’t believe this is a really bad idea, including the author of the piece. Nevertheless, if we were to make a list of the unintended consequences of DIY drone technology (just some simple brainstorming), the list, after a few minutes, would be a long one.

This week FastCo reported that NASA held a little get-together with about 1,000 invited guests from the drone industry to talk about a plan to manage the traffic when, as the agency believes, “every home will have a drone, and every home will serve as an airport at some point in the future”. NASA’s plan takes things slowly. Still, the agency predicts that we will be able to get our packages from Amazon and borrow a cup of sugar from Aunt Gladys down the street, even in populated areas, by 2019.

Someone taking action is good news as we work to rein in another poorly conceived technology that quickly went rogue. Unfortunately, it does nothing about the guy who wants to shoot down the Amazon drone for sport (or anyone/anything else, for that matter).

On the topic of bad ideas, this week the Future of Life Institute, a research organization out of Boston, issued an open letter warning the world that autonomous weapons powered by artificial intelligence (AI) were imminent. The reasonable concern here is that a computer will do the kill-or-not-kill, bomb-or-not-bomb thinking, without the human fail-safe. Here’s an excerpt from the letter:

“Unlike nuclear weapons, they require no costly or hard-to-obtain raw materials, so they will become ubiquitous and cheap for all significant military powers to mass-produce. It will only be a matter of time until they appear on the black market and in the hands of terrorists, dictators wishing to better control their populace, warlords wishing to perpetrate ethnic cleansing, etc. Autonomous weapons are ideal for tasks such as assassinations, destabilizing nations, subduing populations and selectively killing a particular ethnic group. We therefore believe that a military AI arms race would not be beneficial for humanity. There are many ways in which AI can make battlefields safer for humans, especially civilians, without creating new tools for killing people.” [Emphasis mine.]

The letter is short. You should read it. For once we have an example of those smart people I alluded to last week, the ones with compassion and vision. For virtually every “promising” new technology (from the seemingly good to the undeniably dangerous), we need people who can foresee the unintended consequences of one-sided promises. Designers, scientists, and engineers are prime candidates to look into the future and wave these red flags. Then the rest of the world needs to pay attention.

Once again, however, the technology is here, and whether it is legal or illegal, banned or not banned, the cat is out of the bag. It is kind of like a nuclear explosion. Some things you just can’t take back.


The robo-apocalypse. Part 1.

Talk of robot takeovers is all the rage right now.

I’m good with this because the evidence is out there that robots will continue to get smarter and smarter, while the human condition, being what it is, ensures that we will continue to do stupid s**t. Here are some examples from the news this week.

1. The BBC reported this week that South Korea has deployed something called the Super aEgis II, a 50-caliber robotic machine gun that knows who is an enemy and who isn’t. At least that’s the plan. The company that built and sells the Super aEgis II is DoDAAM. Maybe that is short for “do damage.” The BBC astutely notes,

“Science fiction writer Isaac Asimov’s First Law of Robotics, that ‘a robot may not injure a human being or, through inaction, allow a human being to come to harm’, looks like it will soon be broken.”

Asimov was more than a great science-fiction writer; he was a Class A futurist. He clearly saw the potential for us to create robots that were smarter and more powerful than we are. He figured there should be some rules. Asimov used the kind of foresight that responsible scientists, technologists, and designers should be using for everything we create. As the article continues, Simon Parkin of the BBC quotes Yangchan Song, DoDAAM’s managing director of strategy planning.

“Automated weapons will be the future. We were right. The evolution has been quick. We’ve already moved from remote control combat devices, to what we are approaching now: smart devices that are able to make their own decisions.”

Or in the words of songwriter Donald Fagen,

“A just machine to make big decisions
Programmed by fellows with compassion and vision…” 1

Relax. The world is full of these fellows. Right now the weapon/robot is linked to a human who gives the OK to fire, and all customers who purchased the 30 units thus far have opted for the human/robot interface. But the company admits,

“If someone came to us wanting a turret that did not have the current safeguards we would, of course, advise them otherwise, and highlight the potential issues,” says Park. “But they will ultimately decide what they want. And we develop to customer specification.”

A 50 caliber round. Heavy damage.

They are currently working on the technology that will help their machine make the right decision on its own, but the article cites several academics and researchers who see red flags waving. Most concur that teaching a robot right from wrong is no easy task. The complexity is compounded because the fellows doing the programming don’t always agree on these issues.

Last week I wrote about Google’s self-driving car. Of course, this robot has to make tough decisions too. It may one day have to decide whether to hit the suddenly appearing baby carriage, the kid on the bike, or just crash the vehicle. In fact, Parkin’s article brings Google into the picture as well, quoting Colin Allen,

“Google admits that one of the hardest problems for their programming is how an automated car should behave at a four-way stop sign…”

Humans don’t do such a good job at that either. And there is my problem with all of this. If the humans who are programming these machines are still wrestling with what is ethically right or wrong, can a robot be expected to do better? Some think so. Over at DoDAAM,

“Ultimately, we would probably like a machine with a very sound basis to be able to learn for itself, and maybe even exceed our abilities to reason morally.”

Based on what?

Next week: Drones with pistols.

 

1. Donald Fagen, “I.G.Y.,” from the album The Nightfly, 1982.

Promises. Promises.

Throughout the course of the week, usually on a daily basis, I collect articles, news blurbs, and what I call “signs from the future.” Mostly they fall into categories such as design fiction, technology, society, future, theology, and philosophy. I use this content sometimes for this blog, possibly for a lecture, but most often for additional research as part of the scholarly papers and presentations that are a matter of course for a professor. I have to weigh what goes into the blog because most of these topics could easily become full-blown papers. Of course, the thing with scholarly writing is that most publications demand exclusivity on publishing your ideas. Essentially, that means it becomes difficult to repurpose anything I write here for something with more gravitas. One of the subjects of growing interest to me is Google. Not the search engine, per se, but rather the technological mega-corp. It has the potential to be just such a paper, so even though there is a lot to say, I’m going to land on only a few key points.

A ubiquitous giant in the world of the Internet, Google has some of the most powerful algorithms, stores your most personal information, and is working on many of the most advanced technologies in the world. The company tries very hard to be soft-spoken and low-key, but that belies its enormous power.

Most of us would agree that technology has provided some marvelous benefits to society, especially in the realms of medicine, safety, education, and other socially beneficial applications. Things like artificial knees, cochlear implants, air bags (when they don’t accidentally kill you), and instantaneous access to the world’s libraries have made life-changing improvements. Needless to say, especially if you have read my blog for any amount of time, technology can also have a downside. We may see greater yields from our agricultural efforts, but technological advancements also pump needless hormones into the populace, create sketchy GMO foodstuffs, and manipulate farmers into planting them. We all know the problems associated with automobile emissions, atomic energy, chemotherapy, and texting while driving. These problems are the obvious stuff. What is perhaps more sinister are the technologies we adopt that work quietly in the background to change us. Most of them we are unaware of until, one day, we are almost surprised to see how we have changed, and maybe we don’t like it. Google strikes me as a potential contributor in this latter arena. A recent article from The Guardian, entitled “Where is Google Taking Us?”, looks at some of the company’s most altruistic technologies (the ones they allowed the author to see). The author, Tim Adams, brought forward some interesting quotes from key players at Google. When discussing how Google would spend some $62 million in cash that it had amassed, Larry Page, one of the company’s co-founders, asked,

“How do we use all these resources… and have a much more positive impact on the world?”

There’s nothing wrong with that question. It’s the kind of question that you would want a billionaire asking. My question is, “What does positive mean, and who decides what is and what isn’t?” In this case, it’s Google. The next quote comes from Sundar Pichai. With so many possibilities that this kind of wealth affords, Adams asked how they stay focused on what to do next.

“’Focus on the user and all else follows…We call it the toothbrush test,’ Pichai says, ‘we want to concentrate our efforts on things that billions of people use on a daily basis.’”

The statement sounds like savvy marketing, but he is also talking about the most innate aspects of our everyday behavior. And so that I don’t turn this into an academic paper, here is one more quote. This time the author is talking to Dmitri Dolgov, principal engineer for Google’s self-driving car project. For the whole idea to work (that is, for the car to react like a human would, only better), it has to think.

“Our maps have information stored and as the car drives around it builds up another local map with its sensors and aligns one to the other – that gives us a location accuracy of a centimetre or two. Beyond that, we are making huge numbers of probabilistic calculations every second.”

Mapping everything down to the centimeter.
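To make Dolgov’s “aligns one to the other” idea a little more concrete, here is a minimal, purely illustrative sketch of probabilistic map alignment on a one-dimensional grid; the map values, sensor readings, and noise model are invented for illustration and have nothing to do with Google’s actual system:

```python
# Toy illustration of probabilistic map alignment (not Google's system):
# score candidate positions by how well the current sensor scan matches
# a previously stored map, then report the most probable position.
import math

stored_map = [0.0, 0.2, 1.0, 0.9, 0.1, 0.0, 0.5, 1.0]  # e.g., curb heights along a street
sensor_scan = [0.9, 0.15, 0.05]                         # what the car "sees" right now

def likelihood(offset, noise=0.1):
    """How likely the scan is if the car were at this offset in the stored map."""
    p = 1.0
    for i, reading in enumerate(sensor_scan):
        expected = stored_map[offset + i]
        p *= math.exp(-((reading - expected) ** 2) / (2 * noise ** 2))
    return p

candidates = range(len(stored_map) - len(sensor_scan) + 1)
scores = {off: likelihood(off) for off in candidates}
total = sum(scores.values())
posterior = {off: s / total for off, s in scores.items()}

best = max(posterior, key=posterior.get)
print(f"Most probable position: index {best} (confidence {posterior[best]:.2f})")
```

A real car does this in two or three dimensions, against rich laser and camera data, many times per second, which is where the “huge numbers of probabilistic calculations” come from.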

It’s the last line that we might want to ponder. Predictive algorithms are what artificial intelligence is all about, the kind of technology that plugs into a whole host of different applications, from predicting your behavior to predicting your abilities. If we don’t want to have to remember to check the oil, there is a light that reminds us. If we don’t want to have to remember somebody’s name, there is a facial recognition algorithm to remember it for us. If my wearable detects that I am stressed, it can remind me to take a deep breath. If I am out for a walk, maybe something should mention all the things I could buy while I’m out (as well as what I am out of).

Here’s what I think about. It seems to me that we are amassing two lists: the things we don’t want to think about, and the things we do. Folks like Google are adding things to Column A, and it seems to be getting longer all the time. My concern is whether we will have anything left in Column B.

 


What games will we change next? Is your game one of them?

A few weeks ago I was blogging about open sourcing, collaboration, and how all these tectonic shifts change industries, professions, and more. Then there was a recent blog on the future of work and how even white-collar jobs, the ones that everyone thought were bullet-proof, are targets for dissolution by artificial intelligence (AI). Could designers be targets as well? That was the subject of the first post I mentioned.

So a couple of things crossed my path this week that could have a bearing on design, manufacturing, collaboration, and what we think of as the traditional maker economy. What bearing will they have? Who knows? I think we should keep an eye on them.

1. Not long ago, WIRED had an article and video on how a “regular guy” without any special “making” skills was able to fabricate, in his home, a fully functioning, untraceable AR-15 rifle. Of course, doing so is illegal, so after testing his weapon on the firing range, he turned over the parts to the local police. It was all in the name of journalism. Check it out.

The fully assembled AR-15. Photo: JOSH VALCARCEL/WIRED

2. I read an article in Harvard Business Review about the very interesting trend in business toward Network Orchestrators. According to HBR, “These companies create a network of peers in which the participants interact and share in the value creation. They may sell products or services, build relationships, share advice, give reviews, collaborate, co-create and more. Examples include eBay, Red Hat, and Visa, Uber, Tripadvisor, and Alibaba.”

What do these things have in common? They are potential game-changers. Heck, they’ve already changed the game. The bigger question for us is what game they will change next. Both models challenge traditional “expertise.” The expert, the specialist, the factory, the tradesman (and in some cases the authorities) are getting edged out. This has applications to just about everything, including medicine and the work that we believed could only be done by the specialist. So what should we do about it? I don’t have the answer, but I can tell you this: we should be thinking about it.

Just sayin’.

 


On better humans and bad bots.

News of breaking future technologies, the stuff at the crux of my research, accumulates as a daily occurrence, and this week is no different. Of note, Zoltan Istvan is (another) 2016 US presidential candidate, but this time for the Transhumanist Party. Transhumanism “(abbreviated as H+ or h+) is an international cultural and intellectual movement with an eventual goal of fundamentally transforming the human condition by developing and making widely available technologies to greatly enhance human intellectual, physical, and psychological capacities”. 1 For those of you who didn’t know. Living forever is job one for the “movement.” Mr. Istvan is not likely to be in the debates, but you can follow him and the rest of H+ at humanity+.org. I’ll reserve comment on this.

On another front, for those who think that once we get this human thing down right, technology will save us and mischief will cease, there is this item from WIRED UK. A couple of researchers at Google (that’s OK, you can trust them) have “created an artificial intelligence that developed its responses based on transcripts from an IT helpdesk chat service and a database of movie scripts.” This AI is called a chatbot. Chatbots are computer programs designed to talk to you. You can try one out here.

According to WIRED’s James Temperton,

“The [Google] system used a neural network — a collection of machines arranged to mimic a human brain — to create its unnervingly human responses. Such networks use a huge amount of computing power and Google’s latest research shows their potential for improving how machines hold conversations.”

Apparently, it was the addition of the movie scripts that made the bot more contentious. Consider this exchange on morality:

“Human: I really like our discussion on morality and ethics …

Machine: And how I’m not in the mood for a philosophical debate.

Human: What do you like to talk about?

Machine: Nothing.”
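As a crude illustration of how directly the “training” material shapes what a bot says back, here is a toy retrieval chatbot; it is nothing like the neural network Google actually used, and every prompt and canned response in it is invented for illustration:

```python
# A toy retrieval "chatbot": it replies with the canned response whose
# prompt shares the most words with whatever you type. Feed it movie-script
# attitude and movie-script attitude is exactly what comes back out.
corpus = {
    "i really like our discussion on morality and ethics":
        "and how i'm not in the mood for a philosophical debate.",
    "what do you like to talk about":
        "nothing.",
    "how do i reset my password":
        "have you tried turning it off and on again?",
}

def reply(utterance: str) -> str:
    words = set(utterance.lower().replace("?", "").split())
    best_prompt = max(corpus, key=lambda prompt: len(words & set(prompt.split())))
    return corpus[best_prompt]

if __name__ == "__main__":
    print(reply("What do you like to talk about?"))  # -> "nothing."
```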

Fun with programming. All of this points to the old adage, “Junk in is junk out.” In The Lightstream Chronicles, the future version of this mischief is called twisting. Basically, you take a perfectly good, well-behaved synthetic human and put in some junk. The change in programming is generally used to make these otherwise helpful synths do criminal things.

The logo says it all.

This tendency we have as human beings to twist good ideas into bad ones is nothing new, and today’s headlines are evidence of it. We print guns with 3D printers, we use drones to vandalize, cameras to spy, and computers to hack. Perhaps that is what Humanity+ has in mind: make humanity more technologically advanced. More like a… machine. Then reprogram the humanness (which just leads to no good) out of it. What could possibly go wrong with that?

 

1 https://en.wikipedia.org/wiki/Transhumanism

Breathing? There’s an app for that.

As the Internet of Things (IoT) and Ubiquitous Computing (UbiComp) continue to advance, there really is no more room left for surprise. These things are cascading out of Silicon Valley, crowd-funding sites, labs, and start-ups with continually accelerating speed. And like Kurzweil, I think it’s happening faster than 95 percent of the world expects. A fair number of these are duds and, frankly, superfluous attempts at “computing” what otherwise, with a little mental effort, we could do on our own. Ian Bogost’s article this week in The Atlantic, “The Internet of Things You Don’t Really Need,” points out how many of these “innovations” replace just the slightest amount of extra brain power, ever-so-minimal physical activity, or prescient concentration. Not to mention that these apps just supply another entry into your personal digital footprint. More in the week’s news (this stuff is happening everywhere), this time from FastCompany: an MIT alumna is concerned about how little “face time” her kids are getting with real humans because they are constantly in front of screens or tablets. (Human-to-human interaction is important for the development of emotional intelligence.) The solution? If you think it is less time on the tablet and more “go out and play,” you are behind the times. The researcher, Rana el Kaliouby, has decided that she has the answer:

“Instead, she believes we should be working to make computers more emotionally intelligent. In 2009, she cofounded a company called Affectiva, just outside Boston, where scientists create tools that allow computers to read faces, precisely connecting each brow furrow or smile line to a specific emotion.”

Of course it is. Now, what we don’t know, don’t want to learn (by doing), or just don’t want to think about, our computer or app will do for us. The FastCo author, Elizabeth Segran, interviewed el Kaliouby:

“The technology is able to deduce emotions that we might not even be able to articulate, because we are not fully aware of them,” El Kaliouby tells me. “When a viewer sees a funny video, for instance, the Affdex might register a split second of confusion or disgust before the viewer smiles or laughs, indicating that there was actually something disturbing to them in the video.”

Oh my.

“At some point in the future, El Kaliouby suggests fridges might be equipped to sense when we are depressed in order to prevent us from binging on chocolate ice cream. Or perhaps computers could recognize when we are having a bad day, and offer a word of empathy—or a heartwarming panda video.”

Please no.
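For a rough sense of what “connecting each brow furrow or smile line to a specific emotion” might look like in its most naive form, here is a purely illustrative sketch; the feature names, weights, and emotion templates are invented, and Affdex’s real models are trained statistically on enormous sets of faces rather than hand-coded like this:

```python
# Purely illustrative: score a face against hand-made "emotion templates."
# Real affective-computing systems learn these mappings from data.
face_measurements = {"brow_furrow": 0.8, "smile": 0.1, "nose_wrinkle": 0.3, "eye_widen": 0.2}

emotion_templates = {
    "confusion": {"brow_furrow": 1.0, "eye_widen": 0.5},
    "disgust":   {"nose_wrinkle": 1.0, "brow_furrow": 0.4},
    "joy":       {"smile": 1.0, "eye_widen": 0.3},
}

def score(measurements, template):
    """Weighted sum: how strongly this face matches one emotion template."""
    return sum(measurements.get(feature, 0.0) * weight
               for feature, weight in template.items())

scores = {emotion: round(score(face_measurements, template), 2)
          for emotion, template in emotion_templates.items()}
print(scores, "->", max(scores, key=scores.get))  # the furrowed brow wins: "confusion"
```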

By the way, this is exactly the type of technology that is at the heart of the mesh, the ubiquitous surveillance system in The Lightstream Chronicles. In addition to having learned every possible variation of human emotion, this software has also learned physical behavior such that it can tell when, or if someone is about to shoplift, attack, or threaten another person. It can even tell if you have any business being where you are or not.

So, before we get swept up in all of the heartwarming possibilities for relating to our computers (shades of Her), and just in case anyone is left who is alarmed at becoming a complete emotional, intellectual, and physical muffin, there is significant new research suggesting that the mind is like a muscle: you use it or lose it, and you can strengthen learning and intelligence by exercising and challenging your mind and cognitive skills. If my app is going to remind me not to be rude, when to brush my teeth, drink water, stop eating, and go to the toilet, what’s left? The definition of post-human comes to mind.

As a designer, I see warning flags. It is precisely a designer’s ability for abstract reasoning that makes problem solving both gratifying and effective. Remember MacGyver? You don’t have to; your life-hacks app will tell you what you need to do. You might also want to revisit a previous blog on computers that are taking our jobs away.

MacGyver. If you don’t know, you’re going to have to look it up.

Yet it would seem that many people think that the only really important human trait is happiness, that ill-defined, elusive, and completely arbitrary emotion. As long as we retain that one, the thinking goes, all those other human traits are things we should evolve out of anyway.

What do you think?


Is it a human right to have everything you want? 

The CBC recently published an article online about a new breakthrough in vision improvement that could give patients eyesight three times better than 20/20. Much like cataract surgery today, which removes an old, yellowed lens from the eye and replaces it with a new, optically correct plastic one, the eight-minute procedure, says its inventor, an optometrist from British Columbia, will give recipients better than 20/20 vision for the rest of their lives, no matter how old they are.

Better than 20/20. Maybe it starts here.

As soon as clinical trials are complete and the regulatory hurdles are cleared, the article says, the implant could be available in as little as two years. Let me be the first to say, “Sign me up!” I’ve had glasses for 20 years and just recently made the move to contacts. Both are a hassle, and the improvement is anything but consistent. Neither solution provides 24-hour correction, and you’re lucky to get 20/20. So, rationally speaking, it’s a major improvement in vision, convenience, and probably safety. On top of that, the CBC article concludes by noting that the inventor/optometrist has set up a foundation,

“…Which would donate money to organizations providing eye surgery in developing countries to improve people’s quality of life.

“Perfect eyesight should be a human right,” he says.

Now I hate to break the poignancy of this moment, but it’s my job. Should perfect eyesight be a human right? How about perfect hearing, ideal bodyweight, genius IQ, super longevity, cranial Internet access, freedom from disease and illness, and perfect health? It’s hard to deny that any of these are good. If you follow my graphic novel, The Lightstream Chronicles, you know that society has indeed opted for all of it and more: enhanced mood control, faster learning, better sex, deeper sleep, freedom from anxiety, stress, worry, bad memories, and making stupid comments. They are all human rights, right? Or is it just human nature to have unlimited expectations and demand instant gratification? It begins with one implant (not unlike the first nip or tuck, or a new tattoo) and then becomes an endless litany of new and improved. But if you posit the argument that these enhancements are desirable, then you are also acknowledging that the current state of humanness is not. Are our shortcomings, disappointments, pain, testing, and struggles to be jettisoned forever? Once we can control everything about ourselves that we don’t like, where will we stop? Will we be happier? Or will there always be that extra thing that we simply must have? Perhaps this is the real definition of human nature: never satisfied.

G.K. Chesterton said, “Meaninglessness does not come from being weary of pain. Meaninglessness comes from being weary of pleasure.”

As I have written about before, all of this is just a small section of a greater organism that is growing in technology. So, as complicated as the whole idea of human augmentation is to think about, it’s actually even more complicated than that. While we cobble together new additions on the old house, there are technologies, such as artificial intelligence (AI), that will surpass our shortcomings better than our replacement-part enhancements will. If you haven’t read Kurzweil’s Law of Accelerating Returns, you should. We are rapidly approaching a time when the impossible will be possible, and we will be staring at it slack-jawed, asking how we got here and why. It can paint a dismal picture, but it is a picture we should look at and study. These are the questions of our time.

And so I create fictional scenarios, firmly convinced that the more disturbing and visceral the picture, the more we will take notice and ask questions before blithely moving forward. This is where I see the heart of design fiction, speculative futures, and—I think the most powerful of all—experiential interventions. That will be something to talk about in a future blog.


Robots will be able to do almost anything, including what you do.

There seems to be a lot of talk these days about what our working future may look like. A few weeks ago I wrote about some of Faith Popcorn’s predictions. Quoting from the Popcorn slide deck,

“Work, as we know it, is dying. Careers and offices: Over. The robots are marching in, taking more and more skilled jobs. To keep humans from becoming totally obsolete, the government must intervene, incentivizing companies to keep people on the payroll. Otherwise, robots would job-eliminate them. For the class of highly-trained elite works, however, things have never been better. Maneuvering from project to project, these free-agents thrive. Employers, eager to secure their human talent, lavish them with luxurious benefits and unprecedented flexibility.  The gap between the Have’s and Have-Nots has never been wider.”

Now, I consider Popcorn to be a marketing futurist; she’s in the business of helping brands. There’s nothing wrong with that, and I agree with almost all of her predictions. But she’s not the only one talking about the future of work. In a recent New York Times Sunday Book Review (forwarded to me by a colleague) of Martin Ford’s Rise of the Robots: Technology and the Threat of a Jobless Future, the author pretty much agrees. According to the review, “Tasks that would seem to require a distinctively human capacity for nuance are increasingly assigned to algorithms, like the ones currently being introduced to grade essays on college exams.” Increasingly, devices like 3D printers or drones can do work that used to require a full-blown manufacturing plant, or that was heretofore simply impossible. Ford’s book goes on to chronicle dozens of instances like this. The reviewer, Barbara Ehrenreich, states, “In ‘Rise of the Robots,’ Ford argues that a society based on luxury consumption by a tiny elite is not economically viable. More to the point, it is not biologically viable. Humans, unlike robots, need food, health care and the sense of usefulness often supplied by jobs or other forms of work.”

In another article, this one in Fast Company, Gwen Moran surveys a couple of PhDs, one from MIT and another who is executive director of the Society for Human Resource Management. The latter, Mark Schmit, agrees that there will be a disparity in the work force. “This winner/loser scenario predicts a widening wealth gap, Schmit says. Workers will need to engage in lifelong education to remain on top of how job and career trends are shifting to remain viable in an ever-changing workplace, he says.” On the other end of the spectrum, some see the future as more promising. The aforementioned MIT prof, Erik Brynjolfsson, “…thinks that technology has the potential for “shared prosperity,” giving us richer lives with more leisure time and freedom to do the types of work we like to do. But that’s going to require collaboration and a unified effort among developers, workers, governments, and other stakeholders…Machines could carry out tasks while programmed intelligence could act as our “digital agents” in the creation and sharing of products and knowledge.”

I’ve been revisiting Stuart Candy’s PhD dissertation, The Futures of Everyday Life, and he surfaces a great quote from science fiction writer Warren Ellis, which itself was surfaced through Bruce Sterling’s State of the World Address at SXSW in 2006. It is,

“[T]here’s a middle distance between the complete collapse of infrastructure and some weird geek dream of electronically knowing where all your stuff is. Between apocalyptic politics and Nerd-vana, is the human dimension. How this stuff is taken on board, by smart people, at street level. … That’s where the story lies… in this spread of possible futures, and the people, on the ground, facing them. The story has to be about people trying to steer, or condemn other people, toward one future or another, using everything in their power. That’s a big story.” 1

This is relevant for design, too, the topic of last week’s blog. It all ties into the future of the future, the stuff I research and blog about. It’s about speculation and design fiction and other things on the fringes of our thinking. The problem is that I don’t think enough people are thinking about it. I think it is still too fringe. What do people do after they read Martin Ford? Does it change anything? In a moment of original thinking, I penned the following thought, and, as is usually the case, subsequently heard it stated in other words by other researchers:

If we could visit the future “in person,” how would it affect us upon our return? How vigorously would we engage our redefined present?

It is why we need more design fiction and the kind that shakes us up in the process.

Comments welcome.

1 http://www.warrenellis.com


Design fiction for designers. Beware of hand grenades.

When the discussion shifts to design futures, as it sometimes does in my line of work, we often think of future artifacts or scenarios, the things we might make in the future as designers. There is, however, another possible discussion to be had, though it probably occurs much less frequently: the future of design, the profession itself. Since my area of research is speculative futures and design fiction, it is worth exploring the idea that design as we know it may radically change, that the distinctive categorization between industrial designer, communications designer, and interior designer may, in time, no longer be fully relevant. In this speculation, the title of “designer” or “design thinker” might someday supplant those distinctions. This might be something of a hand grenade in the design academy, but then that is what design fictions are all about: provocation. It is provocative because most design academies, while vehement supporters of collaboration, are nevertheless set up to teach the disciplines separately. Take the analogy of a hinge on a pair of sunglasses. A “designer” could design the glasses without knowing precisely how the hinge would work. The common response is, “Yes, but someone has to know how to design the hinge; hence the need for the specialist.” Of course, this is precisely why the concept of collaboration is so important in design education, just as it is in design practice. We have long preached that the lone designer is dead. But that is today, and this is a speculative future. It assumes things will change. The reason we create speculative futures is so that we can contemplate these changes and then work to shape the future rather than be surprised by it.

Part of making a design fiction is the idea of logical succession. Auger (2013) concurs: “…it is possible to craft the speculation into something more poignant, based on logical iterations of an emerging technology and tailored to the complex and subtle requirements of an identified audience.” If we don’t have this logical thread, or Auger’s “perceptual bridge,” then we risk losing our audience. Here I run the risk of scaring away my audience, because the idea of design becoming homogeneous can be threatening, especially in the academy, so it is important that I quickly weave the logical thread. The grenade is, in fact, more logical than not, and it centers on the term collaboration. I will agree that someone, somewhere, must design the hinge, or bear the expertise required to select the ideal substrate, or understand the essence of narrative storytelling, but the expertise of the lone specialist may soon be as dead as the lone designer. Take, for example, the whole notion of open source. Essentially, open source is deep thinking and expertise (of people like designers and engineers) that is given away free, to be modified or built upon at will. With the onslaught of self-accelerating technologies (apps being the most primitive, nano-machines being at the other end of the spectrum) that can take basic deep thinking and examine a dozen different permutations in a few minutes or seconds, the idea that we need to keep graduating thousands of new specialized deep thinkers seems almost naive. Is the hinge app, the universal specifier, or the visual collector that far off? How long would it take the sunglass designer to find a suitable hinge via the Internet or some already existing hinge database? If that doesn’t currently exist, come back tomorrow. In this near-future scenario, our specialties will be less valuable.
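Just to make the hinge-database thought experiment concrete, here is an entirely hypothetical sketch; no such parts catalog or API exists in this form, and every name, field, and number below is invented purely for illustration:

```python
# Hypothetical "hinge app": query an imagined open-source parts catalog
# for a hinge that meets the sunglasses' constraints, instead of
# engineering one from scratch.
from dataclasses import dataclass

@dataclass
class Hinge:
    name: str
    width_mm: float
    cycles_rated: int
    open_source: bool

catalog = [  # stand-in for a crowd-sourced parts database
    Hinge("spring-pin-3mm", 3.0, 20_000, True),
    Hinge("barrel-5mm", 5.0, 50_000, True),
    Hinge("flex-living-hinge", 4.0, 10_000, False),
]

def find_hinge(max_width_mm: float, min_cycles: int):
    """Return open-source hinges that fit the frame and survive daily use."""
    return [h for h in catalog
            if h.open_source and h.width_mm <= max_width_mm and h.cycles_rated >= min_cycles]

print(find_hinge(max_width_mm=4.5, min_cycles=15_000))  # -> [spring-pin-3mm]
```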

My goal, once again, is that we begin to contemplate these things now rather than later. Design collaboration is very much about identifying what you don’t know and then knowing how to get it. Managing knowledge first, and then managing your subcontractors, has always been at the heart of realizing great design: a building, a space, a device, a visual medium, even wicked problem-solving. It tells us who else we need at the table—especially the users themselves. Some of us have difficulty imagining a future where we have to change, but this is precisely why we create future narratives: so that we can contemplate them. Within this story, the new collaboration will evolve from the physical team to the virtual team, and that virtual team will include artificial as well as real-world intelligence. The new “designer” may well be the one who is most adept at navigating the crowd-sourced, open-sourced, self-accelerated future: a design thinker with a degree in design thought. The only thing missing here is some characters, some worldbuilding, and a visualization of this fiction. I guess I will have to put that on my to-do list.

 

James Auger (2013) Speculative design: crafting the speculation, Digital Creativity, 24:1, 11-35, DOI: 10.1080/14626268.2013.767276

Falling asleep in the future. 

Prologues to Season 4 : : The Lightstream Chronicles : : Dreamstate

Season 4 Prologue ix-x: Backstory

Every now and then it makes sense to keep readers updated on the scientific and technological developments that were both behavioral and cultural influences in the 22nd century. This addendum could be added to the 2159backstory link on The Lightstream Chronicles site.

It wasn’t until 2047 that technological manipulation of the body’s endocrine system became commonplace. Prior to that, pharmaceuticals were the primary mode of stimulating hormone production in the body, but that solution never seemed to alleviate the side effects that so often accompanied pharma-based protocols. Nevertheless, it was the well-funded pharmaceutical industry, perhaps seeing the writing on the wall, that helped to pioneer the chips that ultimately became the regulators enabling precision balance of the body’s chemistry.

Implanting chips into the body was in full swing by the late 2020s, but this often meant that the body required numerous implants to balance and regulate different processes. The chemchip, as it was called in 2047, was the first to handle multiple functions. Chip #5061189 (the original device) was about the size of a postage stamp and was inserted below the skin in the lumbar region of the back. From there, it was able to trigger or inhibit the adrenal glands, hypothalamus, ovaries, pancreas, parathyroid, pineal gland, pituitary gland, testes, thymus, and thyroid. Programs were written and updated seamlessly to coincide with various life stages and individual preferences. These early implants had a significant effect on overall health and wellness.

Gradually, however, these chips required maintenance and did not work in synergy with other chipsets that were becoming prevalent throughout the body. A series of technological developments over the next 12 to 15 years began to consolidate individual chip functions into what became known as the chipset. You can read more about how the chipset works.

 

Just relax, we’ll take it from here.

Of course, technology marches on, so by the 22nd century the augmented human is an extremely sophisticated combination of technology built on “natural” human elements. Hence, we have the sleep program. This can be anything the user wants it to be, from floating weightless in an imagined, liquid greenspace to lying in a field of tall grass. Then, regulation of the body chemistry can trick the body into thinking it has had 8 hours of sleep in only 3. Think of how much more work you could get done.


About the Envisionist

Scott Denison is an accomplished visual, brand, interior, and set designer. He is currently Assistant Professor of Design Foundations at The Ohio State University. He continues his research in epic design that examines the design-culture relationship within a future narrative — a graphic novel / web comic. The web comic posts weekly updates at: http://thelightstreamchronicles.com. Artist's commentary is also posted here in conjunction with each new comic page. The author's professional portfolio can be found at: http://scottdenison.com. There is also a cyberpunk tumblr site at: http://lghtstrm.tumblr.com