
The Robo-Apocalypse. Part 2.

 

Last week I talked about how the South Koreans have developed a 50-caliber-toting, nearly autonomous weapon system and have sold a few dozen around the world. This week I feel obligated to make good on my promise of the drone with a pistol. I discovered this in a WIRED article, a little tongue-in-cheek piece that analyzed a YouTube video and concluded that the pistol-packing drone is probably real. I can’t think of anyone who doesn’t believe that this is a really bad idea, including the author of the piece. Nevertheless, if we were to brainstorm a list of unintended consequences of DIY drone technology, the list, after a few minutes, would be a long one.

This week FastCo reported that NASA held a little get-together with about 1,000 invited guests from the drone industry to talk about a plan to manage the traffic when, as the agency believes, “every home will have a drone, and every home will serve as an airport at some point in the future.” NASA’s plan takes things slowly. Still, the agency predicts that we will be able to get our packages from Amazon and borrow a cup of sugar from Aunt Gladys down the street, even in populated areas, by 2019.

Someone taking action is good news as we work to fix another poorly conceived technology that quickly went rogue. Unfortunately, it does nothing about the guy who wants to shoot down the Amazon drone for sport (or anyone/anything else for that matter).

On the topic of bad ideas, this week The Future of Life Institute, a research organization out of Boston, issued an open letter warning the world that autonomous weapons powered by artificial intelligence (AI) were imminent. The reasonable concern here is that a computer will do the kill-or-not-kill, bomb-or-not-bomb thinking, without the human fail-safe. Here’s an excerpt from the letter:

“Unlike nuclear weapons, they require no costly or hard-to-obtain raw materials, so they will become ubiquitous and cheap for all significant military powers to mass-produce. It will only be a matter of time until they appear on the black market and in the hands of terrorists, dictators wishing to better control their populace, warlords wishing to perpetrate ethnic cleansing, etc. Autonomous weapons are ideal for tasks such as assassinations, destabilizing nations, subduing populations and selectively killing a particular ethnic group. We therefore believe that a military AI arms race would not be beneficial for humanity. There are many ways in which AI can make battlefields safer for humans, especially civilians, without creating new tools for killing people.” [Emphasis mine.]

The letter is short. You should read it. For once we have an example of those smart people I alluded to last week, the ones with compassion and vision. For virtually every “promising” new technology—from the seemingly good to the undeniably dangerous—we need people who can foresee the unintended consequences of one-sided promises. Designers, scientists, and engineers are prime candidates to look into the future and wave these red flags. Then the rest of the world needs to pay attention.

Once again, however, the technology is here, and whether it is legal or illegal, banned or not banned, the cat is out of the bag. It is kind of like a nuclear explosion. Some things you just can’t take back.


The robo-apocalypse. Part 1.

Talk of robot takeovers is all the rage right now.

I’m good with this because the evidence is out there that robots will continue to get smarter and smarter, but the human condition being what it is, we will continue to do stupid s**t. Here are some examples from the news this week.

1. The BBC reported this week that South Korea has deployed something called the Super aEgis II, a .50-caliber robotic machine gun that knows who is an enemy and who isn’t. At least that’s the plan. The company that built and sells the Super aEgis II is DoDAAM. Maybe that is short for do damage. The BBC astutely notes,

“Science fiction writer Isaac Asimov’s First Law of Robotics, that ‘a robot may not injure a human being or, through inaction, allow a human being to come to harm’, looks like it will soon be broken.”

Asimov was more than a great science-fiction writer; he was a Class A futurist. He clearly saw the potential for us to create robots that were smarter and more powerful than we are. He figured there should be some rules. Asimov used the kind of foresight that responsible scientists, technologists, and designers should be using for everything we create. As the article continues, Simon Parkin of the BBC quotes Yangchan Song, DoDAAM’s managing director of strategy planning.

“Automated weapons will be the future. We were right. The evolution has been quick. We’ve already moved from remote control combat devices, to what we are approaching now: smart devices that are able to make their own decisions.”

Or in the words of songwriter Donald Fagen,

“A just machine to make big decisions
Programmed by fellows with compassion and vision…”1

Relax. The world is full of these fellows. Right now the weapon/robot is linked to a human who gives the OK to fire, and all customers who purchased the 30 units thus far have opted for the human/robot interface. But the company admits,

“If someone came to us wanting a turret that did not have the current safeguards we would, of course, advise them otherwise, and highlight the potential issues,” says Park. “But they will ultimately decide what they want. And we develop to customer specification.”

A 50 caliber round. Heavy damage.

They are currently working on the technology that will help their machine make the right decision on its own, but the article cites several academics and researchers who see red flags waving. Most concur that teaching a robot right from wrong is no easy task. The complexity is compounded because the fellows doing the programming don’t always agree on these issues.

Last week I wrote about Google’s self-driving car. Of course, this robot has to make tough decisions too. It may one day have to decide whether to hit the suddenly appearing baby carriage, the kid on the bike, or just crash the vehicle. In fact, Parkin’s article brings Google into the picture as well, quoting Colin Allen,

“Google admits that one of the hardest problems for their programming is how an automated car should behave at a four-way stop sign…”

Humans don’t do such a good job at that either. And there is my problem with all of this. If the humans who are programming these machines are still wrestling with what is ethically right or wrong, can a robot be expected to do better? Some think so. Over at DoDAAM,

“Ultimately, we would probably like a machine with a very sound basis to be able to learn for itself, and maybe even exceed our abilities to reason morally.”

Based on what?

Next week: Drones with pistols.

 

1. Donald Fagen, “I.G.Y.” from the album The Nightfly, 1982.

Promises. Promises.

Throughout the course of the week, usually on a daily basis, I collect articles, news blurbs and what I call “signs from the future.” Mostly they fall into categories such as design fiction, technology, society, future, theology, and philosophy. I use this content sometimes for this blog, possibly for a lecture, but most often for additional research as part of scholarly papers and presentations that are a matter of course as a professor. I have to weigh what goes into the blog because most of these topics could easily become full-blown papers. Of course, the thing with scholarly writing is that most publications demand exclusivity on publishing your ideas. Essentially, that means it becomes difficult to repurpose anything I write here for something with more gravitas. One of the subjects of growing interest to me is Google. Not the search engine, per se, but rather the technological mega-corp. It has the potential to be just such a paper, so even though there is a lot to say, I’m going to land on only a few key points.

A ubiquitous giant in the world of the Internet, Google has some of the most powerful algorithms, stores your most personal information, and is working on many of the most advanced technologies in the world. The company tries very hard to be soft-spoken and low-key, but that belies its enormous power.

Most of us would agree that technology has provided some marvelous benefits to society, especially in the realms of medicine, safety, education and other socially beneficial applications. Things like artificial knees, cochlear implants, air bags (when they don’t accidentally kill you), and instantaneous access to the world’s libraries have made life-changing improvements. Needless to say, especially if you have read my blog for any amount of time, technology also can have a downside. We may see greater yields from our agricultural efforts, but technological advancements also pump needless hormones into the populace, create sketchy GMO foodstuffs, and manipulate farmers into planting them. We all know the problems associated with automobile emissions, atomic energy, chemotherapy and texting while driving. These problems are the obvious stuff. What is perhaps more sinister are the technologies we adopt that work quietly in the background to change us. Most of them we are unaware of until, one day, we are almost surprised to see how we have changed, and maybe we don’t like it. Google strikes me as a potential contributor in this latter arena. A recent article from The Guardian, entitled “Where is Google Taking Us?”, looks at some of their most altruistic technologies (the ones they allowed the author to see). The author, Tim Adams, brought forward some interesting quotes from key players at Google. When discussing how Google would spend some $62 million in cash that it had amassed, Larry Page, one of the company’s co-founders, asked,

“How do we use all these resources… and have a much more positive impact on the world?”

There’s nothing wrong with that question. It’s the kind of question that you would want a billionaire asking. My question is, “What does positive mean, and who decides what is and what isn’t?” In this case, it’s Google. The next quote comes from Sundar Pichai. With so many possibilities that this kind of wealth affords, Adams asked how they stay focused on what to do next.

“’Focus on the user and all else follows…We call it the toothbrush test,’ Pichai says, ‘we want to concentrate our efforts on things that billions of people use on a daily basis.’”

The statement sounds like savvy marketing. He is also talking about the most innate aspects of our everyday behavior. And so that I don’t turn this into an academic paper, here is one more quote. This time the author is talking to Dmitri Dolgov, principal engineer for Google Self-Driving Cars. For the whole idea to work, that is, the car reacting like a human would, only better, it has to think.

“Our maps have information stored and as the car drives around it builds up another local map with its sensors and aligns one to the other – that gives us a location accuracy of a centimetre or two. Beyond that, we are making huge numbers of probabilistic calculations every second.”

Mapping everything down to the centimeter.
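Dolgov’s recipe (align the live sensor map to the stored map, then run probabilistic calculations) is at heart a scan-matching problem. Below is a deliberately tiny, one-dimensional sketch of the idea, nothing like Google’s actual system; the landmark positions, noise model, and search range are all invented for illustration:

```python
# Stored map: landmark positions along a road, in metres (invented values).
stored_map = [0.0, 4.2, 9.7, 15.1, 22.4]

def log_likelihood(sensed, offset, sigma=0.05):
    """Score one candidate offset: how well the shifted sensor readings
    line up with the stored landmarks, under a Gaussian noise model."""
    total = 0.0
    for s in sensed:
        d = min(abs(s + offset - m) for m in stored_map)  # nearest landmark
        total += -(d * d) / (2 * sigma * sigma)
    return total

def localize(sensed, search=(-1.0, 1.0), step=0.01):
    """Brute-force search over candidate offsets; return the most likely one."""
    best, best_score = search[0], float("-inf")
    offset = search[0]
    while offset <= search[1]:
        score = log_likelihood(sensed, offset)
        if score > best_score:
            best, best_score = offset, score
        offset += step
    return best

# The car 'sees' the landmarks shifted by its unknown position error (0.3 m).
sensed = [m - 0.3 for m in stored_map]
print(round(localize(sensed), 2))  # prints 0.3
```

Real systems do this in two dimensions (plus heading) with millions of points and far smarter search, but the centimetre accuracy Dolgov mentions comes from exactly this kind of fit between the live scan and a very precise prior map.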

It’s the last line that we might want to ponder. Predictive algorithms are what artificial intelligence is all about, the kind of technology that plugs in to a whole host of different applications, from predicting your behavior to predicting your abilities. If we don’t want to have to remember to check the oil, there is a light that reminds us. If we don’t want to have to remember somebody’s name, there is a facial recognition algorithm to remember for us. If my wearable detects that I am stressed, it can remind me to take a deep breath. If I am out for a walk, maybe something should mention all the things I could buy while I’m out (as well as what I am out of).

Here’s what I think about. It seems to me that we are amassing two lists: the things we don’t want to think about, and the things we do. Folks like Google are adding things to Column A, and it seems to be getting longer all the time. My concern is whether we will have anything left in Column B.

 


On better humans and bad bots.

News of breaking future technologies, the stuff at the crux of my research, accumulates as a daily occurrence, and this week is no different. Of note, Zoltan Istvan is (another) 2016 US presidential candidate, but this time for the Transhumanist Party. Transhumanism “(abbreviated as H+ or h+) is an international cultural and intellectual movement with an eventual goal of fundamentally transforming the human condition by developing and making widely available technologies to greatly enhance human intellectual, physical, and psychological capacities”. 1 For those of you who didn’t know. Living forever is job one for the “movement.” Mr. Istvan is not likely to be in the debates, but you can follow him and the rest of H+ at humanity+.org. I’ll reserve comment on this.

On another front, for those who think that once we get this human thing down right, technology will save us and mischief will cease, there is this item from WIRED magazine UK. A couple of researchers at Google (that’s OK, you can trust them) have “created an artificial intelligence that developed its responses based on transcripts from an IT helpdesk chat service and a database of movie scripts.” This AI is called a chatbot. Chatbots are computer programs designed to talk to you. You can try one out here.

According to WIRED’s James Temperton,

“The [Google] system used a neural network — a collection of machines arranged to mimic a human brain — to create its unnervingly human responses. Such networks use a huge amount of computing power and Google’s latest research shows their potential for improving how machines hold conversations.”
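A neural chatbot is well beyond a blog snippet, but the retrieval spirit of a corpus-trained system (answer with whatever the transcripts suggest) can be caricatured with no neural network at all. In this toy, the “transcript” is a tiny invented dictionary, seeded with the morality exchange the article quotes, and the bot simply picks the stored prompt sharing the most words with the input:

```python
# A caricature of a transcript-trained chatbot: no neural network,
# just retrieval from a tiny, mostly invented 'transcript' by word overlap.
corpus = {
    "i really like our discussion on morality and ethics":
        "And how I'm not in the mood for a philosophical debate.",
    "what do you like to talk about":
        "Nothing.",
    "my password is locked how do i get in":
        "Have you tried rebooting the machine?",
}

def reply(utterance):
    """Answer with the response whose stored prompt shares the most
    words with the input. Junk in, junk out."""
    words = set(utterance.lower().strip("?!. ").split())
    best = max(corpus, key=lambda p: len(words & set(p.split())))
    return corpus[best]

print(reply("What do you like to talk about?"))  # Nothing.
```

Swap the helpdesk lines for movie dialogue and the bot’s mood changes accordingly, which is essentially what the Google researchers observed at vastly greater scale.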

Apparently, it was the addition of the movie scripts that made the bot more contentious. Consider this exchange on morality:

“Human: I really like our discussion on morality and ethics …

Machine: And how I’m not in the mood for a philosophical debate.

Human: What do you like to talk about?

Machine: Nothing.”

Fun with programming. All of this points to the old adage, “Junk in is junk out.” In The Lightstream Chronicles, the future version of this mischief is called twisting. Basically, you take a perfectly good, well-behaved synthetic human and put in some junk. The change in programming is generally used to make these otherwise helpful synths do criminal things.

The logo says it all.

This tendency we have as human beings to twist good ideas into bad ones is nothing new, and today’s headlines are evidence of it. We print guns with 3D printers, we use drones to vandalize, cameras to spy, and computers to hack. Perhaps that is what Humanity+ has in mind: Make humanity more technologically advanced. More like a… machine, then reprogram the humanness (that just leads to no good) out. What could possibly go wrong with that?

 

1 https://en.wikipedia.org/wiki/Transhumanism

Breathing? There’s an app for that.

As the Internet of Things (IoT) and Ubiquitous Computing (UbiComp) continue to advance, there really is no more room left for surprise. These things are cascading out of Silicon Valley, crowd-funding sites, labs, and start-ups with continually accelerating speed. And like Kurzweil, I think it’s happening faster than 95 percent of the world is expecting. A fair number of these are duds and frankly superfluous attempts at “computing” what otherwise, with a little mental effort, we could do on our own. Ian Bogost’s article this week in the Atlantic Monthly, “The Internet of Things You Don’t Really Need,” points out how many of these “innovations” are replacing just the slightest amount of extra brain power, ever-so-minimal physical activity, or prescient concentration. Not to mention that these apps just supply another entry into your personal digital footprint. More in the week’s news (this stuff is happening everywhere), this time in FastCompany: an MIT alum is concerned about how little “face time” her kids are getting with real humans because they are constantly in front of screens or tablets. (Human-to-human interaction is important for the development of emotional intelligence.) The solution? If you think it is less time on the tablet and more “go out and play,” you are behind the times. The researcher, Rana el Kaliouby, has decided that she has the answer:

“Instead, she believes we should be working to make computers more emotionally intelligent. In 2009, she cofounded a company called Affectiva, just outside Boston, where scientists create tools that allow computers to read faces, precisely connecting each brow furrow or smile line to a specific emotion.”

Of course it is. Now, what we don’t know, don’t want to learn (by doing), or just don’t want to think about, our computer or app will do for us. FastCo author Elizabeth Segran interviewed el Kaliouby:

“The technology is able to deduce emotions that we might not even be able to articulate, because we are not fully aware of them,” El Kaliouby tells me. “When a viewer sees a funny video, for instance, the Affdex might register a split second of confusion or disgust before the viewer smiles or laughs, indicating that there was actually something disturbing to them in the video.”

Oh my.

“At some point in the future, El Kaliouby suggests fridges might be equipped to sense when we are depressed in order to prevent us from binging on chocolate ice cream. Or perhaps computers could recognize when we are having a bad day, and offer a word of empathy—or a heartwarming panda video.”

Please no.

By the way, this is exactly the type of technology that is at the heart of the mesh, the ubiquitous surveillance system in The Lightstream Chronicles. In addition to having learned every possible variation of human emotion, this software has also learned physical behavior such that it can tell when, or if someone is about to shoplift, attack, or threaten another person. It can even tell if you have any business being where you are or not.

So, before we get swept up in all of the heartwarming possibilities for relating to our computers (shades of Her), and just in case anyone is left who is alarmed at becoming a complete emotional, intellectual, and physical muffin, there is significant new research suggesting that the mind is a muscle: you use it or lose it, and you can strengthen learning and intelligence by exercising and challenging your mind and cognitive skills. If my app is going to remind me not to be rude, when to brush my teeth, drink water, stop eating, and go to the toilet, what’s left? The definition of post-human comes to mind.

As a designer, I see warning flags. It is precisely a designer’s capacity for abstract reasoning that makes problem solving both gratifying and effective. Remember MacGyver? You don’t have to; your life-hacks app will tell you what you need to do. You might also want to revisit a previous blog on computers that are taking our jobs away.

MacGyver. If you don’t know, you’re going to have to look it up.

Yet it would seem that many people think the only really important human trait is happiness, that ill-defined, elusive, and completely arbitrary emotion. As long as we retain that, the thinking goes, all those other human traits are things we should evolve out of anyway.

What do you think?


Falling asleep in the future. 

Prologues to Season 4 : : The Lightstream Chronicles : : Dreamstate

Season 4 Prologue ix-x: Backstory

Every now and then it makes sense to keep readers updated on the scientific and technological developments that were both behavioral and cultural influences in the 22nd century. This addendum could be added to the 2159 backstory link on The Lightstream Chronicles site.

It wasn’t until 2047 that technological manipulation of the body’s endocrine system became commonplace. Prior to that, pharmaceuticals were the primary mode of stimulating hormone production in the body, but that solution never seemed to alleviate the side effects that so often accompanied pharma-based protocols. Nevertheless, it was the well-funded pharmaceutical industry, perhaps seeing the writing on the wall, that helped to pioneer the chips that ultimately became the regulators enabling precision balance of the body’s chemistry.

Implanting chips into the body was in full swing by the late 2020s, and this often meant that the body required numerous implants to balance and regulate different processes. The chemchip, as it was called in 2047, was the first to handle multiple functions. Chip #5061189, the original device, was about the size of a postage stamp and was inserted below the skin in the lumbar region of the back. From there, it was able to trigger or inhibit the adrenal glands, hypothalamus, ovaries, pancreas, parathyroid, pineal gland, pituitary gland, testes, thymus, and thyroid. Programs were written and updated seamlessly to coincide with various life stages and individual preferences. These early implants had a significant effect on overall health and wellness.

Gradually, however, these chips required maintenance and did not work in synergy with other chipsets that were becoming prevalent throughout the body. A series of technological developments over the next 12 to 15 years consolidated individual chip functions into what became known as the chipset. You can read more about how the chipset works.

 

Just relax, we’ll take it from here.

Of course, technology marches on, so by the 22nd century the augmented human is an extremely sophisticated combination of technology built on “natural” human elements. Hence, we have the sleep program. This can be anything the user wants it to be, from floating weightless in an imagined liquid greenspace to a field of tall grass. Then, regulation of body chemistry can trick the body into thinking it has had eight hours of sleep in only three. Think of how much more work you could get done.


The Finale to Season 3.

The idea of tapping into someone’s memories has been discussed more than once over the years (like this). Remember that the medical erasure process has already been recommended for Sean since he was very badly beaten and raped. Medical erasure could wipe this from his memory, and after the scars have healed there would be no trace of the trauma, mentally or physically. Of course, this procedure has not taken place yet, so Keiji is able to, through the superconductivity of the regen pod, tap into some of Sean’s latent, near-term memories. As we see, however, they are fairly sketchy.

If we stop to think about our own memory, it rarely plays back as a continuous movie. It’s more like quick edits of what we saw or said, and it almost never includes audio, yet audio can often play a major role in triggering us to remember places, people and things. It is interesting to contemplate that accessing our memories from the outside might just include audio and more.

We shouldn’t be surprised that Keiji isn’t getting a clearer picture, though he could if he had direct physical contact with Sean. By so doing, he could, theoretically, scan right through the event in its entirety. The complication here is that there was evidence that Sean was headjacked (another topic I have blogged about numerous times), and depending on the quality of the device and the trauma involved, those memories may or may not be intact. In fact, Sean’s memory could already be a disconnected pile of snippets, not unlike the event we have just witnessed. He may not even know his name.

Season 4 begins next week April 10th and it kicks off with 4 double-page spreads in the Prologues section. Interesting stuff about the world of 2159, maybe some clues, maybe some foreshadowing.


Reader thoughts.

This week I’m posing some questions. I know the traffic here is only a fraction of The Lightstream Chronicles’, but we’ll give it a shot. Answer one, any, or all, or add your own.

1. How’s the story progressing for you?

2. Would you prefer publishing every two weeks as a double-page spread, or staying with the single-page format?

3. Have a favorite character?

4. Got a guess on whodunnit or is it too early to tell?

5. I’d describe the blog content now as part design fiction, part futurist blather, part behind the scenes (making of), and part backstory. Do you have a preference?

Hope we get some response on this. If not, I will continue to probe…



On Worldbuilding and the graphic novel

Some cursory research into the term worldbuilding will provide the description for an exercise in constructing a different world than the one we live in. It could take on the aspects of fantasy, such as the world of The Lord of the Rings trilogy or the role-playing game Dungeons and Dragons, or it could be a fictional universe akin to the worlds of the Star Wars series of movies and books. In fact, any imaginary world, past or present, could qualify for the worldbuilding description. Whatever genre it assumes, good worldbuilding requires a significant amount of thought. Culture, politics, technology, social issues, health, and even human interaction all have to be considered and crafted. Since the author is creating a fictional universe and establishing all the rules, I really can’t imagine a science fiction writer doing anything less to assemble a coherent story.

I wrote The Lightstream Chronicles in the spring of 2011, originally as a screenplay, and then converted it into a graphic novel script shortly thereafter. As part of the exercise, I created a timeline that brought the world from 2011 to 2159, taking into account (broadly at first, then gradually adding detail) the geopolitical environment, technology, tools, society, and culture, with even some wild cards thrown in. Much of this appears in the first few episodes (pages) of the story (Season 1), but considerably more detail is available by accessing the backstory link on the LSC site. Nevertheless, since the production of all the episodes is still in the works, the process of worldbuilding continues as I sort out increasing levels of minutiae as they apply to all of the above.

A key motivating factor in my creative process is also the center of my research, namely how design and technology affect us as human beings. Design affects culture, and culture affects design. Because culture is a hefty composite of our beliefs, behaviors, hopes, dreams, and humanity, it is my assertion that design and its conjoined twin, technology, are in many ways becoming the primary sculptors of our culture.

I’ve come to view some version of the worldbuilding exercise as almost a prerequisite to design. If design and technology do have such a profound impact on culture and all of its entanglements, can design really afford to move into the future without considering these larger implications?

Perhaps this is something for my next academic paper.


Recognition technology. We know who you are and maybe what you are thinking about.

New technologies are everywhere. They are being developed in labs every day—if not every ten minutes. If you are searching for them, like me, then you are likely to run across hundreds of techy developments that are on the cusp of being something mainstream within the next 10 years. Then, there are those technologies that we never hear about but that are fairly well developed, except that, as a society we’re not ready for them. So they sit in a lab until other developments come to pass or the marketing department decides that there is a high enough percentage of the population that will use or even accept them.

There is a great scene in the 2002 movie Minority Report where John Anderton (Tom Cruise) walks into a Gap store. Immediately upon entering, his irises are scanned and the resident hologram begins to make suggestions based upon his purchasing preferences. In the movie, Cruise has just had his eyes swapped out with someone else’s to disguise his identity, so the virtual salesperson thinks he is Mr. Yakamoto.

That movie is 13 years old. Today, iris scan recognition is already widely in use, and in case you missed it, retinal scanning is now obsolete. The United Arab Emirates uses it at border crossings, India has begun enrolling its 1.2 billion citizens by capturing individual iris data, and it appears in at least a half dozen security applications around the world. Its only current drawback is that you have to be standing still and fairly close to the scanner for an accurate read. 1

Fear not, however, because for people moving about and not standing still there is facial recognition, which is much less picky about the quality of the scan, or in this case, the image. Facial recognition algorithms have improved dramatically over the years, now logging 16,384 reference points that are checked against a database and can, fairly quickly, identify a person with 80-90% accuracy. Higher accuracy rates just take a bit longer. 2 Right now it’s in use by law enforcement in airports and high-security areas, but also at retail locations to catch shoplifters. Now it gets interesting because, while we fine-tune the iris scan, the same facial recognition system that is used to identify ne’er-do-wells can also be used, a la Minority Report, to identify shoppers who are regular customers, or to help them find the lingerie department. A quick cross-reference with your online shopping habits, Facebook page, and Google history can also tell retailers how much you are likely to spend, your favorite color, and the name of your best friend, so they can remind you that their birthday is right around the corner.
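The matching step described in that paragraph (measure reference points, then check them against a database) reduces to a nearest-neighbour search over feature vectors, with a threshold for unknown faces. A minimal sketch follows; the four-number vectors, names, and threshold are invented stand-ins for real 16,384-point templates:

```python
import math

# A tiny 'database' of enrolled faces: name -> feature vector.
# Real systems measure thousands of reference points; these short
# vectors are invented for illustration.
database = {
    "alice": [0.12, 0.87, 0.33, 0.54],
    "bob":   [0.91, 0.15, 0.68, 0.22],
    "carol": [0.45, 0.44, 0.12, 0.79],
}

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def identify(probe, threshold=0.25):
    """Return the closest enrolled identity, or None if nothing
    in the database is near enough (an unknown face)."""
    name, dist = min(((n, distance(probe, v)) for n, v in database.items()),
                     key=lambda t: t[1])
    return name if dist <= threshold else None

# A camera capture that measures close to Bob's enrolled vector:
print(identify([0.90, 0.17, 0.70, 0.21]))  # bob
# A face nobody has enrolled:
print(identify([0.0, 0.0, 0.0, 0.0]))      # None
```

The threshold is the interesting dial: set it loose and the system “recognizes” strangers; set it tight and it misses the regular customers it was installed to greet.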

Putting this in context with what we’ve seen in the last few weeks of The Lightstream Chronicles, the idea that Keiji-T, with access to someone’s memories, can ascertain their guilt or innocence is a logical next step. Too far, you think? Brain implants that can implant memories 3 and augment decisions are already in testing. Commonplace in the year 2159, perhaps.

 

1 http://en.wikipedia.org/wiki/Iris_recognition#Deployed_applications
2 http://www.fastcompany.com/3040375/is-facial-recognition-the-next-privacy-battleground
3 http://israelbrain.org/will-human-memory-chips-change-the-world-by-dr-ofir-levi/