Tag Archives: technology

Logical succession, please.

I wouldn’t be surprised to discover that, of all the people I talk (or rant) about in this blog, Ray Kurzweil tops the list. That is not all that surprising to me, since he is possibly the most visible, vociferous, and visionary proponent of the future. Let me say in advance that I have great respect for Ray. A Big Think article three years ago claimed that
“… of the 147 predictions that Kurzweil has made since the 1990’s, fully 115 of them have turned out to be correct, and another 12 have turned out to be “essentially correct” (off by a year or two), giving his predictions a stunning 86% accuracy rate.”

Last year Kurzweil predicted that
“ In the 2030s… we are going to send nano-robots into the brain (via capillaries) that will provide full immersion virtual reality from within the nervous system and will connect our neocortex to the cloud. Just like how we can wirelessly expand the power of our smartphones 10,000-fold in the cloud today, we’ll be able to expand our neocortex in the cloud.”1

This prediction caught my attention as not only quite unusual but, considering that it is only 15 years away, incredibly ambitious. Since 2030 is right around the corner, I wanted to see if anyone has been able to connect to the neocortex yet. Before I could do that, however, I needed to find out what exactly the neocortex is. According to Science Daily, it is the top layer of the brain (which is made up of six layers). “It is involved in higher functions such as sensory perception, generation of motor commands, spatial reasoning, conscious thought, and in humans, language.”2 According to Kurzweil, “There is beauty, love and creativity and intelligence in the world, and it all comes from the neocortex.”3

OK, so on to how we connect. Kurzweil predicts nanobots will do this, though he doesn’t say how. Nanobots, however, are a reality. Scientists have designed nanorobotic origami that can fold itself into shapes at the molecular level, and molecular vehicles that are drivable. Without additional detail, I can only surmise that once our nano-vehicles have assembled themselves, they will drive to the highest point, set up an antenna and, voilà, we will be linked.

 

Neurons of the neocortex stained with Golgi’s method – Photograph: Benjamin Bollmann

I don’t let my students get away with predictions like that, so why should Kurzweil? Predictions should engage more than just existing technologies (such as nanotech and brain mapping); they need to demonstrate plausible breadcrumbs that make such a prediction legitimate. Ray gives a great TED talk, but it still didn’t answer those questions. I’m a big believer that technological convergence can foster all kinds of unpredictable possibilities, but the fact that scientists are working on a dozen different technological breakthroughs in nanoscience, bioengineering, genetics, and even mapping the connections of the neocortex4 doesn’t explain how we will tap into it or transmit it.

If anyone has a theory on this, please join the discussion.

1. http://bigthink.com/endless-innovation/why-ray-kurzweils-predictions-are-right-86-of-the-time
2. http://www.sciencedaily.com/terms/neocortex.htm
3. http://www.dailymail.co.uk/sciencetech/article-3257517/Human-2-0-Nanobot-implants-soon-connect-brains-internet-make-super-intelligent-scientist-claims.html#ixzz3xtrHUFKP
4. http://www.neuroscienceblueprint.nih.gov/connectome/

Photo from: http://connectomethebook.com/?portfolio=neurons-of-the-neocortex


Enter the flaw.

 

I promised a drone update this week, but by now it is probably already old news. It’s a safe bet there are a few thousand more drones than there were last week. Hence, I’m going to shift to a topic that I think is moving even faster than our clogged airspace.

And now for an AI update. I’ve blogged previously about Kurzweil’s Law of Accelerating Returns, and the evidence is mounting every day that he’s probably right. The rate at which artificial intelligence is advancing is beginning to match his curve nicely. A recent article on the Txchnologist website demonstrates how an AI system called Kulitta is composing jazz, classical, new age, and eclectic mixes that are difficult to distinguish from human compositions. You can listen to an example here. Not bad, actually. Sophisticated AI creations like this underscore the realization that we can no longer think of robots as clunky mechanized brutes. AI can create. Even though it studies an archive of man-made creations, the resulting work is unique.

First it learns from a corpus of existing compositions. Then it generates an abstract musical structure. Next it populates this structure with chords. Finally, it massages the structure and notes into a specific musical framework. In just a few seconds, out pops a musical piece that nobody has ever heard before.
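The article doesn’t reveal how Kulitta actually works under the hood (and my sketch below is not its method), but the learn-then-generate pipeline it describes is easy to illustrate in miniature. As a rough, hypothetical Python toy: learn which chord tends to follow which from a small corpus, then walk that table to produce a progression nobody has written before.

```python
import random

def learn_transitions(corpus):
    """Build a table of which chord follows which in the corpus."""
    table = {}
    for progression in corpus:
        for current, following in zip(progression, progression[1:]):
            table.setdefault(current, []).append(following)
    return table

def generate(table, start, length, seed=0):
    """Generate a new progression by randomly walking the learned table."""
    rng = random.Random(seed)  # seeded so the output is repeatable
    out = [start]
    for _ in range(length - 1):
        choices = table.get(out[-1])
        if not choices:  # dead end: no known successor
            break
        out.append(rng.choice(choices))
    return out

# A tiny "corpus" of chord progressions standing in for real scores.
corpus = [["C", "Am", "F", "G", "C"],
          ["C", "F", "G", "C"],
          ["Am", "F", "C", "G", "Am"]]
table = learn_transitions(corpus)
print(generate(table, "C", 8))
```

A real system like Kulitta adds the layers the paragraph above mentions, such as abstract musical structure and stylistic frameworks, on top of this kind of statistical learning; the toy captures only the first idea, that new output is assembled from patterns in existing work.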

The creator of Kulitta, Donya Quick, says that this will not put composers out of a job; it will help them do their job better. She doesn’t say exactly how.

If even trained ears can’t always tell the difference, what does that mean for the masses? When we can load the “universal composer” app onto our phone and have a symphony written for ourselves, how will this serve the interests of musicians and authors?

The article continues:

Kulitta joins a growing list of programs that can produce artistic works. Such projects have reached a critical mass–last month Dartmouth College computational scientists announced they would hold a series of contests. They have put a call out seeking artificial intelligence algorithms that produce “human-quality” short stories, sonnets and dance music. These will be pitted against compositions made by humans to see if people can tell the difference.

The larger question to me is this: “When it all sounds wonderful or reads like poetry, will it make any difference to us who created it?”

Sadly, I think not. The sweat and blood that composers and artists pour into their work could be a thing of the past. If we see this in the fine arts, then it seems an inevitable consequence for design as well. Once an AI learns the behaviors and personalities of the characters in The Lightstream Chronicles, it could create new episodes without me. Given characters and settings that already exist as CG constructs, it’s not a stretch that it could generate the wireframes, render the images, and lay out the panels.

Would this app help me in my work? It could probably do it in a fraction of the time that it would take me, but could I honestly say it’s mine?

When art and music are all so easily reconstructed and perfect, I wonder if we will miss the flaw. Will we miss that human scratch on the surface of perfection, the thing that reminds us that we are human?

There is probably an algorithm for that, too. Just go to settings > humanness and use the slider.


The killer feature for every app.

I have often asked the question: if we could visit the future “in person,” how would it affect us upon our return? How vigorously would we engage our redefined present? Part of the idea behind design fiction, for me, is making the future seem real enough that we want to discuss it and ask ourselves whether this is the future we want. If not, what can we do about it? How might it be changed, refined, or avoided altogether? In a more pessimistic light, I also wonder whether anything could be real enough to rouse us from our media-induced stupor. And the potion is getting stronger.

After Monday and Tuesday this week, I was beginning to think it would be a slow news week in the future-tech sector. Not so. (At least I didn’t stumble onto these stories until Wednesday.)

1. Be afraid.

A scary new novel is out called Ghost Fleet. It sounds immensely entertaining, but also ominously possible. It harkens back to some of my previous blogs on autonomous weapons and the harbinger of ubiquitous hacking. How am I going to get time to read this? That’s another issue.

2. Play it again.

Google applied for this years ago, but their patent on storing “memories” was approved this week. It appears as though it would have been a feature for the ill-fated Google Glass, but it could easily be embedded in any visual recording function, from networked cameras to a user’s contact lens. Essentially it lets you “play back” whatever you saw, assuming you are wearing or have integrated the appropriate recording device or software. “Siri, replay my vacation!” I must admit it sounds cool.

Ghost Fleet, Google memories, uber hacking, and Thync.

3. Hack-a-mania.

How’s this for a teaser? RESEARCHERS HACKED THE BRAKES OF A CORVETTE WITH TEXT MESSAGES. That’s what Fast Company threw out there on Wednesday, but it originated with WIRED magazine. It’s the latest since the Jeep-Jacking incident just weeks ago. See how fast technology moves? In that episode the hackers, or jackers, whatever, used their laptops to control just about every technology the Jeep had available. However, according to WIRED,

“…a new piece of research suggests there may be an even easier way for hackers to wirelessly access those critical driving functions: Through an entire industry of potentially insecure, internet-enabled gadgets plugged directly into cars’ most sensitive guts.”

In this instance,

“A 2-inch-square gadget that’s designed to be plugged into cars’ and trucks’ dashboards and used by insurance firms and trucking fleets to monitor vehicles’ location, speed and efficiency.”

The article clearly demonstrates that these devices are vulnerable to attack, even in government vehicles and, I presume, the White House limo as well. You guys better get to work on that.

4. Think about this.

A new $300 device called Thync is now available to stick on your forehead to either relax or energize you through neurosignaling, a.k.a. electricity, that zaps your brain “safely.” It’s not unrelated to the less sexy shock therapy of ages past. Reports tell me that this is anything but all figured out, but just like the items above, it’s just a matter of time until it escalates to the next level.

So what ties all these together? If we look at the historical track of technology, the overarching theme is convergence. All the things that once were separate have now converged. Movies, texts, phone calls, games, GPS, bar-code scanning, cameras, and about a thousand other technologies have converged into your phone, laptop, or tablet. It’s a safe bet that this trend will continue, with devices getting smaller and eventually implanted. Isn’t technology wonderful?

The only problem is that we have yet to figure out the security issues. Do we, for one moment, think that hacking will go away? We rush new apps and devices to market with a “we’ll fix that later” mentality. It’s just a matter of time until your energy, mood, “memories,” or our national security is up for grabs. It seems like security ought to be on the feature list of every new gadget, especially the ones that access our bodies, our safety, or our information. That’s pretty much everything, by the way. The idea is especially important because, let’s face it, everything we think is secure isn’t.


The robo-apocalypse. Part 1.

Talk of robot takeovers is all the rage right now.

I’m good with this because the evidence is out there that robots will continue to get smarter and smarter, but the human condition being what it is, we will continue to do stupid s**t. Here are some examples from the news this week.

1. The BBC reported this week that South Korea has deployed something called The Super aEgis II, a 50-caliber robotic machine gun that knows who is an enemy and who isn’t. At least that’s the plan. The company that built and sells the Super aEgis is DoDAAM. Maybe that is short for do damage. The BBC astutely notes,

“Science fiction writer Isaac Asimov’s First Law of Robotics, that ‘a robot may not injure a human being or, through inaction, allow a human being to come to harm’, looks like it will soon be broken.”

Asimov was more than a great science-fiction writer; he was a Class A futurist. He clearly saw the potential for us to create robots that were smarter and more powerful than we are, and he figured there should be some rules. Asimov used the kind of foresight that responsible scientists, technologists, and designers should be using for everything we create. As the article continues, Simon Parkin of the BBC quotes Yangchan Song, DoDAAM’s managing director of strategy planning.

“Automated weapons will be the future. We were right. The evolution has been quick. We’ve already moved from remote control combat devices, to what we are approaching now: smart devices that are able to make their own decisions.”

Or in the words of songwriter Donald Fagen,

“A just machine to make big decisions
Programmed by fellows with compassion and vision…”1

Relax. The world is full of these fellows. Right now the weapon/robot is linked to a human who gives the OK to fire, and all customers who purchased the 30 units thus far have opted for the human/robot interface. But the company admits,

“If someone came to us wanting a turret that did not have the current safeguards we would, of course, advise them otherwise, and highlight the potential issues,” says Park. “But they will ultimately decide what they want. And we develop to customer specification.”

A 50 caliber round. Heavy damage.

They are currently working on the technology that will help their machine make the right decision on its own, but the article cites several academics and researchers who see red flags waving. Most concur that teaching a robot right from wrong is no easy task. The complexity is compounded because the fellows who are doing the programming don’t always agree on these issues.

Last week I wrote about Google’s self-driving car. Of course, this robot has to make tough decisions too. It may one day have to decide whether to hit the suddenly appearing baby carriage, the kid on the bike, or just crash the vehicle. In fact, Parkin’s article brings Google into the picture as well, quoting Colin Allen,

“Google admits that one of the hardest problems for their programming is how an automated car should behave at a four-way stop sign…”

Humans don’t do such a good job at that either. And there is my problem with all of this. If the humans who are programming these machines are still wrestling with what is ethically right or wrong, can a robot be expected to do better? Some think so. Over at DoDAAM,

“Ultimately, we would probably like a machine with a very sound basis to be able to learn for itself, and maybe even exceed our abilities to reason morally.”

Based on what?

Next week: Drones with pistols.

 

1. Donald Fagen, “I.G.Y.,” from the album The Nightfly, 1982.

Promises. Promises.

Throughout the course of the week, usually on a daily basis, I collect articles, news blurbs, and what I call “signs from the future.” Mostly they fall into categories such as design fiction, technology, society, future, theology, and philosophy. I use this content sometimes for this blog, possibly for a lecture, but most often for additional research as part of the scholarly papers and presentations that are a matter of course for a professor. I have to weigh what goes into the blog because most of these topics could easily become full-blown papers. Of course, the thing with scholarly writing is that most publications demand exclusivity on publishing your ideas. Essentially, that means it becomes difficult to repurpose anything I write here for something with more gravitas. One of the subjects of growing interest to me is Google. Not the search engine, per se, but rather the technological mega-corp. It has the potential to be just such a paper, so even though there is a lot to say, I’m going to land on only a few key points.

A ubiquitous giant in the world of the Internet, Google has some of the most powerful algorithms, stores your most personal information, and is working on many of the most advanced technologies in the world. The company tries very hard to be soft-spoken and low-key, but that demeanor belies its enormous power.

Most of us would agree that technology has provided some marvelous benefits to society, especially in the realms of medicine, safety, education, and other socially beneficial applications. Things like artificial knees, cochlear implants, air bags (when they don’t accidentally kill you), and instantaneous access to the world’s libraries have made life-changing improvements. Needless to say, especially if you have read my blog for any amount of time, technology also has a downside. We may see greater yields from our agricultural efforts, but technological advancements also pump needless hormones into the populace, create sketchy GMO foodstuffs, and manipulate farmers into planting them. We all know the problems associated with automobile emissions, atomic energy, chemotherapy, and texting while driving. These problems are the obvious stuff. What is perhaps more sinister are the technologies we adopt that work quietly in the background to change us. Most of them we are unaware of until, one day, we are almost surprised to see how we have changed, and maybe we don’t like it. Google strikes me as a potential contributor in this latter arena.

A recent article from The Guardian, entitled “Where is Google Taking Us?”, looks at some of their most altruistic technologies (the ones they allowed the author to see). The author, Tim Adams, brought forward some interesting quotes from key players at Google. When discussing how Google would spend some $62 million in cash that it had amassed, Larry Page, one of the company’s co-founders, asked,

“How do we use all these resources… and have a much more positive impact on the world?”

There’s nothing wrong with that question. It’s the kind of question that you would want a billionaire asking. My question is, “What does positive mean, and who decides what is and what isn’t?” In this case, it’s Google. The next quote comes from Sundar Pichai. With so many possibilities that this kind of wealth affords, Adams asked how they stay focused on what to do next.

“’Focus on the user and all else follows…We call it the toothbrush test,’ Pichai says, ‘we want to concentrate our efforts on things that billions of people use on a daily basis.’”

The statement sounds like savvy marketing, but he is also talking about the most innate aspects of our everyday behavior. So that I don’t turn this into an academic paper, here is one more quote. This time the author is talking to Dmitri Dolgov, principal engineer for Google’s self-driving cars. For the whole idea to work, that is, for the car to react as a human would, only better, it has to think.

“Our maps have information stored and as the car drives around it builds up another local map with its sensors and aligns one to the other – that gives us a location accuracy of a centimetre or two. Beyond that, we are making huge numbers of probabilistic calculations every second.”

Mapping everything down to the centimeter.
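Google hasn’t published the details of Dolgov’s alignment step, and its real system is a far more sophisticated probabilistic 3D pipeline, but the core idea he describes, sliding the live sensor map against the stored map until they line up best, can be sketched as a toy one-dimensional illustration (all names and numbers below are hypothetical):

```python
import numpy as np

def align_offset(stored, sensed, candidates=np.arange(-2.0, 2.0, 0.01)):
    """Estimate the car's position error by brute-force search:
    try shifting the sensed landmark positions by each candidate
    offset and keep the shift with the smallest squared mismatch."""
    best, best_err = 0.0, float("inf")
    for d in candidates:
        err = np.sum((sensed + d - stored) ** 2)  # alignment error at shift d
        if err < best_err:
            best, best_err = d, err
    return best

# Stored-map landmark positions (metres along the road) versus what the
# sensors report; here the car is actually 0.5 m off its assumed pose.
stored = np.array([3.0, 7.5, 12.2, 20.1])
sensed = stored - 0.5
print(align_offset(stored, sensed))  # recovers an offset close to 0.5
```

Searching a fine grid of candidate shifts is what buys the centimetre-scale accuracy in the quote: the finer the grid (and the better the sensors), the smaller the residual error after alignment. A production system would do this in 2D or 3D with probabilistic weighting rather than exhaustive search.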

It’s the last line that we might want to ponder. Predictive algorithms are what artificial intelligence is all about, the kind of technology that plugs in to a whole host of applications, from predicting your behavior to predicting your abilities. If we don’t want to have to remember to check the oil, there is a light that reminds us. If we don’t want to have to remember somebody’s name, there is a facial recognition algorithm to remember for us. If my wearable detects that I am stressed, it can remind me to take a deep breath. If I am out for a walk, maybe something should mention all the things I could buy while I’m out (as well as what I am out of).

Here’s what I think about. It seems to me that we are amassing two lists: the things we don’t want to think about, and the things we do. Folks like Google are adding things to Column A, and it seems to be getting longer all the time. My concern is whether we will have anything left in Column B.

 


On Worldbuilding and the graphic novel

Some cursory research into the term worldbuilding turns up descriptions of an exercise in constructing a world different from the one we live in. It could take on the aspects of fantasy, such as the world of The Lord of the Rings trilogy or the role-playing game Dungeons and Dragons, or it could be a fictional universe akin to the worlds of the Star Wars movies and books. In fact, any imaginary world, past or present, could qualify for the worldbuilding description. Whatever genre it assumes, good worldbuilding requires a significant amount of thought. Culture, politics, technology, social issues, health, and even human interaction all have to be considered and crafted. Since the author is creating a fictional universe and establishing all the rules, I really can’t imagine a science fiction writer doing anything less to assemble a coherent story.

I wrote The Lightstream Chronicles in the spring of 2011, originally as a screenplay, and converted it into a graphic novel script shortly thereafter. As part of the exercise, I created a timeline that brought the world from 2011 to 2159, taking into account (broadly at first, then gradually adding detail) the geopolitical environment, technology, tools, society, culture, and even some wild cards thrown in. Much of this appears in the first few episodes (pages) of the story (Season 1), but considerably more detail is available through the backstory link on the LSC site. Nevertheless, since production of all the episodes is still in the works, the process of worldbuilding continues as I sort out increasing levels of minutiae in all of the above areas.

A key motivating factor in my creative process is also the center of my research: how design and technology affect us as human beings. Design affects culture, and culture affects design. Because culture is a hefty composite of our beliefs, behaviors, hopes, dreams, and humanity, it is my assertion that design and its conjoined twin, technology, are in many ways becoming the primary sculptors of our culture.

I’ve come to view some version of the worldbuilding exercise as almost a prerequisite to design. If design and technology do have such a profound impact on culture and all of its entanglements, can design really afford to move into the future without considering these larger implications?

Perhaps this is something for my next academic paper.


Graphic Novel: Part Design Fiction Part Eye Candy

The clock continues to tick on the Kickstarter campaign for my science-fiction crime thriller The Lightstream Chronicles, and I’m still looking for that viral pill that will put the kick into the campaign. My biggest supporters jumped in early, but things have tapered off. Nevertheless, I am still hopeful. The number of downloads of chapter 1 has already exceeded 200, which is close to two-thirds of the people who were part of the original email campaign. The reviews have been 100 percent positive thus far. Though none of the media outlets have responded to my press releases, there is still time.

The entire book will be rendered in CG, and as you can see from these images, there is plenty to look at. These images are only one-third actual size, so browsing through the pages can be a real treat, with lots of detail and perhaps some clues to things to come. The story is a thought-provoking look into the techno-human future, and it is also an extended work of design fiction. You can read all about that in previous blogs. Enjoy these images, and if you want more, make sure to hop over to Kickstarter and become a backer.
