Tag Archives: predictive algorithms

Now I know that Kurzweil is right.

 

In a previous blog entitled “Why Kurzweil is probably right,” I made this statement,

“Convergence is the way technology leaps forward. Supporting technologies enable formerly impossible things to become suddenly possible.”

That blog was talking about how we are developing AI systems at a rapid pace. I quoted a WIRED magazine article by David Pierce that was previewing consumer AIs already in the marketplace and some of the advancements on the way. Pierce said that a personal agent is,

“…only fully useful when it’s everywhere, when it can get to know you in multiple contexts—learning your habits, your likes and dislikes, your routine and schedule. The way to get there is to have your AI colonize as many apps and devices as possible.”

Then, I made my usual cautionary comment about how such technologies will change us. And they will. So, if you follow this blog, you know that I throw cold water onto technological promises as a matter of course. I do this because I believe that someone has to.

Right now I’m preparing my collaborative design studio course. We’re going to be focusing on AR and VR, but since convergence is an undeniable influence on our techno-social future, we will have to keep AI, human augmentation, the Internet of Things, and a host of other emerging technologies on the desktop as well. In researching the background for this class, I read three articles from Peter Diamandis for the Singularity Hub website. I’ve written about Peter before. He’s brilliant. He’s also a cheerleader for the Singularity. That being said, these articles, one on the Internet of Everything (IoE/IoT), one on Artificial Intelligence (AI), and another on Augmented and Virtual Reality (AR/VR), are full of promises. Much of what we thought of as science fiction even a couple of years ago is now happening with such speed that Diamandis and his cohorts believe it will be commonplace in only three years.

If that isn’t enough for us to sit up and take notice, then I am reminded of an article from the Silicon Valley Business Journal, another interview with Ray Kurzweil. Kurzweil, of course, has pretty much convinced us all by now that the Law of Accelerating Returns is no longer hyperbole. If anyone thought that it was only hype, sheer observation should have brought them to their senses. In this article, Kurzweil gives an excellent illustration of how exponential growth actually plays out, no longer as a theory but as demonstrable practice.

“Exponentials are quite seductive because they start out sub-linear. We sequenced one ten-thousandth of the human genome in 1990 and two ten-thousandths in 1991. Halfway through the genome project, 7 ½ years into it, we had sequenced 1 percent. People said, “This is a failure. Seven years, 1 percent. It’s going to take 700 years, just like we said.” Seven years later it was done because 1 percent is only seven doublings from 100 percent — and it had been doubling every year. We don’t think in these exponential terms. And that exponential growth has continued since the end of the genome project. These technologies are now thousands of times more powerful than they were 13 years ago when the genome project was completed.”
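For readers who want to see the arithmetic, here is a minimal sketch in Python. The numbers are taken from the quote above; the loop is illustrative, not a model of the actual genome project.

```python
# Minimal sketch of Kurzweil's point: a quantity that doubles every year
# looks negligible for most of its history, then finishes abruptly.
# Starting fraction and years are taken from the quote above.

fraction = 0.0001  # one ten-thousandth of the genome sequenced in 1990
year = 1990

while fraction < 1.0:
    print(f"{year}: {fraction * 100:.4f}% sequenced")
    fraction *= 2   # doubling every year
    year += 1

print(f"{year}: done — note that 1% is only about seven doublings from 100%")
```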

When you combine that with the nearly exponential chaos of hundreds of other converging technologies, the changes to our world and behavior are indeed coming at us like a bullet train. Ask any Indy car driver: when things are happening that fast, you have to be paying attention. But when the input is like a firehose and the motivations are unknown, how on earth do we do that?

Personally, I see this as a calling for design thinkers worldwide. Those in the profession, schooled in the ways of design thinking, have been espousing our essential worth to the realm of wicked problems for some time now. Well, problems don’t get more wicked than this.

Maybe we can design an AI that could keep us from doing stupid things with technologies that we can make but whose impact we cannot yet comprehend.


Big-Data Algorithms. Don’t worry. Be happy.

 

It’s easier for us to let the data decide for us. At least that is the idea behind global digital design agency Huge. Aaron Shapiro is the CEO. He says, “The next big breakthrough in design and technology will be the creation of products, services, and experiences that eliminate the needless choices from our lives and make ones on our behalf, freeing us up for the ones we really care about: Anticipatory design.”

Buckminster Fuller wrote about Anticipatory Design Science, but this is not that. Trust me. Shapiro’s version is about allowing big data, by way of artificial intelligence and neural networks, to become so familiar with us and our preferences that it anticipates what we need to do next. In this vision, I don’t have to decide what to wear, or eat, or how to get to work, or when to buy groceries, or gasoline, what color trousers go with my shoes and also when it’s time to buy new shoes. No decisions will be necessary. Interestingly, Shapiro sees this as a good thing. The idea comes from a flurry of activity about something called decision fatigue. What is that? In a nutshell, it says that our decision-making capacity is a reservoir that gradually gets depleted the more decisions we make, possibly as a result of body chemistry. After a long string of decisions, according to the theory, we are more likely to make a bad decision or none at all. Things like willpower disintegrate along with our decision-making.

Among the many articles on this topic in the last few months was one from FastCompany, which wrote that,

“Anticipatory design is fundamentally different: decisions are made and executed on behalf of the user. The goal is not to help the user make a decision, but to create an ecosystem where a decision is never made—it happens automatically and without user input. The design goal becomes one where we eliminate as many steps as possible and find ways to use data, prior behaviors and business logic to have things happen automatically, or as close to automatic as we can get.”

Supposedly this frees “us up for the ones we really care about.” My questions are: who decides which decisions are important? And once we are freed from making decisions, will we even know that we have missed one that we really care about?

“Google Now is a digital assistant that not only responds to a user’s requests and questions, but predicts wants and needs based on search history. Pulling flight information from emails, meeting times from calendars and providing recommendations of where to eat and what to do based on past preferences and current location, the user simply has to open the app for their information to compile.”
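To make the mechanics concrete, here is a minimal, hypothetical sketch of an anticipatory rule in Python. It is not how Google Now or Huge’s systems actually work; the grocery-ordering scenario, the dates, and the function names are invented for illustration. The idea is simply: infer a routine from past behavior, then act without asking.

```python
from datetime import date, timedelta

# Hypothetical sketch of anticipatory design: infer a routine from past
# behavior and act on the user's behalf, without a decision from the user.
# Illustrative only; not any real product's implementation.

past_grocery_orders = [date(2016, 7, 3), date(2016, 7, 10), date(2016, 7, 17)]

def predict_next_order(history):
    """Guess the next order date from the average gap between past orders."""
    gaps = [(b - a).days for a, b in zip(history, history[1:])]
    avg_gap = sum(gaps) / len(gaps)
    return history[-1] + timedelta(days=round(avg_gap))

def act_for_user(today):
    """If the predicted date has arrived, place the order automatically."""
    if today >= predict_next_order(past_grocery_orders):
        print("Groceries ordered for you — no decision required.")
    else:
        print("Nothing to do yet.")

act_for_user(date(2016, 7, 24))  # prints the 'ordered' branch
```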

It’s easy to forget that AI as we currently know it goes under the name of Facebook or Google or Apple or Amazon. We tend to think of AI as some ghostly future figure or a bank of servers, or an autonomous robot. It reminds me a bit of my previous post about Nick Bostrom and the development of superintelligence. Perhaps it is a bit like an episode of Person of Interest. As we think about designing systems that think for us and decide what is best for us, it might be a good idea to think about what it might be like to no longer think—as long as we still can.

 

 


Yes, you too can be replaced.

Over the past weeks, I have begun to look at the design profession and design education in new ways. It is hard to argue with the idea that all design is future-based. Everything we design is destined for some point beyond now where the thing or space, the communication, or the service will exist. If it already existed, we wouldn’t need to design it. So design is all about the future. For most of the 20th century and the last 16 years, the lion’s share of our work as designers has focused primarily on very near-term, very narrow solutions: a better tool, a more efficient space, a more useful interface, or a more satisfying experience. In fact, the tighter the constraints, the narrower the problem statement and the greater the opportunity to apply design thinking to resolve it in an elegant and, hopefully, aesthetically or emotionally pleasing way. Such challenges are especially gratifying for the seasoned professional, who has developed an almost intuitive eye for framing these dilemmas so that novel and efficient solutions result. Hence, over the course of years or even decades, the designer amasses a sort of micro-scale, big-data assemblage of prior experiences that helps him or her reframe problems and construct—alone or with a team—satisfactory methodologies and practices to solve them.

Coincidentally, this process of gaining experience is exactly the idea behind machine learning and artificial intelligence. But, since computers can amass knowledge by analyzing millions of experiences and judgments, it is theoretically possible that an artificial intelligence could gain this “intuitive eye” to a degree far surpassing the capacity of any individual designer.

That is the idea behind a brash (and annoyingly self-conscious) article from the American Institute of Graphic Arts (AIGA) entitled Automation Threatens To Make Graphic Designers Obsolete. Titles like this are a hook, of course. Designers, deep down, assume that they can never be replaced. They believe this because, so far, artificial intelligence lacks understanding, empathy, and emotional verve at its core. We saw this earlier in 2016 when an AI chatbot went Nazi because a bunch of social media hooligans realized that Tay (the name of the Microsoft chatbot) was in learn mode. If you told “her” Nazis were cool, she believed you. It was proof, again, that junk in is junk out.

The AIGA author Rob Peart pointed to Autodesk’s Dreamcatcher software, which is capable of rapidly generating surprisingly creative, albeit roughly detailed, prototypes. Peart features a quote from an Executive Creative Director for techno-ad-agency Sapient Nitro: “A designer’s role will evolve to that of directing, selecting, and fine tuning, rather than making. The craft will be in having vision and skill in selecting initial machine-made concepts and pushing them further, rather than making from scratch. Designers will become conductors, rather than musicians.”

I like the way we always position new technology in the best possible light. “You’re not going to lose your job. Your job is just going to change.” But tell that to the people who used to write commercial music, for example. The Internet has become a vast clearinghouse for every possible genre of music, all available for a pittance of what it would have cost to have a musician write, arrange, and produce a custom piece. It’s called stock. There are stock photographs, stock logos, stock book templates, stock music, stock house plans, and the list goes on. All of these have caused a significant disruption to old methods of commerce, and some would say that these stock versions of everything lack the kind of polish and ingenuity that used to distinguish artistic endeavors. The artists whose jobs they have obliterated refer to the work with a four-letter word.

Now, I confess I have used stock photography and stock music, but I have also used a lot of custom photography and custom music as well. Still, I can’t imagine crossing the line to a stock logo or stock publication design. Perish the thought! Why? Because they look like four-letter words: homogenized, templated, and the world does not need more blah. It’s likely that we also introduced these new forms of stock commerce in the best possible light, as great democratizing innovations that would enable everyone to afford music, or art, or design; that anyone can make, create, or borrow the things that professionals used to do.

As artificial intelligence becomes better at composing music, writing blogs and creating iterative designs (which it already does and will continue to improve), we should perhaps prepare for the day when we are no longer musicians or composers but rather simply listeners and watchers.

But let’s put that in the best possible light: Think of how much time we’ll have to think deep thoughts.

 


Superintelligence. Is it the last invention we will ever need to make?

I believe it is crucial that we move beyond preparing to adapt or react to the future and actively engage in shaping it.

An excellent example of this kind of thinking is Nick Bostrom’s TED talk from 2015.

Bostrom is concerned about the day when machine intelligence exceeds human intelligence (the guess is somewhere between twenty and thirty years from now). He points out that, “Once there is super-intelligence, the fate of humanity may depend on what the super-intelligence does. Think about it: Machine intelligence is the last invention that humanity will ever need to make. Machines will then be better at inventing [designing] than we are, and they’ll be doing so on digital timescales.”

His concern is legitimate. How do we control something that is smarter than we are? Anticipating AI will require more strenuous design thinking than that which produces the next viral game, app, or service. But these applications are where the lion’s share of the money is going. When it comes to keeping us from being at best irrelevant or at worst an impediment to AI, Bostrom is guardedly optimistic about how we can approach it. He thinks we could, “[…]create an A.I. that uses its intelligence to learn what we value, and its motivation system is constructed in such a way that it is motivated to pursue our values or to perform actions that it predicts we would approve of.”

At the crux of his argument and mine: “Here is the worry: Making superintelligent A.I. is a really hard challenge. Making superintelligent A.I. that is safe involves some additional challenge on top of that. The risk is that if somebody figures out how to crack the first challenge without also having cracked the additional challenge of ensuring perfect safety.”

Beyond machine learning (which has many facets), there is a wide-ranging set of technologies, from genetic engineering to drone surveillance, to next-generation robotics, and even VR, that could be racing forward without anyone thinking about this “additional challenge.”

This could be an excellent opportunity for designers. But, to do that, we will have to broaden our scope to engage with science, engineering, and politics. More on that in future blogs.


Surveillance. Are we defenseless?

Recent advancements in AI that are increasing exponentially (in areas such as facial recognition) demonstrate a level of sophistication in surveillance that renders most of us defenseless. There is a new transparency, and virtually every global citizen is a potential specimen for scrutiny beneath the microscope. I was blogging about this before I ever set eyes on the CBS drama Person of Interest, but the premise that surveillance could be ubiquitous is very real. The series depicts a mega master computer that sees everything, but the idea of gathering a networked feed of the world’s cameras and a host of other accessible devices into a central data facility, where AI sorts, analyzes, and learns what kind of behavior is potentially threatening, is well within reach. It isn’t even a stretch to think that something like it already exists.

As with most technologies, however, they do not exist in a vacuum. Technologies converge. Take, for example, a recent article in WIRED about how accurate facial recognition is becoming even when the subject is pixelated or blurred. A common tactic to obscure the identity of a video witness or an innocent bystander is to blur or pixelate their face, a favored technique of Google Maps. Just go to any big-city street view and you will see that Google has systematically obscured license plates and faces. Today these methods no longer hold up against state-of-the-art facial recognition systems.

The next flag is the escalating sophistication of hacker technology. One of the most common methods is malware. Through an email or website, malware can infect a computer and raise havoc. Criminals often use it to ransom a victim’s computer before removing the infection. But not all hackers are criminals, per se. The FBI is pushing for the ability to use malware to digitally wiretap or otherwise infiltrate potentially thousands of computers using only a single warrant. Ironically, FBI Director James Comey recently admitted that he puts tape over the camera on his personal laptop. I wrote about this a few weeks back. What does that say about the security of our laptops and devices?

Is the potential for destructive attacks on our devices so pervasive that the only defense we have is duct tape? The idea that the NSA can listen in on your phone even when it’s off can be tracked at least as far back as Edward Snowden, and since 2014, experts have confirmed that the technology exists. In fact, some apps, albeit sketchy ones, purport to do exactly that. You won’t find them in the app store (for obvious reasons), but there are websites where you can click the “buy” button. According to the site Stalkertools.com, which doesn’t pass the legit news site test (note the use of “awesome” below), one of these apps promises that you can:

• Record all phone calls made and received, hear everything being said because you can record all calls and even listen to them at a later date.
• GPS Tracking, see on a map on your computer, the exact location of the phone
• See all sites that are opened on the phone’s web browser
• Read all the messages sent and received on IM apps like Skype, Whatsapp and all the rest
• See all the passwords and logins to sites that the person uses, this is thanks to the KeyLogger feature.
• Open and close apps with the awesome “remote control” feature
• Read all SMS messages and see all photos send and received on text messages
• See all photos taken with the phone’s camera

“How it work” “ The best monitoring for protect family” — Sketchy, you think?

I visited one of these sites (above) and, frankly, I would never click a button on a website that can’t form a sentence in English, and I would not recommend that you do either. Earlier this year, the UK Independent published an article in which Kelli Burns, a mass communication professor at the University of South Florida, alleged that Facebook regularly listens to users’ phone conversations to see what people are talking about. Of course, she said she can’t be certain of that.

Nevertheless, it’s out there, and if it has not already happened, eventually some organization or government will find a way to network the access points and begin collecting information across a comprehensive matrix of data points. And it would seem that we will have to find new forms of duct tape to attempt to manage whatever privacy we have left. I found a site that gives some helpful advice for determining whether someone is tapping your phone.

Good luck.

 


Did we design this future?

 

I have held this discussion before, but a recent video from FastCompany reinvigorates a provocative aspect of our design future and raises the question: Is it a future we’ve designed? It centers on the smartphone. There are a lot of cool things about our smartphones, like convenience, access, connectivity, and entertainment, just to name a few. It’s hard to believe that Steve Jobs introduced the very first iPhone just nine years ago; it went on sale on June 29, 2007. It was an amazing device, and it’s no shocker that it took off like wildfire. According to stats site Statista, “For 2016, the number of smartphone users is forecast to reach 2.08 billion.” Indeed, we can say they are everywhere. In the world of design futures, the smartphone becomes Exhibit A of how an evolutionary design change can spawn a complex system.

Most notably, there are the millions of apps available to users that promise a better way to calculate tips, listen to music, sleep, drive, search, exercise, meditate, or create. Hence, there is a gigantic network of people who make their living supplying user services. These are direct benefits to society and commerce. No doubt, our devices have also often saved us countless hours of analog work, enabled us to manage our arrivals and departures, and kept us in contact (however tenuously) with our friends and acquaintances. Smartphones have helped us find people in distress and helped us locate persons with evil intent. But there are also unintended consequences, like legislation to keep us from texting and driving because these actions have taken lives. There are issues with dependency and links to sleep disorders. Some lament the deterioration of human, one-on-one, face-to-face dialog and the distracted conversations at dinner or lunch. There are behavioral disorders, too. Since 2010 there have been a Smartphone Addiction Rating Scale (SARS) and a Young Internet Addiction Scale (YIAS). Overuse of mobile phones has prompted dozens of studies of adolescents as well as adults, and there are links to increased levels of ADHD and a variety of psychological disorders, including stress and depression.

So, while we rely on our phones for all the cool things they enable us to do, we are—in less than ten years—experiencing a host of unintended consequences. One of these is privacy. Whether Apple or another brand, the intricacies of smartphone technology are substantially the same. This video shows how easy it is to hack your phone: to activate its microphone or camera, access your contact list, or track your location. And, with the right tools, it is frighteningly simple. What struck me most after watching the video was not how much we are at risk of being hacked, eavesdropped on, or perniciously viewed, but the comments from a woman on the street. She said, “I don’t have anything to hide.” She is not the first millennial I have heard say this. And that is what, perhaps, bothers me most—our adaptability in the face of the slow, incremental erosion of what used to be our private space.

We can’t pin responsibility entirely on the smartphone. We have to include the idea of social media, going back to the days of (amusingly) MySpace. Sharing yourself with a group of close friends gradually gave way to the knowledge that the photo or info might also get passed along to complete strangers. It wasn’t, perhaps, your original intention, but, oh well, it’s too late now. Maybe that’s when we decided that we had better get used to sharing our space, our photos (compromising or otherwise), our preferences, our adventures and misadventures with outsiders, even if they were creeps trolling for juicy tidbits. As we chalked up that seemingly benign modification of our behavior to adaptability, the first curtain fell. If someone is going to watch me, and there’s nothing I can do about it, then I may as well get used to it. We adjusted as a defense mechanism. Paranoia was the alternative, and no one wants to think of themselves as paranoid.

A few weeks ago, I posted an image of Mark Zuckerberg’s laptop with tape over the camera and microphone. Maybe he’s more concerned with privacy since his world is full of proprietary information. But, as we become more accustomed to being both constantly connected and potentially tracked or watched, when will the next curtain fall? If design is about planning, directing or focusing, then the absence of design would be ignoring, neglecting or turning away. I return to the first question in this post: Did we design this future? If not, what did we expect?


Privacy or paranoia?

 

If you’ve been a follower of this blog for a while, then you know that I am something of a privacy wonk. I’ve written about it before (about a dozen times), and I’ve even built a research project (that you can enact yourself) around it. A couple of things transpired this week to remind me that privacy is tenuous. (It could also be all the back episodes of Person of Interest that I’ve been watching lately, or David Staley’s post last April about the Future of Privacy.) First, I received an email from a friend this week alerting me to a little presumption that my software is spying on me. I’m old enough to remember when you purchased software as a set of CDs (or even disks). You loaded it on your computer, and it seemed to last for years before you needed to upgrade. Let’s face it, most of us use only a small subset of the features in our favorite applications. I remember using Photoshop 5 for quite a while before upgrading, and the same with the rest of what is now called the Adobe Creative Suite. I still use the primary features of Photoshop 5, Illustrator 10, and InDesign (ver. whatever) 90% of the time. In my opinion, the add-ons to those apps have just slowed things down, and of course, the expense has skyrocketed. Gone are the days when you could upgrade your software every couple of years. Now you have to subscribe at a clip of about $300 a year for the Adobe Creative Suite. Apparently, the old business model was not profitable enough. But then came the Adobe Creative Cloud. (Sound of an angelic chorus.) Now it takes my laptop about 8 minutes to load into the cloud and boot up my software. Plus, it stores stuff. I don’t need it to store stuff for me. I have backup drives and archive software to do that for me.

Back to the privacy discussion. My friend’s email alerted me to this little tidbit hidden inside the Creative Cloud Account Manager.

Learn elsewhere, please.

Under the Security and Privacy tab, there are a couple of options. The first is Desktop App Usage. Here, you can turn this on or off. If it’s on, one of the things it tracks is,

“Adobe feature usage information, such as menu options or buttons selected.”

That means it tracks your keystrokes. Possibly this only occurs when you are using that particular app, but uh-uh, no thanks. Switch that off. Next up is a more pernicious option; it’s called Machine Learning. Hmm. We all know what that is, and I’ve written about that before, too. Just do a search. Here, Adobe says,

“Adobe uses machine learning technologies, such as content analysis and pattern recognition, to improve our products and services. If you prefer that Adobe not analyze your files to improve our products and services, you can opt-out of machine learning at any time.”

Hey, Adobe, if you want to know how to improve your products and services, how about you ask me, or better yet, pay me to consult. A deeper dive into ‘machine learning’ tells me more. Here are a couple of quotes:

“Adobe uses machine learning technologies… For example, features such as Content-Aware Fill in Photoshop and facial recognition in Lightroom could be refined using machine learning.”

“For example, we may use pattern recognition on your photographs to identify all images of dogs and auto-tag them for you. If you select one of those photographs and indicate that it does not include a dog, we use that information to get better at identifying images of dogs.”
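The loop Adobe describes (auto-tag a photo, let the user confirm or correct the tag, fold the correction back in as new labeled data) is the basic feedback pattern of supervised machine learning. Here is a minimal, hypothetical sketch of that pattern; it is not Adobe’s actual code, and the photo IDs and function names are invented for illustration.

```python
# Hypothetical sketch of the feedback loop described above: the system
# auto-tags a photo, the user corrects it, and the correction becomes a
# new labeled example for future training. Not Adobe's implementation.

labeled_examples = []  # grows as users confirm or correct tags

def auto_tag(photo_id):
    """Stand-in for a real classifier; pretend everything is a dog."""
    return "dog"

def record_user_feedback(photo_id, accepted):
    """Turn the user's confirmation or correction into a labeled example."""
    predicted = auto_tag(photo_id)
    true_label = predicted if accepted else "not_" + predicted
    labeled_examples.append((photo_id, true_label))

record_user_feedback("IMG_0042", accepted=False)  # user says: not a dog
print(labeled_examples)  # [('IMG_0042', 'not_dog')] — fodder for retraining
```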

Facial recognition? Nope. Help me find dog pictures? Thanks, but I think I can find them myself.

I know how this works. The more data the machine can feed on, the better it becomes at learning. I would just rather Adobe get their data by searching it out themselves. I’m sure they’ll be okay. (After all, there are a few million people who never look at their account settings.) Also, keep in mind, it’s their machine, not mine.

The last item on my privacy rant just validated my paranoia. I ran across this picture of Facebook billionaire Mark Zuckerberg hamming it up for his FB page.


 

In the background is his personal laptop. Upon closer inspection, we see that Zuck has a piece of duct tape covering his laptop cam and his dual microphones on the side. He knows.


 

Go get a piece of tape.


Who are you?

 

There have been a few articles in the recent datasphere centered around the pervasive tracking of our online activity, from the benign to the bordering-on-unethical. One was from FastCompany, highlighting some practices that web marketers use to track the folks who visit their sites. The article by Steve Melendez lists a handful of these. They range from basics like first-party cookies and A/B testing to more invasive methods such as psychological testing (thanks, Facebook), third-party tracking cookies, and differential pricing. The cookie is, of course, the most basic. I use them on this site and on The Lightstream Chronicles to see if anyone is visiting, where they’re coming from, and a bunch of other minutiae. Using Google Analytics, I can, for example, see what city or country my readers are coming from, their age and sex, whether they are regulars or new visitors, whether they visit via mobile or desktop, Apple or Windows, and, if they came to my site by way of referral, where they originated. Then I know if my ads for the graphic novel are working. I find this harmless. I have no interest in knowing your sexual preference or where you shop, and above all, I’m not selling anything (at least not yet). I’m just looking for more eyeballs. More viewers mean that I’m not wasting my time and that somebody is paying attention. It’s interesting that a couple of months ago the EU internet authorities sent me a snippet of code that I was “required” to post on the LSC site alerting my visitors that I use cookies. Aside from the U.S., my highest viewership is from the UK. It’s interesting that they are aware that their citizens are visiting. Hmm.

I have software that allows me to A/B test, which means I could change up something on the graphic novel homepage and see if it gets more reaction than a previous version. But I barely have the time to publish a new blog or episode, much less create different versions and test them. A one-man show has its limitations.
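For anyone curious how simple the mechanics of a basic A/B test are, here is a minimal Python sketch; the visitor counts and click-through rates are made up for illustration.

```python
import random

# Minimal sketch of an A/B test: split visitors randomly between two
# versions of a page and compare how often each version gets a click.
# Numbers here are invented; real tests also need a significance check.

random.seed(1)
clicks = {"A": 0, "B": 0}
visits = {"A": 0, "B": 0}

for _ in range(10_000):
    variant = random.choice(["A", "B"])            # random assignment
    visits[variant] += 1
    click_rate = 0.05 if variant == "A" else 0.07  # pretend B performs better
    if random.random() < click_rate:
        clicks[variant] += 1

for v in ("A", "B"):
    print(f"Variant {v}: {clicks[v] / visits[v]:.3f} click-through rate")
```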

The rest of the tracking methods highlighted in the above article require a lot of devious programming. Since I have my hands full with the basics, this stuff is way above my pay grade. Even if it wasn’t, I think it all goes a bit too far.

Personally, I deplore most internet advertising. I know that makes me a hypocrite since I use it from time to time to drive traffic to my site. I also realize that it is probably a necessary evil. Sites need revenue, or they can’t pump out the content on which we have come to rely. Unfortunately, the landscape often turns into a melee. Tumblr is a good example. Initially, they integrated their ads into the format of their posts. So as you are scrolling through the content, you see an ad within their signature brand presentation. Cool. Then they started doing separate in-line ads. These looked entirely different from their brand content, and the ads were those annoying things like “Grandma discovers the fountain of youth.” Not cool. Then they introduced this floating ad box that tracks you all the way down the page as you scroll through content. You get no break from it. It’s distracting, and based on the content, it can be horrifying, like Hillary Clinton staring at you for seven minutes. How much can a person take?

And it won’t go away.

Since my blog is future-oriented, the question arises: what does this have to do with the future? Plenty. These marketing techniques will only become more sophisticated. Many of them already incorporate artificial intelligence to map your activity and predict your every want and need—maybe even the ones you didn’t think anyone knew you had. Is this an invasion of privacy? If it is, it’s going to get more invasive. And as I’m fond of saying, we need to pay attention to these technologies and practices now, or we won’t have a say in where they end up. As a society, we have to do better than just adapt to whatever comes along. We need to help point them in the right direction from the beginning.

 


Vision comes from looking to the future.

 

I was away last week, but I left off with a post about proving that some of the things we currently think of as sci-fi or fantasy are not only plausible but may even be on their way to reality. In that post, I was tracing the logical progression toward implantable technology, or biohacking.

The latest is a robot toy from a company called Anki. Once again, WIRED provided the background on this product, and it is an excellent example of technological convergence, which I have discussed many times before. Essentially, “technovergence” is when multiple cutting-edge technologies come together in unexpected and sometimes unpredictable ways. In this case, the toy brings together AI, machine learning, computer vision, robotics, deep character development, facial recognition, and a few more. According to the video below,

“There have been very few applications where a robot has felt like a character that connects with humans around it. For that, you really need artificial intelligence and robotics. That’s been the missing key.”

According to David Pierce, with WIRED,

“Cozmo is a cheeky gamer; the little scamp tried to fake me into tapping my block when they didn’t match, and stormed off when I won. And it’s those little tics, the banging of its lift-like arm and spinning in circles and squawking in its Wall-E voice, that really makes you want to refer to the little guy as ‘he’ rather than ‘it.’”

What strikes me as especially interesting is that my students designed their own version of this last semester. (I’m pretty sure that they knew nothing about this particular toy.) The semester was a rigorous design fiction class that took a hard look at what was possible in the next five to ten years. For some, the class was something like hell, but the robot concept my students put together is amazingly similar to Cozmo.

I think this is proof of more than what is possible; it’s evidence that vision comes from looking to the future.


Future proof.

 

There is no such thing as future proof anything, of course, so I use the term to refer to evidence that a current idea is becoming more and more likely to be something we will see in the future. The evidence I am talking about surfaced in a FastCo article this week about biohacking and the new frontier of digital implants. Biohacking has a loose definition and can refer to anything from using genetic material without regard to ethical procedures, to DIY biology, to pseudo-bioluminescent tattoos, to body modification for functional enhancement—see transhumanism. Last year, my students investigated this and determined that a society willing to accept internal implants was not a near-future scenario. Nevertheless, according to FastCo author Steven Melendez,

“a survey released by Visa last year that found that 25% of Australians are ‘at least slightly interested’ in paying for purchases through a chip implanted in their bodies.”

Melendez goes on to describe a wide variety of implants already in use for medical, artistic, and personal-efficiency purposes, and he interviews Tim Shank, president of a futurist group called TwinCities+. Shank says,

“[For] people with Android phones, I can just tap their phone with my hand, right over the chip, and it will send that information to their phone.”

Amal Graafstra’s Hands [Photo: courtesy of Amal Graafstra] c/o WIRED
The popularity of body piercings and tattoos—also once considered invasive procedures—has skyrocketed. Implantable technology, especially as it becomes more functionally relevant, could follow a similar curve.

I saw this coming some years ago when writing The Lightstream Chronicles. The story, as many of you know, takes place in the far future, where implantable technology is mundane and part of everyday life. People regulate their body chemistry, access the Lightstream (the evolved Internet), and make “calls” using fingertips embedded with Luminous Implants. These future implants talk directly to implants in the brain and other systemic body centers to make adjustments or provide information.

An ad for Luminous Implants, and the “tap” numbers for local attractions.