Tag Archives: sci-fi

But nobody knows what better is.

South by Southwest, otherwise known as SXSW, calls itself a film and music festival and interactive media conference. It’s held every spring in Austin, Texas. Other than maybe the Las Vegas Consumer Electronics Show or San Diego’s Comic-Con, I can’t think of many conferences that generate as much buzz as SXSW. This year was no different; I will have blog fodder for weeks. Though I can’t speak to the film or music side, I’m sure they were scintillating. Under the category of interactive, most of the buzz is about technology in general, as tech gurus and futurists are always in attendance, along with celebs who align themselves with the future.

Once again at SXSW, Ray Kurzweil was on stage. Kurzweil is probably the one guy I quote most throughout this blog, so here we go again. Two tech sites caught my eye this week, reporting on Kurzweil’s latest prediction, which moves up the date of the Singularity from 2045 to 2029; that’s 12 years away. Since we are enmeshed in the world of exponentially accelerating technology, I have encouraged my students to start wrapping their heads around the idea of exponential growth. In our most recent project, it was a struggle just to embrace the idea of how, in only seven years, we could see transformational change. If Kurzweil is right about his latest prognostication, then 12 years could be a real stunner. In case you are visiting this blog for the first time, the Singularity to which Kurzweil refers is acknowledged as the point at which computer intelligence exceeds human intelligence; it will know more, anticipate more, and analyze more than any human could. Nick Bostrom calls it the last invention we will ever need to make. We’ve already seen this to some extent with IBM’s Watson beating the pants off a couple of Jeopardy masters and Google’s DeepMind handily beating a Go genius at a game that most thought too complex for a computer to handle. Some refer to this “computer” as a superintelligence and warn that we had better be designing the braking mechanism in tandem with the engine, or this smarter-than-us computer may outsmart us in unfortunate ways.

In an article in Scientific American, Northwestern University psychology professor Paul Weber says we are bombarded each day with about 2.5 exabytes of data and that the human brain can only store an estimated 2.5 petabytes (a petabyte is a million gigabytes). Of course, the bombardment will continue to increase. Another voice that emerges in this discussion is Rob High, IBM’s vice president and chief technology officer. According to the tech blog Futurism, High was part of a panel discussion at the American Institute of Aeronautics and Astronautics (AIAA) SciTech Conference 2017. High said,

“…we have a very desperate need for cognitive computing…The information being produced is far surpassing our ability to consume and make use of…”

On the surface, this seems like a compelling argument for faster, more pervasive computing. But since it is my mission to question otherwise compelling arguments, I want to ask whether we actually need to process 2.5 exabytes of information. It would appear that our existing technology has already turned on the firehose of data (did we give it permission?), and now it’s up to us to find a way to drink from it. To me, it sounds like we need a regulator, not a bigger gullet. I have observed that the traditional argument in favor of more, better, faster often comes wrapped in the package of help for humankind.
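To put those two numbers side by side, here is a quick back-of-the-envelope check, a minimal sketch that assumes the decimal convention of 1,000 petabytes to the exabyte:

```python
# Rough scale check on the figures quoted above (assumes decimal SI units:
# 1 exabyte = 1,000 petabytes).
DAILY_DATA_EB = 2.5       # exabytes of data produced per day, per the article
BRAIN_CAPACITY_PB = 2.5   # estimated storage of one human brain, in petabytes

daily_data_pb = DAILY_DATA_EB * 1_000              # exabytes -> petabytes
brains_per_day = daily_data_pb / BRAIN_CAPACITY_PB

print(f"{daily_data_pb:,.0f} PB per day, roughly {brains_per_day:,.0f} "
      "brain-capacities of data every single day")
```

By those numbers, the daily firehose amounts to roughly a thousand brains’ worth of data; no amount of widening the gullet closes that gap.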

Rob High, again from the Futurism article, says,

“‘If you’re a doctor and you’re trying to figure out the best way to treat your patient, you don’t have the time to go read the latest literature and apply that knowledge to that decision,’ High explained. ‘In any scenario, we can’t possibly find and remember everything.’ This is all good news, according to High. We need AI systems that can assist us in what we do, particularly in processing all the information we are exposed to on a regular basis — data that’s bound to even grow exponentially in the next couple of years.”

In another Futurism article, Kurzweil uses similar logic:

“We’re going to be able to meet the physical needs of all humans. We’re going to expand our minds and exemplify these artistic qualities that we value.”

The other rationale that almost always becomes coupled with expanding our minds is that we will be “better.” No one, however, defines what better is. You could be a better jerk. You could be a better rapist or terrorist or megalomaniac. What are we missing, exactly, that we have to be smarter, or that Bach or Mozart are suddenly inferior? Is our quality of life that impoverished? And for those who are impoverished, how does this help them? And what about making us smarter? Smarter at what?

But not all is lost. On a more positive note, Futurism, in a third article (they were busy this week), reports,

“The K&L Gates Endowment for Ethics and Computational Technologies seeks to introduce the thoughtful discussion on the use of AI in society. It is being established through funding worth $10 million from K&L Gates, one of the United States’ largest law firms, and the money will be used to hire new faculty chairs as well as support three new doctoral students.”

Though I’m not sure we can consider this a regulator; it’s more like something to lessen the pain of swallowing.

Finally (for this week), back to Rob High,

“Smartphones are just the tip of the iceberg,” High said. “Human intelligence has its limitations and artificial intelligence is going to evolve in a lot of ways that won’t be similar to human intelligence. But, I think they will work best in the presence of humans.”

So, I’m more concerned with when artificial intelligence is not working at its best.


Paying attention.

I want to make a T-shirt. On the front, it will say, “7 years is a long time.” On the back, it will say, “Pay attention!”

What am I talking about? I’ll start with some background. This semester, I am teaching a collaborative studio with designers from visual communications, interior design, and industrial design. Our topic is Humane Technologies, and we are examining the effects of an Augmented Reality (AR) system that could be ubiquitous in 7 years. The process began with an immersive scan of the available information and emerging advances in AR, VR, IoT, human augmentation (HA) and, of course, AI. In my opinion, these are just a few of the most transformative technologies currently attracting the heaviest investment across the globe. And where the money goes, the most rapid advancement follows.

A conversation starter.

One of the biggest challenges for the collaborative studio class (myself included) is to think seven years out. Although we read Kurzweil’s Law of Accelerating Returns, our natural tendency is to think linearly, not exponentially. One of my favorite Kurzweil illustrations is this:

“Exponentials are quite seductive because they start out sub-linear. We sequenced one ten-thousandth of the human genome in 1990 and two ten-thousandths in 1991. Halfway through the genome project, 7 ½ years into it, we had sequenced 1 percent. People said, “This is a failure. Seven years, 1 percent. It’s going to take 700 years, just like we said.” Seven years later it was done, because 1 percent is only seven doublings from 100 percent — and it had been doubling every year. We don’t think in these exponential terms. And that exponential growth has continued since the end of the genome project. These technologies are now thousands of times more powerful than they were 13 years ago, when the genome project was completed.”1
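To make the arithmetic in that quote concrete, here is a minimal sketch of the doubling, nothing more:

```python
# Kurzweil's genome example: 1 percent done at the halfway point,
# then doubling every year until the project is complete.
progress = 0.01   # 1 percent sequenced, 7.5 years in
years = 0

while progress < 1.0:
    progress *= 2   # one doubling per year
    years += 1
    print(f"Year {years}: {min(progress, 1.0):.0%} complete")

# Seven doublings carry 1 percent past 100 percent, which is why
# "seven years, 1 percent" was the halfway point, not a failure.
```

Run it and the loop finishes in seven passes, exactly the seven doublings Kurzweil describes.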

So when I hear a policymaker say, “We’re a long way from that,” I cringe. We’re not a long way away from that. The iPhone was introduced on June 29, 2007, not quite ten years ago. The ripple effects from that little technological marvel are hard to catalog. With the smartphone, we have transformed everything from social and behavioral norms to privacy and safety. As my students examine the next possible phase of our thirst for the latest and greatest, AR (and its potential for smartphone-like ubiquity), I want them to ask the questions that relate to supporting systems, along with the social and ethical repercussions of these transformations. At the end of it all, I hope that they will walk away with an appreciation for paying attention to what we make and why. For example, why would we make a machine that would take away our job? Why would we build a superintelligence? More often than not, I fear the answer is because we can.

Our focus on the technologies mentioned above is just a start. There are more than these, and we shouldn’t forget things like precise genetic engineering techniques such as CRISPR/Cas9 Gene Editing, neuromorphic technologies such as microprocessors configured like brains, the digital genome that could be the key to disease eradication, machine learning, and robotics.

Though they may sound innocuous by themselves, they each have gigantic implications for disruptions to society. The wild card in all of these is how they converge with each other, with results that no one anticipated. One such mutation would be when autonomous weapons systems (AI + robotics + machine learning) converge with an aggregation of social media activity to predict, isolate, and eliminate a viral uprising.

From recent articles and research by the Department of Defense, this is no longer theoretical; we are actively pursuing it. I’ll talk more about that next week. Until then, pay attention.

 

1. http://www.bizjournals.com/sanjose/news/2016/09/06/exclusivegoogle-singularity-visionary-ray.htm

Now I know that Kurzweil is right.

 

In a previous blog entitled “Why Kurzweil is probably right,” I made this statement,

“Convergence is the way technology leaps forward. Supporting technologies enable formerly impossible things to become suddenly possible.”

That blog was talking about how we are developing AI systems at a rapid pace. I quoted a WIRED magazine article by David Pierce that was previewing consumer AIs already in the marketplace and some of the advancements on the way. Pierce said that a personal agent is,

“…only fully useful when it’s everywhere, when it can get to know you in multiple contexts—learning your habits, your likes and dislikes, your routine and schedule. The way to get there is to have your AI colonize as many apps and devices as possible.”

Then, I made my usual cautionary comment about how such technologies will change us. And they will. So, if you follow this blog, you know that I throw cold water onto technological promises as a matter of course. I do this because I believe that someone has to.

Right now I’m preparing my collaborative design studio course. We’re going to be focusing on AR and VR, but since convergence is an undeniable influence on our techno-social future, we will have to keep AI, human augmentation, the Internet of Things, and a host of other emerging technologies on the desktop as well. In researching the background for this class, I read three articles from Peter Diamandis for the Singularity Hub website. I’ve written about Peter before as well. He’s brilliant. He’s also a cheerleader for the Singularity. That said, these articles, one on the Internet of Everything (IoE/IoT), one on Artificial Intelligence (AI), and another on Augmented and Virtual Reality (AR/VR), are full of promises. Most of what we thought of as science fiction even a couple of years ago is now happening with such speed that Diamandis and his cohorts believe it will be imminent in only three years. And by that I mean commonplace.

If that isn’t enough for us to sit up and take notice, then I am reminded of an article from the Silicon Valley Business Journal, another interview with Ray Kurzweil. Kurzweil, of course, has pretty much convinced us all by now that the Law of Accelerating Returns is no longer hyperbole. If anyone thought that it was only hype, sheer observation should have brought them to their senses. In this article, Kurzweil gives an excellent illustration of how exponential growth actually plays out, no longer as a theory but as demonstrable practice.

“Exponentials are quite seductive because they start out sub-linear. We sequenced one ten-thousandth of the human genome in 1990 and two ten-thousandths in 1991. Halfway through the genome project, 7 ½ years into it, we had sequenced 1 percent. People said, “This is a failure. Seven years, 1 percent. It’s going to take 700 years, just like we said.” Seven years later it was done because 1 percent is only seven doublings from 100 percent — and it had been doubling every year. We don’t think in these exponential terms. And that exponential growth has continued since the end of the genome project. These technologies are now thousands of times more powerful than they were 13 years ago when the genome project was completed.”

When you combine that with the nearly exponential chaos of hundreds of other converging technologies, the changes to our world and behavior are indeed coming at us like a bullet train. Ask any Indy car driver: when things are happening that fast, you have to be paying attention. But when the input is like a firehose and the motivations are unknown, how on earth do we do that?

Personally, I see this as a calling for design thinkers worldwide. Those in the profession, schooled in the ways of design thinking, have been espousing our essential worth to the realm of wicked problems for some time now. Well, problems don’t get more wicked than this.

Maybe we can design an AI that could keep us from doing stupid things with technologies that we can make but cannot yet comprehend the impact of.


Big-Data Algorithms. Don’t worry. Be happy.

 

It’s easier for us to let the data decide for us. At least that is the idea behind the global digital design agency Huge. Aaron Shapiro, its CEO, says, “The next big breakthrough in design and technology will be the creation of products, services, and experiences that eliminate the needless choices from our lives and make ones on our behalf, freeing us up for the ones we really care about: Anticipatory design.”

Buckminster Fuller wrote about Anticipatory Design Science, but this is not that. Trust me. Shapiro’s version is about allowing big data, by way of artificial intelligence and neural networks, to become so familiar with us and our preferences that it anticipates what we need to do next. In this vision, I don’t have to decide what to wear or eat, how to get to work, when to buy groceries or gasoline, what color trousers go with my shoes, or when it’s time to buy new shoes. No decisions will be necessary. Interestingly, Shapiro sees this as a good thing. The idea comes from a flurry of activity around something called decision fatigue. What is that? In a nutshell, the theory holds that our decision-making capacity is a reservoir that gradually gets depleted the more decisions we make, possibly as a result of body chemistry. After a long string of decisions, according to the theory, we are more likely to make a bad decision or none at all. Things like willpower disintegrate along with our decision-making.

Among the many articles on this topic in the last few months was one from FastCompany, which wrote that,

“Anticipatory design is fundamentally different: decisions are made and executed on behalf of the user. The goal is not to help the user make a decision, but to create an ecosystem where a decision is never made—it happens automatically and without user input. The design goal becomes one where we eliminate as many steps as possible and find ways to use data, prior behaviors and business logic to have things happen automatically, or as close to automatic as we can get.”

Supposedly this frees “us up for the ones we really care about.” My questions are: who decides which decisions are important? And once we are freed from making decisions, will we even know that we have missed one we really care about?

“Google Now is a digital assistant that not only responds to a user’s requests and questions, but predicts wants and needs based on search history. Pulling flight information from emails, meeting times from calendars and providing recommendations of where to eat and what to do based on past preferences and current location, the user simply has to open the app for their information to compile.”

It’s easy to forget that AI as we currently know it goes under the name of Facebook or Google or Apple or Amazon. We tend to think of AI as some ghostly future figure or a bank of servers, or an autonomous robot. It reminds me a bit of my previous post about Nick Bostrom and the development of SuperIntelligence. Perhaps it is a bit like an episode of Person of Interest. As we think about designing systems that think for us and decide what is best for us, it might be a good idea to think about what it might be like to no longer think—as long as we still can.

 

 


Yes, you too can be replaced.

Over the past weeks, I have begun to look at the design profession and design education in new ways. It is hard to argue with the idea that all design is future-based. Everything we design is destined for some point beyond now where the thing or space, the communication, or the service will exist. If it already existed, we wouldn’t need to design it. So design is all about the future. For most of the 20th century and the first 16 years of this one, the lion’s share of our work as designers has focused primarily on very near-term, very narrow solutions: a better tool, a more efficient space, a more useful interface, or a more satisfying experience. In fact, the tighter the constraints, the narrower the problem statement and the greater the opportunity to apply design thinking to resolve it in an elegant and, hopefully, aesthetically or emotionally pleasing way. Such challenges are especially gratifying for seasoned professionals, who have developed an almost intuitive eye for framing these dilemmas so that novel and efficient solutions result. Hence, over the course of years or even decades, the designer amasses a sort of micro-scale, big-data assemblage of prior experiences that helps him or her reframe problems and construct—alone or with a team—satisfactory methodologies and practices to solve them.

Coincidentally, this process of gaining experience is exactly the idea behind machine learning and artificial intelligence. But since computers can amass knowledge by analyzing millions of experiences and judgments, it is theoretically possible that an artificial intelligence could gain this “intuitive eye” to a degree far surpassing the capacity of any individual designer.

That is the idea behind a brash (and annoyingly self-conscious) article from the American Institute of Graphic Arts (AIGA) entitled Automation Threatens To Make Graphic Designers Obsolete. Titles like this are a hook, of course. Designers, deep down, assume that they can never be replaced. They believe this because, so far, artificial intelligence lacks understanding, empathy, and emotional verve at its core. We saw this earlier in 2016 when an AI chatbot went Nazi because a bunch of social media hooligans realized that Tay (the name of the Microsoft chatbot) was in learn mode. If you told “her” Nazis were cool, she believed you. It was proof, again, that junk in is junk out.

The AIGA author, Rob Peart, pointed to Autodesk’s Dreamcatcher software, which is capable of rapidly generating surprisingly creative, albeit roughly detailed, prototypes. Peart features a quote from an executive creative director at the techno-ad-agency Sapient Nitro: “A designer’s role will evolve to that of directing, selecting, and fine tuning, rather than making. The craft will be in having vision and skill in selecting initial machine-made concepts and pushing them further, rather than making from scratch. Designers will become conductors, rather than musicians.”

I like the way we always position new technology in the best possible light: “You’re not going to lose your job. Your job is just going to change.” But tell that to the people who used to write commercial music, for example. The Internet has become a vast clearinghouse for every possible genre of music, all available for a pittance of what it would have cost to have a musician write, arrange, and produce a custom piece. It’s called stock. There are stock photographs, stock logos, stock book templates, stock music, stock house plans, and the list goes on. All of these have caused a significant disruption to old methods of commerce, and some would say that these stock versions of everything lack the kind of polish and ingenuity that used to distinguish artistic endeavors. The artists whose jobs they have obliterated refer to the work with a four-letter word.

Now, I confess I have used stock photography and stock music, but I have also used a lot of custom photography and custom music as well. Still, I can’t imagine crossing the line to a stock logo or stock publication design. Perish the thought! Why? Because they look like four-letter words: homogenized, templated, and the world does not need more blah. It’s likely that we also introduced these new forms of stock commerce in the best possible light, as great democratizing innovations that would enable everyone to afford music, or art, or design, and allow anyone to make, create, or borrow the things that professionals used to do.

As artificial intelligence becomes better at composing music, writing blogs and creating iterative designs (which it already does and will continue to improve), we should perhaps prepare for the day when we are no longer musicians or composers but rather simply listeners and watchers.

But let’s put that in the best possible light: Think of how much time we’ll have to think deep thoughts.

 


Superintelligence. Is it the last invention we will ever need to make?

I believe it is crucial that we move beyond merely preparing to adapt or react to the future and actively engage in shaping it.

An excellent example of this kind of thinking is Nick Bostrom’s TED talk from 2015.

Bostrom is concerned about the day when machine intelligence exceeds human intelligence (the guess is somewhere between twenty and thirty years from now). He points out that, “Once there is super-intelligence, the fate of humanity may depend on what the super-intelligence does. Think about it: Machine intelligence is the last invention that humanity will ever need to make. Machines will then be better at inventing [designing] than we are, and they’ll be doing so on digital timescales.”

His concern is legitimate. How do we control something that is smarter than we are? Anticipating AI will require more strenuous design thinking than that which produces the next viral game, app, or service. But these applications are where the lion’s share of the money is going. When it comes to keeping us from being at best irrelevant or at worst an impediment to AI, Bostrom is guardedly optimistic about how we can approach it. He thinks we could, “[…]create an A.I. that uses its intelligence to learn what we value, and its motivation system is constructed in such a way that it is motivated to pursue our values or to perform actions that it predicts we would approve of.”

At the crux of his argument and mine: “Here is the worry: Making superintelligent A.I. is a really hard challenge. Making superintelligent A.I. that is safe involves some additional challenge on top of that. The risk is that if somebody figures out how to crack the first challenge without also having cracked the additional challenge of ensuring perfect safety.”

Beyond machine learning (which has many facets), there is a wide-ranging set of technologies, from genetic engineering to drone surveillance to next-generation robotics and even VR, that could be racing forward without anyone thinking about this “additional challenge.”

This could be an excellent opportunity for designers. But, to do that, we will have to broaden our scope to engage with science, engineering, and politics. More on that in future blogs.


Surveillance. Are we defenseless?

Recent advancements in AI, which are increasing exponentially (in areas such as facial recognition), demonstrate a level of sophistication in surveillance that leaves most of us defenseless. There is a new transparency, and virtually every global citizen is a potential specimen for scrutiny beneath the microscope. I was blogging about this before I ever set eyes on the CBS drama Person of Interest, but the premise that surveillance could be ubiquitous is very real. The series depicts a mega, master computer that sees everything, but the idea of gathering a networked feed of the world’s cameras and a host of other accessible devices into a central data facility, where AI sorts, analyzes, and learns what kind of behavior is potentially threatening, is well within reach. It isn’t even a stretch that something like it already exists.

As with most technologies, however, they do not exist in a vacuum. Technologies converge. Take, for example, a recent article in WIRED about how accurate facial recognition is becoming even when the subject is pixelated or blurred. A common tactic to obscure the identity of a video witness or an innocent bystander is to blur or pixelate their face, a favored technique of Google Maps. Just go to any big-city street view and you’ll see that Google has systematically obscured license plates and faces. Today these methods no longer hold up against state-of-the-art facial recognition systems.

The next flag is the escalating sophistication of hacker technology. One of the most common methods is malware. Through an email or website, malware can infect a computer and wreak havoc. Criminals often use it to ransom a victim’s computer before removing the infection. But not all hackers are criminals, per se. The FBI is pushing for the ability to use malware to digitally wiretap or otherwise infiltrate potentially thousands of computers using only a single warrant. Ironically, FBI Director James Comey recently admitted that he puts tape over the camera on his personal laptop; I wrote about this a few weeks back. What does that say about the security of our laptops and devices?

Is the potential for destructive attacks on our devices so pervasive that the only defense we have is duct tape? We can trace back to Edward Snowden the idea that the NSA can listen in on your phone even when it’s off, and since 2014, experts have confirmed that the technology exists. In fact, some apps, albeit sketchy ones, purport to do exactly that. You won’t find them in the app store (for obvious reasons), but there are websites where you can click the “buy” button. According to the site Stalkertools.com, which doesn’t pass the legit-news-site test (note the use of “awesome”), one of these apps promises that you can:

• Record all phone calls made and received, hear everything being said because you can record all calls and even listen to them at a later date.
• GPS tracking: see the exact location of the phone on a map on your computer
• See all sites that are opened on the phone’s web browser
• Read all the messages sent and received on IM apps like Skype, Whatsapp and all the rest
• See all the passwords and logins to sites that the person uses, thanks to the KeyLogger feature.
• Open and close apps with the awesome “remote control” feature
• Read all SMS messages and see all photos sent and received in text messages
• See all photos taken with the phone’s camera

“How it work” “ The best monitoring for protect family” — Sketchy, you think?

I visited one of these sites (above) and, frankly, I would never click a button on a website that can’t form a sentence in English, and I would not recommend that you do either. Earlier this year, the UK Independent published an article in which Kelli Burns, a mass communication professor at the University of South Florida, alleged that Facebook regularly listens to users’ phone conversations to see what people are talking about. Of course, she said she couldn’t be certain of that.

Nevertheless, it’s out there, and if it has not already happened, eventually some organization or government will find a way to network the access points and begin collecting information across a comprehensive matrix of data points. And it would seem that we will have to find new forms of duct tape to try to manage whatever privacy we have left. I found a site that gives some helpful advice for determining whether someone is tapping your phone.

Good luck.

 


Invalid?

In a scene from the 1997 movie Gattaca, co-star Uma Thurman steals a follicle of hair from love interest Ethan Hawke and takes it to the local DNA-sequencing booth (presumably they’re everywhere, like McDonald’s) to find out whether Hawke’s DNA is worthy of her affections. She passes the follicle in a paper-thin wrapper through a pass-through window as if she were buying a ticket for a movie. The attendant asks, “You want a full sequence?” Thurman confirms and then waits anxiously. Meanwhile, others step up to windows to submit their samples. A woman who just kissed her boyfriend has her lips swabbed and assures the attendant that the sample is only a couple of minutes old. In about a minute, Thurman receives a plastic tube with the results rolled up inside. Behind the glass, a voice says, “Nine point three. Quite a catch!”

 

In the futuristic society depicted in the movie, humans are either “valid” or “invalid.” Though discrimination based on your genetic profile is illegal and referred to as “genoism,” it is widely known to be a distinguishing factor in employment, promotion, and finding the right soul-mate.

Enter the story of Illumina, which I discovered by way of a FastCompany article earlier this week. Illumina is a hardware/software company. One might imagine them as the folks who make the fictitious machines behind the DNA booths in a science fiction future. Except they are already making them now. The company, which few of us have ever heard of, has 5,000 employees and more than $2 billion in annual revenues. Illumina’s products are selling like hotcakes, in both the clinical and consumer spheres.

“Startups have already entered the clinical market with applications for everything from “liquid biopsy” tests to monitor late-stage cancers (an estimated $1 billion market by 2020, according to the business consulting firm Research and Markets), to non-invasive pregnancy screenings for genetic disorders like Down Syndrome ($2.4 billion by the end of 2022).”

According to FastCo,

“Illumina has captured more than 70% of the sequencing market with these machines that it sells to academics, pharmaceutical companies, biotech companies, and more.”

You and I can do this right now. Companies like Ancestry.com and 23andMe will work up a profile of your DNA from a little bit of saliva sent through the mail. A few weeks after submitting your sample, these companies will send you a plethora of reports: your carrier status (for passing on inherited conditions), ancestry reports that track your origins, wellness reports, such as your propensity to be fat or thin, and traits like blue eyes or a unibrow. All of this costs about $200. Considering that sequencing DNA on this scale was a pipe dream ten years ago, it’s kind of a big deal. They don’t sequence everything; that requires one of Illumina’s more sophisticated machines and costs about $3,000.

If you put this technology in the context of my last post about exponential technological growth, it is easy to see that the price of machines, the speed of analysis, and the cost of a report are only going to come down, and faster than we think. At this point, everything will be arriving faster than we think. Here, if only to get your attention, I ring the bell. Illumina is investing in companies that bring this technology to your smartphone. With one company, Helix, “A customer might check how quickly they metabolize caffeine via an app developed by a nutrition company. Helix will sequence the customers’ genomic data and store it centrally, but the nutrition company delivers the report back to the user.” The team from Helix predicts “[…]that the number of people who have been sequenced will drastically increase […]that it will be 90% of people within 20 years.” (So, probably ten years is a better guess.)
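To show what “faster than we think” can look like in dollars, here is a toy projection. The yearly halving rate is purely my assumption for illustration; it is not a figure from the article:

```python
# Toy projection: how long before a ~$3,000 full sequence undercuts the
# ~$200 consumer kit, IF the cost keeps halving every year (an assumed
# rate, chosen only to illustrate exponential decline).
cost = 3000.0     # approximate cost of a full sequence today, per the article
target = 200.0    # rough price of a consumer DNA report
year = 0

while cost > target:
    cost /= 2     # one hypothetical halving per year
    year += 1
    print(f"Year {year}: about ${cost:,.0f}")

print(f"At that rate, full sequencing drops below ${target:,.0f} in ~{year} years.")
```

Even with a rate pulled out of the air, the point stands: on an exponential curve, “expensive” stops being a safe assumption very quickly.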

According to the article, the frontier for genomics is expanding.

“What comes next is writing DNA, and not just reading it. Gene-editing tools like CRISPR-Cas9 are making it cheaper and faster to move genes around, which has untold consequences for changing the environment and treating disease.”

CRISPR can do a lot more than that.

But, as usual, all of these developments focus on the bright side, the side that saves lives, and not the uncomfortable or unforeseen. There is the potential that your DNA will determine your insurance rates, or even whether you get insurance. Toying around in these realms, it is not difficult to imagine that you could “find anyone’s DNA” the way you can find anybody’s address or phone number. Maybe we’ll see this feature incorporated into dating sites. You won’t have to steal a hair follicle from your date; it will already be online, and if they don’t publish it, certainly people will ask, “What do you have to hide?”

And then there’s the possibility that your offspring might inherit an unfavorable trait, like that unibrow or maybe Down Syndrome. So maybe those babies will never be born, or we’ll use CRISPR to make sure the nose is straight, the eyes are green, the skin is tan, and the IQ is way up there. CRISPR gene editing and splicing will be expensive, of course. Some will be able to afford it. The rest? Well, they’ll have to find a way to love their children, flaws and all. So here are my questions: Will this make us more human or less human? Will our DNA become just another way to judge each other on how smart, or thin, or good-looking, or talented we are? Is it just another way to distinguish between the haves and have-nots?

If the apps are already in design, Uma Thurman may not have long to wait.

 


Transcendent Plan

 

One of my oft-quoted sources on future technology is Ray Kurzweil. A brilliant technologist, inventor, and futurist, Kurzweil seems to see it all very clearly, almost as though he were at the helm personally. Some of Kurzweil’s theses are crystal clear to me, such as an imminent approach toward the Singularity in a series of innocuous, ‘seemingly benign,’ steps. I also agree with his Law of Accelerating Returns,1 which posits that technology advances exponentially. In a recent interview with the Silicon Valley Business Journal, he nicely illustrated that idea.

“Exponentials are quite seductive because they start out sub-linear. We sequenced one ten-thousandth of the human genome in 1990 and two ten-thousandths in 1991. Halfway through the genome project, 7 ½ years into it, we had sequenced 1 percent. People said, “This is a failure. Seven years, 1 percent. It’s going to take 700 years, just like we said.” Seven years later it was done, because 1 percent is only seven doublings from 100 percent — and it had been doubling every year. We don’t think in these exponential terms. And that exponential growth has continued since the end of the genome project. These technologies are now thousands of times more powerful than they were 13 years ago, when the genome project was completed.”

Kurzweil says the same kinds of leaps are approaching for solar power, resources, disease, and longevity. Our tendency to think linearly instead of exponentially means that we can deceive ourselves into believing that technologies that ‘just aren’t there yet’ are ‘a long way off.’ In reality, they may be right around the corner.

I’m not as solid in my affirmation of Kurzweil (and others) when it comes to some of his other predictions. Without reading too much between the lines, you can see that there is a philosophy helping to drive Kurzweil: namely, he doesn’t want to die. Of course, who does? But his is a quest to deny death on a techno-transcendental level. Christianity holds that eternal life awaits the believer in Jesus Christ; other religions are satisfied that our atoms return to the greater cosmos, or that reincarnation is the next step. It would appear that Kurzweil has no time for faith. His bet is on science and technology. He states,

“I think we’re very much on track to have human-level AI by 2029, which has been my consistent prediction for 20 years, and then to be able to send nanobots into the brain in the 2030s and connect our biological neocortex to synthetic neocortex in the cloud.”

In the article mentioned above, Kurzweil states that his quest to live forever is not just about the 200-plus supplements that he takes daily. He refers to this regimen as “Bridge One.” Bridge One buys us time until technology catches up. Then “Bridge Two,” the “biotechnology revolution,” takes over and radically extends our life. If all else fails, our mind will be uploaded to the Cloud (which will have evolved into a synthetic neocortex), though it remains to be seen whether the sum total of a mind also equals consciousness in some form.

For many who struggle with the idea of death, religious or not, I wonder whether, when we dissect it, it is not the fear of physical decrepitude that scares us but the loss of consciousness: that unique ability of humans to comprehend their world, share language and emotions, and create and contemplate.

I would pose that it is indeed that consciousness that makes us human, along with the sense of injustice at the thought that we might lose it. It would seem that transcendence is in order. In one scenario this transcendence comes from God; in another, ‘we are as Gods.’2

So finally, I wonder whether all of these small, exponentially replicating innovations—culminating to the point where we are accessing Cloud-data only by thinking, or communicating via telepathy, or writing symphonies for eternity—will make us more or less human. If we decide that we are no happier, no more content or fulfilled, is there any going back?

Seeing as it might be right around the corner, we might want to think about these things now rather than later.

 

1. Kurzweil, R. (2001) The Law of Accelerating Returns. Kurzweil AI. Available at: http://www.kurzweilai.net/the-law-of-accelerating-returns (Accessed: October 10, 2015).
2. Brand, Stewart. “We Are as Gods.” The Whole Earth Catalog, September 1968, 1-58. Accessed May 04, 2015. http://www.wholeearth.com/issue/1010/article/195/we.are.as.gods.

Did we design this future?

 

I have held this discussion before, but a recent video from FastCompany reinvigorates a provocative aspect of our design future and begs the question: Is this a future we’ve designed? It centers on the smartphone. There are a lot of cool things about our smartphones: convenience, access, connectivity, and entertainment, just to name a few. It’s hard to believe that Steve Jobs introduced the very first iPhone just nine years ago, on June 29, 2007. It was an amazing device, and it’s no shocker that it took off like wildfire. According to the stats site Statista, “For 2016, the number of smartphone users is forecast to reach 2.08 billion.” Indeed, we can say they are everywhere. In the world of design futures, the smartphone becomes Exhibit A of how an evolutionary design change can spawn a complex system.

Most notably, there are the millions of apps available to users that promise a better way to calculate tips, listen to music, sleep, drive, search, exercise, meditate, or create. Hence, there is a gigantic network of people who make their living supplying user services. These are direct benefits to society and commerce. No doubt, our devices have also saved us countless hours of analog work, enabled us to manage our arrivals and departures, and kept us in contact (however tenuously) with our friends and acquaintances. Smartphones have helped us find people in distress and locate persons with evil intent. But there are also unintended consequences, like legislation to keep us from texting and driving, because those actions have taken lives. There are issues with dependency and links to sleep disorders. Some lament the deterioration of human, one-on-one, face-to-face dialog and the distracted conversations at dinner or lunch. There are behavioral disorders, too. Since 2010 there have been a Smartphone Addiction Rating Scale (SARS) and a Young Internet Addiction Scale (YIAS). Overuse of mobile phones has prompted dozens of studies of adolescents as well as adults, and there are links to increased levels of ADHD and a variety of psychological disorders, including stress and depression.

So, while we rely on our phones for all the cool things they enable us to do, we are—in less than ten years—experiencing a host of unintended consequences. One of these is privacy. Whether Apple or another brand, the intricacies of smartphone technology are substantially the same. This video shows why your phone is so easy to hack: activating the microphone or camera, accessing your contact list, or tracking your location. And, with the right tools, it is frighteningly simple. What struck me most after watching the video was not how much we are at risk of being hacked, eavesdropped on, or perniciously viewed, but the comments from a woman on the street. She said, “I don’t have anything to hide.” She is not the first millennial I have heard say this. And that is what, perhaps, bothers me most: our adaptability in the face of the slow, incremental erosion of what used to be our private space.

We can’t rest responsibility entirely on the smartphone. We have to include the idea of social media, going back to the days of (amusingly) MySpace. Sharing yourself with a group of close friends gradually gave way to the knowledge that the photo or info might also get passed along to complete strangers. That wasn’t, perhaps, your original intention, but, oh well, it’s too late now. Maybe that’s when we decided that we had better get used to sharing our space, our photos (compromising or otherwise), our preferences, our adventures and misadventures with outsiders, even if they were creeps trolling for juicy tidbits. As we chalked up that seemingly benign modification of our behavior to adaptability, the first curtain fell. If someone is going to watch me, and there’s nothing I can do about it, then I may as well get used to it. We adjusted as a defense mechanism. Paranoia was the alternative, and no one wants to think of themselves as paranoid.

A few weeks ago, I posted an image of Mark Zuckerberg’s laptop with tape over the camera and microphone. Maybe he’s more concerned with privacy since his world is full of proprietary information. But, as we become more accustomed to being both constantly connected and potentially tracked or watched, when will the next curtain fall? If design is about planning, directing or focusing, then the absence of design would be ignoring, neglecting or turning away. I return to the first question in this post: Did we design this future? If not, what did we expect?
