What did one AI say to the other AI?

 

I know what you want.

A design foundations student recently asked my advice on a writing assignment about something that might affect, or be affected by, design in the future. I told him to look up predictive algorithms. I have long contended that logic alone tells us that predictive algorithms, which take existing data and apply constraints, can be used to solve a problem, answer a question, or design something. With the advent of big data, the information going in only strengthens the credibility of the recommendations coming out. In case you haven’t noticed, big data is, well, big.
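If you want to see the bones of the idea, here is a minimal sketch in Python of what “taking existing data and applying constraints” looks like at toy scale. Everything in it, the numbers, the budget cap, the notion of a “rating,” is invented purely to illustrate the mechanism, not drawn from any real system.

```python
# A toy "predictive algorithm": learn from existing data, then apply a constraint.
# All numbers and names here are made up for illustration.
import numpy as np

# Existing data: how past projects were rated vs. how much was spent on them.
budget_spent = np.array([10, 20, 30, 40, 50], dtype=float)   # in thousands
user_rating  = np.array([2.1, 3.0, 3.4, 4.2, 4.6])           # out of 5

# "Prediction": fit a simple trend line to the historical data.
slope, intercept = np.polyfit(budget_spent, user_rating, deg=1)

def predicted_rating(budget):
    return slope * budget + intercept

# "Constraint": we can only afford budgets up to 35 (thousand).
candidates = np.arange(5, 60, 5)
feasible = [b for b in candidates if b <= 35]

# Recommend the feasible option with the best predicted outcome.
best = max(feasible, key=predicted_rating)
print(f"Recommended budget: {best}k, predicted rating ~ {predicted_rating(best):.2f}")
```

That is the whole trick: past data in, a model of the trend, a filter for what is allowed, and a recommendation out. Scale the data up by a few billion rows and you have the version that should alarm you.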

One of the design practitioners I follow is Amy Webb. Amy has been thinking about this longer than I have, but clearly we think alike, and we are looking at the same things. I don’t know if she is as alarmed as I am; we’ve never spoken. In her recent newsletter, her focus was on, what else, predictive algorithms. Amy alerted me to a whole trove of new developments, so many that I have decided to make this the first in a series of blog posts.

Keep in mind that, as I write this, these technologies are in their infancy. If they already impress you, then the future will likely blow you away. The first was something known as Project Dreamcatcher, from Autodesk. These are the people who make Maya, AutoCAD, and much of the software that designers, animators, engineers, and architects use every day. According to the website:

“The Dreamcatcher system allows designers to input specific design objectives, including functional requirements, material type, manufacturing method, performance criteria, and cost restrictions. Loaded with design requirements, the system then searches a procedurally synthesized design space to evaluate a vast number of generated designs for satisfying the design requirements. The resulting design alternatives are then presented back to the user, along with the performance data of each solution, in the context of the entire design solution space.”
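Autodesk hasn’t published Dreamcatcher’s internals, but the loop that description implies, generate candidates, test them against the stated requirements, rank the survivors, can be sketched in a few dozen lines. The “designs,” materials, and scoring below are stand-ins of my own invention, not anything from the actual system.

```python
# A bare-bones generate/evaluate/rank loop in the spirit of the Dreamcatcher
# description above. The "designs" and scoring criteria are invented stand-ins.
import random
from dataclasses import dataclass

@dataclass
class Design:
    thickness_mm: float
    material: str
    est_cost: float
    est_strength: float   # pretend performance metric

# (cost factor, strength factor) per material -- made-up numbers
MATERIALS = {"aluminum": (2.0, 1.0), "steel": (1.5, 1.8), "nylon": (0.8, 0.5)}

def generate_design():
    material = random.choice(list(MATERIALS))
    thickness = random.uniform(1.0, 10.0)
    cost_f, strength_f = MATERIALS[material]
    return Design(thickness, material,
                  est_cost=thickness * cost_f,
                  est_strength=thickness * strength_f)

def meets_requirements(d, max_cost=12.0, min_strength=6.0):
    # "Design requirements": a cost restriction and a performance criterion.
    return d.est_cost <= max_cost and d.est_strength >= min_strength

# Search a (tiny, procedurally generated) design space...
candidates = [generate_design() for _ in range(10_000)]
viable = [d for d in candidates if meets_requirements(d)]

# ...and present the alternatives back, best-performing first, with their data.
for d in sorted(viable, key=lambda d: d.est_strength, reverse=True)[:5]:
    print(f"{d.material:8s} t={d.thickness_mm:4.1f}mm  "
          f"cost={d.est_cost:5.1f}  strength={d.est_strength:5.1f}")
```

The real system evaluates vastly richer geometry and physics, of course, but the shape of the process, constraints in, ranked alternatives out, is the point.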

Another item on Amy’s list was Google’s recently announced RankBrain, the company’s next venture into context-aware platforms, using advances in predictive algorithms to make what you see scarily tailored to who you are. According to Amy, from a 2012 article (this is old news, folks):

“With the adoption of the Siri application, iOS 5 mobile phones (Apple only) can now compare location, interests, intentions, schedule, friends, history, likes, dislikes and more to serve content and answers to questions.”

In other words, there’s a lot more going on than you think when Siri answers a question for you. Well, RankBrain takes this to the next level, according to Bloomberg, which broke the story:

“For the past few months, a “very large fraction” of the millions of queries a second that people type into the company’s search engine have been interpreted by an artificial intelligence system, nicknamed RankBrain…’Machine learning is a core transformative way by which we are rethinking everything we are doing,’ said Google’s Chief Executive Officer Sundar Pichai on the company’s earnings call last week.”
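Google hasn’t said how RankBrain works under the hood, so treat the following as nothing more than a cartoon of “context-aware” ranking: blend a baseline relevance score with signals from a hypothetical user profile. The profile, the results, and the weights are all made up for illustration.

```python
# A simplistic illustration of "context-aware" ranking: blend a query match
# score with signals from a (hypothetical) user profile. Real systems like
# RankBrain are vastly more sophisticated; this only shows the general shape.
user_profile = {"location": "Columbus", "interests": {"design", "ai", "cycling"}}

results = [
    {"title": "AI and the future of design", "topics": {"ai", "design"}, "city": None},
    {"title": "Columbus bike trail map",      "topics": {"cycling"},      "city": "Columbus"},
    {"title": "Celebrity gossip roundup",     "topics": {"celebrity"},    "city": None},
]

def personalized_score(result, base_relevance=1.0):
    score = base_relevance
    score += len(result["topics"] & user_profile["interests"])           # interest overlap
    score += 1.0 if result["city"] == user_profile["location"] else 0.0  # local boost
    return score

# Results are re-ordered around who you are, not just what you typed.
for r in sorted(results, key=personalized_score, reverse=True):
    print(f'{personalized_score(r):.1f}  {r["title"]}')
```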

By the way, so far, most AI predicts much more accurately than we humans do.

If this is moving too fast for you, next week, thanks to Amy, I’ll highlight some applications of AI that will have you squirming.

PS: if you want to follow Amy Webb, go here.


Take your medicine.

 

One of the realities that the characters in The Lightstream Chronicles have come to accept as a mundane part of everyday life is ubiquitous surveillance. The mesh, as it is known, enables visual monitoring of every citizen during every moment of the day: home, work, recreation, and everything in between. What we would consider our most private moments are no longer truly private. AI monitors the digital mesh impressions for irregularities, threats, suspicious behavior, and danger. Through chipset identification (everyone has a chipset), persons, synthetics, and even animals are identified and GPS-located, so the AI knows at whom it is looking. Then decades of big data are brought to bear on gestures, behaviors, voice patterns, and permissions (who is allowed to be where) to determine that all is well, or not so well.

If things are not well, a voice will chime in from one of your active surfaces to alert you to sudden danger or tell you to remain where you are until authorities or help arrive.
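The mesh is fiction, so any code for it is doubly hypothetical, but the decision the story describes, identify the chipset, check whether that identity is permitted in that place, and flag irregular behavior, boils down to something like the sketch below. Every name and threshold here is invented.

```python
# A toy version of the (fictional) mesh's decision loop: identify the chipset,
# check whether that identity is permitted in this location, and flag anything
# the behavior model scores as irregular. Names and thresholds are invented.
from dataclasses import dataclass

@dataclass
class Observation:
    chipset_id: str
    location: str
    behavior_score: float   # 0.0 = ordinary, 1.0 = highly irregular

PERMISSIONS = {
    "HX-1047": {"home-47", "lab-3"},   # a human citizen
    "SYN-220": {"lab-3"},              # a synthetic assigned to the lab
}

def assess(obs, anomaly_threshold=0.8):
    allowed = obs.location in PERMISSIONS.get(obs.chipset_id, set())
    irregular = obs.behavior_score >= anomaly_threshold
    if not allowed or irregular:
        return f"ALERT: {obs.chipset_id} at {obs.location} flagged for review"
    return "all is well"

print(assess(Observation("HX-1047", "home-47", 0.1)))   # all is well
print(assess(Observation("SYN-220", "home-47", 0.2)))   # not permitted here
```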

After many years of this level of surveillance, the citizenry has become used to the idea. The public’s initial reluctance was assuaged by the idea that humans are not watching. Over the years, for most, it has become commonplace to the extent that few ever think about it; some find it reassuring, and others find it titillating. The AI knows when you wake up, go to sleep, have sex, bathe, eat, and everything else that is part of your day. As long as you and those around you obey the laws, the AI doesn’t intervene.

Personal assistants are a different story. Personal assistants, which take the form of synthetic humans, animals, or merely a disembodied voice, can speak audibly or telepathically and serve a different function. This kind of AI knows everything about you at a personal level: motivations, hopes and fears, aspirations, daily health, and disposition. These AI are helpers, as opposed to observers. In the event of danger or a legal breach, the observers take precedence.

Of course this is fiction, but it is also design fiction, so whatever future I have incorporated into the story, it is my task to provide a plausible connection to some existing technology today. There are thousands of examples of ubiquitous surveillance, and they are increasing daily. The personal assistant concept is well on its way. But a news article this week in The Verge about a company called Proteus Digital Health brings the idea of the chipset even closer to reality.

According to Proteus,

“Our products and services provide patients with meaningful health information to help manage their condition and provide physicians unprecedented insight into patient medication-taking and daily health habits.”

Call this the digital pill, the medical version of the insurance tracking device that monitors one’s driving habits. Drive safely and you get lower insurance rates. With the Proteus digital pill system, it’s not that easy to opt out. According to the article,

“Buried inside the pill is a sand-sized grain, one-millimeter square and a third of a millimeter thick, made from copper, magnesium, and silicon. When the pill reaches your stomach, your stomach acids form a circuit…The signal travels as far as a patch stuck to your skin near the navel, which verifies the signal, then transmits it wirelessly to your smartphone, which passes it along to your doctor. There’s now a verifiable record that the pill reached your stomach.”
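Proteus hasn’t published its protocol, so what follows is only my guess at the shape of the data flow The Verge describes: the pill’s sensor emits a signal, the skin patch verifies it, the phone relays it, and a timestamped record accumulates at the doctor’s end. The class names and fields are hypothetical.

```python
# A guess at the data flow described above: ingestible sensor -> skin patch ->
# smartphone -> physician record. All class names and fields are hypothetical;
# Proteus has not published its actual protocol.
from datetime import datetime, timezone

class SkinPatch:
    def verify(self, raw_signal):
        # Pretend verification: accept only signals flagged as valid.
        return raw_signal.get("pill_id") if raw_signal.get("valid") else None

class Smartphone:
    def __init__(self, physician_record):
        self.record = physician_record
    def relay(self, pill_id):
        if pill_id is not None:
            self.record.append({"pill_id": pill_id,
                                "taken_at": datetime.now(timezone.utc).isoformat()})

physician_record = []                 # the "verifiable record" at the doctor's end
phone = Smartphone(physician_record)
patch = SkinPatch()

# The pill reaches the stomach and emits a signal...
signal = {"pill_id": "statin-20mg", "valid": True}
phone.relay(patch.verify(signal))

print(physician_record)               # one verified dose, timestamped
```

Notice where the record ends up: not with you, but with whoever sits at the end of the relay. That is what makes the next paragraph worth worrying about.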

Clever. Except when it becomes mandatory that you take whatever your doctor prescribes, or your insurance rate goes up. Hmm. I think this is a bad idea, but somehow I sense the insurance companies will love it, and Big Pharma even more. A few years ago I was prescribed statins for cholesterol. Today there is a great deal of information available on how bad statins can be for you, and how cholesterol is not the villain it was once thought to be. Fortunately, I got wise to the potential adverse effects of statins and “took myself off of them.” Using a combination of supplements and changes in diet, I brought my cholesterol to healthy levels without drugs. But what if taking myself off the drug were not an option, at least not an option without an expensive penalty? How long before this method is applied to painkillers, vaccinations, or food and drink choices?

“Good morning Mr. Smith, this is the government. We noticed that you did not medicate yourself this morning. By not taking your medication, big data shows that you have a 62% chance of becoming a burden to the state. This is your first reminder…”

What do you think?


We are as gods…

 

“We are as gods and might as well get good at it.”

— Stewart Brand (1968) Whole Earth Catalog

Once again it has been a busy week for future news, so it is difficult to know what to write about. There was an interesting Op-Ed piece in The New York Times about how recreating the human brain will be all but impossible this century, and maybe the next. That would be good news for The Lightstream Chronicles, where the year is 2159 and artificial intelligence is humming along smoothly, brain cloning or not.

A couple of other web articles caught my attention, both from the site Motherboard, which frequently writes about the world of transhumanism. This week there was an interesting article on “the Father of Modern Transhumanism,” Fereidoun M. Esfandiary, who changed his name to FM-2030. FM was a futurist, a UN diplomat, and a writer. In 1970, he began writing about utopian futures that broke free of all the “isms” that were holding back humanity. The author, Ry Marcattilio-McCracken, writes:

“For FM-2030, the ideological left and right were dinosaurs, remnants of an industrial age that had ushered in the modern world but was being quickly replaced and whose vestiges were only holding humanity back. He proposed a new political schema populated by two groups: UpWingers and DownWingers. The former looks to the sky, and into the future. The latter down into the earth, and into the past. UpWingers see a future for humanity beyond this planet. DownWingers seek to preserve it.

FM-2030 was an unabashed UpWinger.”

FM imagined some prescient future scenarios, such as open-source genetic blueprints, the disappearance of the nuclear family, cities replaced with something he called mobile, the end of aging and disease, and the end of anything resembling our current form of politics. The article continues:

“Science and technology serve as the engine behind FM-2030’s “true democracy,” one he believed could chaperone humanity beyond its oppressive, hierarchical, and tribal origins and into the future it deserves.”

FM died of cancer at 69 and was frozen, “but his political ideas have lived on.” Philosophers and sociologists have since replaced the UpWinger and DownWinger terminology with Proactionaries and Precautionaries.

“Proactionaries (UpWingers) argue, most simply, that the risk inherent in any technological or policy venture is unavoidable and, further, often offset by the rewards of progress. They aver that the universe is a fundamentally perilous place.”

Precautionaries are a bit more egalitarian and more risk averse. I would call them the voice of reason, but then I’m probably one of them.

The article sums up by noting how far the transhumanist movement has come, citing the advent of the Transhumanist Party and its current presidential candidate, Zoltan Istvan. Istvan also writes occasionally for Motherboard.

So I jumped to an Istvan article on the same site, “Why I Advocate for Becoming a Machine.” Istvan begins with this simple statement: “Transhumanists want to use technology and science to become more than human.” He explains that our present feeble makeup cannot see 99% of the light spectrum, that we can’t hear like bats, and that we can’t sense energy patterns or vibrations from the Earth’s core. To Istvan, we owe ourselves and humanity a rebuild.

“The reality is that many transhumanists want to change themselves dramatically. They want to replace limbs with mechanical endoskeleton parts so they can throw a football further than a mile. They want to bench press over a ton of weight. They want their metal fingertips to know the exact temperature of their coffee. In fact, they even want to warm or cool down their coffee with a finger tip, which will likely have a heating and cooling function embedded in it…Biology is simply not the best system out there for our species’ evolution. It’s frail, terminal, and needs to be upgraded.”

Istvan makes one good argument: that we are confused about the point at which we are no longer human. For example, he notes that if we had all of our internal organs replaced, many of us would probably find that acceptable. If, however, we replaced our arms and legs with super-charged titanium versions, most would think we had crossed the line.

Here are some of my questions: If, within our current capabilities, we are unable to find happiness and fulfillment, why should we expect to find it when we can throw a football a mile or bench press a ton? What is to keep it from becoming two miles or two tons? Will we be happier or safer when the next Hitler can run faster, jump higher, or live forever? Is the human propensity to foul things up going to suddenly go away when we can live forever?

One could argue that from the beginning of time (literally), we have been obsessed with becoming God. As god-like as we have become over the centuries, it appears that we are still no closer to knowing what it means to be human or to finding meaning in our lives. That seems like something we should get a grip on before moving on to the divine.


The big questions.

 

I was delighted this week to discover Michael Sandel, Professor of Government at Harvard, where he teaches a wildly popular course called “Justice.” In this course, Sandel asks the big questions: Is it right to take from the rich and give to the poor? Is it right to legislate personal safety? Can torture ever be justified? He also asks questions of the digital age. These are the issues with which I wrestle. A recent article in FastCompany highlighted some of these: “Should we try to live forever? Buy our way to the head of the line? Create perfect children?” In a recent blog post, I asked a similar question: Is it a human right to have everything that you want?

Sandel’s questions are about ethics: making his students think about the tough choices we confront every day and the tough issues looming in the future. Some of these are accelerating toward us at an alarming pace. Privacy, artificial intelligence, and biotechnology are also on his list.

To the students at Harvard, Sandel is probably a celebrity. He has a long list of credentials, including TED talks and a best-selling book, Justice: What’s the Right Thing to Do? Nevertheless, I just discovered him this week, and I’m particularly pleased. Sandel is raising the kinds of questions that I try to raise through my design fiction. His class provokes the same discussion and debate that I work toward, not only with design students but also with the public at large.

Most often, we do not like to grapple with these questions. That is one of the challenges of The Lightstream Chronicles. An explicit goal of my story is to get people to think about the future, but thinking is optional. We have the option to view the story as entertainment, purely for its story value, without considering the underlying themes. It is one of the reasons that I have begun to pursue additional, more “guerrilla”-oriented design fictions.

Back to the FastCo article: Sandel agrees that these discussions happen too infrequently. He gives a couple of reasons (sorry for the long quote, but he’s dead-on).

“There are two obstacles to having these conversations. One is that we have very few public venues and occasions for serious discussion of these questions… It’s very hard to have the kind of reasoned discussion of these big ethical questions without creating opportunities to do that.

The second obstacle is that we have a tendency in our public life to shy away from hard, controversial moral questions…We have a fear of moral judgment and moral argument because we know we live in pluralist societies where people disagree about values and ethics. There’s a tendency to believe that our public life could be neutral on those questions.

But I think that’s a mistaken impulse.”

Sandel goes on to suggest that the public has a “great hunger” for these philosophical, moral and ethical questions. I agree. Through my work and research, I hope to provoke some of these discussions and perhaps the public venues and occasions to hold them.


Behind the scenes, The Lightstream Chronicles Episode 136

Episode 136

Clearly, the Techman is out cold and probably has a whopping headache and some tingling extremities. No problem. Keiji-T has equipment for this. Rubbing his fingertips together, Keiji-T can emit a chemical odor akin to our current-day smelling salts. Aromatherapy from the fingertips is a standard feature built into most synths. As we saw back in season 3, Keiji was bragging about the various scents he could conjure up.

In 2159, pheromone implants are also a common human augmentation. A quick trip to the infusion store and you can pick up a nano-endocrine emitter (NEET) that you apply to the skin, where it absorbs through the pores. The emitter syncs with your master chipset and can generate or regulate certain hormonal activity. The most popular varieties are either axillary steroids or aliphatic acids that act as a potent attraction to the opposite sex or as enhancements to intimacy. Many other options are available, including repellent scents, stimulants, and relaxants. They are also an optional feature of the enormously popular fingertip implants (luminous implants) that nearly everyone has. This option, however, is not available on earlier fingertip models like the ones the Techman uses.

You can read more about a host of 2159 technologies and augmentations by visiting the glossary part 1 or part 2.
