Tag Archives: happiness

An AI as President?


Back on May 19th, before I went on holiday, I promised to comment on an article that appeared that week advocating that we would be better off with artificial intelligence (AI) as President of the United States. Joshua Davis authored the piece, Hear Me Out: Let’s Elect an AI as President, for the business section of WIRED online. Let’s start out with a few quotes.

“An artificially intelligent president could be trained to maximize happiness for the most people without infringing on civil liberties.”

“Within a decade, tens of thousands of people will entrust their daily commute—and their safety—to an algorithm, and they’ll do it happily…The increase in human productivity and happiness will be enormous.”

Let’s start with the word happiness. What is that anyway? I’ve seen it around in several discourses about the future: somehow we have to start focusing on human happiness above all things. But what makes me happy and what makes you happy may very well be different things. Then there is the frightening idea that it is the job of government to make us happy! There are a lot of folks out there who think the government should give us a guaranteed income, pay for our healthcare, and now, apparently, also make us happy. If you haven’t noticed from my previous blogs, I am not a progressive. If you believe that government should undertake the happy challenge, you had better hope that its idea of happiness coincides with your own. Gerd Leonhard, a futurist whose work I respect, says that there are two types of happiness: the first is hedonic (pleasure), which tends to be temporary; the other is eudaimonic happiness, which he defines as human flourishing.1 I prefer the latter, as it is likely to be more meaningful. Meaning is rather crucial to well-being and purpose in life. I believe that we should be responsible for our own happiness. God help us if we leave it up to a machine.

This brings me to my next issue with this insane idea. Davis suggests that by simply not driving, there will be an enormous increase in human productivity and happiness. According to the website Overflow Data,

“Of the 139,786,639 working individuals in the US, 7,000,722, or about 5.01%, use public transit to get to work according to the 2013 American Communities Survey.”
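For what it’s worth, the quoted percentage is consistent with the raw counts; here is a quick sanity check using the figures exactly as cited:

```python
# Verify the Overflow Data figure quoted above: the share of US
# workers who used public transit to get to work (2013 ACS).
transit_commuters = 7_000_722
workers = 139_786_639

share = transit_commuters / workers * 100
print(f"{share:.2f}%")  # 5.01%
```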

Are those 7 million working individuals who don’t drive happier and more productive? The survey should have asked, but I’m betting the answer is no. Davis also assumes that everyone will be able to afford an autonomous vehicle. Maybe providing every American with an autonomous vehicle is also the job of the government.

Where I agree with Davis is that we will probably abdicate our daily commute to an algorithm and do it happily. Maybe this is the most disturbing part of his argument. As I am fond of saying, we are sponges for technology, and we often adopt new technology without so much as a thought toward the broader ramifications of what it means to our humanity.

There are sober people out there advocating that we must start to abdicate our decision-making to algorithms because we have too many decisions to make. They are concerned that the current state of affairs is simply too painful for humankind. If you dig into the rationale that these experts are using, many of them are motivated by commerce. Already Google and Facebook and the algorithms of a dozen different apps are telling you what you should buy, where you should eat, who you should “friend” and, in some cases, what you should think. They give you news (real or fake), and they tell you this is what will make you happy. Is it working? Agendas are everywhere, but very few of them have you in the center.

As part of his rationale, Davis cites the proven ability of AI to beat the world’s Go champions over and over and over again, and to find melanomas better than board-certified dermatologists.

“It won’t be long before an AI is sophisticated enough to implement a core set of beliefs in ways that reflect changes in the world. In other words, the time is coming when AIs will have better judgment than most politicians.”

That seems like grounds to elect one as President, right? In fact, it is just another way for us to take our eye off the ball, to subordinate our autonomy to more powerful forces in the belief that technology will save us and make us happier.

Back to my previous point, that’s what is so frightening. It is precisely the kind of argument that people buy into. What if the new AI President decides that we will all be happier if we’re sedated, and then using executive powers makes it law? Forget checks and balances, since who else in government could win an argument against an all-knowing AI? How much power will the new AI President give to other algorithms, bots, and machines?

If we are willing to give up the process of purposeful work to make a living wage in exchange for a guaranteed income, to subordinate our decision-making to have “less to think about,” to abandon reality for a “good enough” simulation, and believe that this new AI will be free of the special interests who think they control it, then get ready for the future.

1. Leonhard, Gerd. Technology vs. Humanity: The Coming Clash Between Man and Machine. United Kingdom: Fast Future, 2016. p. 112. Print.


Transcendent Plan


One of my oft-quoted sources on future technology is Ray Kurzweil. A brilliant technologist, inventor, and futurist, Kurzweil seems to see it all very clearly, almost as though he were at the helm personally. Some of Kurzweil’s theses are crystal clear to me, such as an imminent approach toward the Singularity in a series of innocuous, ‘seemingly benign’ steps. I also agree with his Law of Accelerating Returns,1 which posits that technology advances exponentially. In a recent interview with the Silicon Valley Business Journal, he nicely illustrated that idea.

“Exponentials are quite seductive because they start out sub-linear. We sequenced one ten-thousandth of the human genome in 1990 and two ten-thousandths in 1991. Halfway through the genome project, 7 ½ years into it, we had sequenced 1 percent. People said, “This is a failure. Seven years, 1 percent. It’s going to take 700 years, just like we said.” Seven years later it was done, because 1 percent is only seven doublings from 100 percent — and it had been doubling every year. We don’t think in these exponential terms. And that exponential growth has continued since the end of the genome project. These technologies are now thousands of times more powerful than they were 13 years ago, when the genome project was completed.”

Kurzweil says the same kinds of leaps are approaching for solar power, resources, disease, and longevity. Our tendency to think linearly instead of exponentially means that we can deceive ourselves into believing that technologies that ‘just aren’t there yet’ are ‘a long way off.’ In reality, they may be right around the corner.
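Kurzweil’s genome arithmetic is easy to verify: at one doubling per year, 1 percent is only seven doublings away from passing 100 percent. A minimal sketch:

```python
# Kurzweil's point: sequencing 1% of the genome, with capacity
# doubling yearly, finishes in about seven more years, not 700.
pct = 1.0      # percent of the genome sequenced at the 7.5-year mark
years = 0
while pct < 100:
    pct *= 2   # capacity doubles each year
    years += 1
print(years, pct)  # 7 doublings, ending at 128.0 (past 100%)
```

Linear extrapolation from the same starting point (1 percent after 7.5 years) predicts roughly 700 more years, which is exactly the mis-estimate Kurzweil describes.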

I’m not as solid in my affirmation of Kurzweil (and others) when it comes to some of his other predictions. Without reading too much between the lines, you can see that there is a philosophy helping to drive Kurzweil: namely, he doesn’t want to die. Of course, who does? But his is a quest to deny death on a techno-transcendental level. Christianity holds that eternal life awaits the believer in Jesus Christ; other religions are satisfied that our atoms return to the greater cosmos, or that reincarnation is the next step. It would appear that Kurzweil has no time for faith. His bet is on science and technology. He states,

“I think we’re very much on track to have human-level AI by 2029, which has been my consistent prediction for 20 years, and then to be able to send nanobots into the brain in the 2030s and connect our biological neocortex to synthetic neocortex in the cloud.”

In the article mentioned above, Kurzweil states that his quest to live forever is not just about the 200-plus supplements that he takes daily. He refers to those as “Bridge One.” Bridge One buys us time until technology catches up. Then “Bridge Two,” the “biotechnology revolution,” takes over and radically extends our lives. If all else fails, our minds will be uploaded to the Cloud (which will have evolved into a synthetic neocortex), though it remains to be seen whether the sum total of a mind also equals consciousness in some form.

For many who struggle with the idea of death, religious or not, I wonder whether, when we dissect it, it is not the fear of physical decrepitude that scares us but the loss of consciousness: that unique ability of humans to comprehend their world, share language and emotions, and create and contemplate.

I would posit that it is indeed that consciousness that makes us human (along with the sense of injustice at the thought that we might lose it). It would seem that transcendence is in order. In one scenario this transcendence comes from God; in another, ‘we are as Gods.’2

So finally, I wonder whether all of these small, exponentially replicating innovations—accumulating to the point where we access Cloud data only by thinking, communicate via telepathy, or write symphonies for eternity—will make us more or less human. If we decide that we are no happier, no more content or fulfilled, is there any going back?

Seeing as it might be right around the corner, we might want to think about these things now rather than later.


1. Kurzweil, R. (2001). The Law of Accelerating Returns. KurzweilAI. Available at: http://www.kurzweilai.net/the-law-of-accelerating-returns (Accessed: October 10, 2015).
2. Brand, Stewart. “We Are as Gods.” The Whole Earth Catalog, September 1968, 1-58. Accessed May 04, 2015. http://www.wholeearth.com/issue/1010/article/195/we.are.as.gods.

We are as gods…


“We are as gods and might as well get good at it.”

— Stewart Brand (1968) Whole Earth Catalog

Once again it has been a busy week for future news, so it is difficult to know what to write about. There was an interesting op-ed piece in The New York Times about how recreating the human brain will be all but impossible this century, and maybe the next. That would be good news for The Lightstream Chronicles, where the year is 2159 and artificial intelligence is humming along smoothly, brain cloning or not.

A couple of other web articles caught my attention; both came from the site Motherboard, which frequently writes about the world of Transhumanism. This week there was an interesting article on “the Father of Modern Transhumanism,” Fereidoun M. Esfandiary, who changed his name to FM-2030. FM was a futurist, a UN diplomat, and a writer. In 1970, he began writing about utopian futures that broke free of all the “isms” that were holding back humanity. The author, Ry Marcattilio-McCracken, writes:

“For FM-2030, the ideological left and right were dinosaurs, remnants of an industrial age that had ushered in the modern world but was being quickly replaced and whose vestiges were only holding humanity back. He proposed a new political schema populated by two groups: UpWingers and DownWingers. The former looks to the sky, and into the future. The latter down into the earth, and into the past. UpWingers see a future for humanity beyond this planet. DownWingers seek to preserve it.

FM-2030 was an unabashed UpWinger.”

FM imagined some prescient future scenarios, such as open-source genetic blueprints, the disappearance of the nuclear family, and cities replaced with something he called “mobile,” as well as the end of aging and disease and of anything resembling our current form of politics. The article continues:

“Science and technology serve as the engine behind FM-2030’s “true democracy,” one he believed could chaperone humanity beyond its oppressive, hierarchical, and tribal origins and into the future it deserves.”

FM died of cancer at 69 and was frozen, “but his political ideas have lived on.” Philosophers and sociologists have since replaced the UpWinger/DownWinger terminology with Proactionaries and Precautionaries.

“Proactionaries (UpWingers) argue, most simply, that the risk inherent in any technological or policy venture is unavoidable and, further, often offset by the rewards of progress. They aver that the universe is a fundamentally perilous place.”

Precautionaries are a bit more egalitarian and more risk averse. I would call them the voice of reason, but then I’m probably one of them.

The article sums up by noting how far the transhumanist movement has come, citing the advent of the Transhumanist Party and its current presidential candidate, Zoltan Istvan. Istvan also writes occasionally for Motherboard.

So I jumped to an Istvan article on the same site, Why I Advocate for Becoming a Machine. Istvan begins with this simple statement: “Transhumanists want to use technology and science to become more than human.” He explains that our present, feeble makeup cannot see 99 percent of the light spectrum; we can’t hear like bats, nor sense energy patterns or vibrations from the Earth’s core. To Istvan, that means we owe ourselves and humanity a rebuild.

“The reality is that many transhumanists want to change themselves dramatically. They want to replace limbs with mechanical endoskeleton parts so they can throw a football further than a mile. They want to bench press over a ton of weight. They want their metal fingertips to know the exact temperature of their coffee. In fact, they even want to warm or cool down their coffee with a finger tip, which will likely have a heating and cooling function embedded in it…Biology is simply not the best system out there for our species’ evolution. It’s frail, terminal, and needs to be upgraded.”

Istvan makes one good argument: that we are confused about the point at which we become no longer human. For example, he notes that if we had all of our internal organs replaced, many of us would probably find that acceptable. If, however, we replaced our arms and/or legs with super-charged titanium versions, most would think we were crossing the line.

Here are some of my questions: If, within our current capabilities, we are unable to find happiness and fulfillment, then why should we expect to find it when we can throw a football a mile or bench press a ton? What is to keep it from becoming two miles or two tons? Will we be happier or safer when the next Hitler can run faster, jump higher, or live forever? Is the human propensity to foul things up suddenly going to go away when we can live forever?

One could argue that from the beginning of time (literally) we have been obsessed with becoming God. As god-like as we have become over the centuries, it appears that we are still no closer to knowing what it means to be human or to finding meaning in our lives. That seems like something we should get a grip on before moving on to the divine.
