The Robo-Apocalypse. Part 2.

Last week I talked about how the South Koreans have developed a .50 caliber-toting, nearly autonomous weapon system and have sold a few dozen around the world. This week I feel obligated to finish up on my promise of the drone with a pistol. I discovered this from a WIRED article, a tongue-in-cheek piece that analyzed a YouTube video and concluded that the pistol-packing drone is probably real. I can’t think of anyone who doesn’t believe this is a really bad idea, including the author of the piece. Nevertheless, if we were to brainstorm a list of unintended consequences of DIY drone technology, after a few minutes the list would be a long one.

This week FastCo reported that NASA held a little get-together with about 1,000 invited guests from the drone industry to talk about a plan to manage the traffic when, as the agency believes, “every home will have a drone, and every home will serve as an airport at some point in the future.” NASA’s plan takes things slowly. Still, the agency predicts that we will be able to get our packages from Amazon and borrow a cup of sugar from Aunt Gladys down the street, even in populated areas, by 2019.

Someone taking action is good news as we work to fix another poorly conceived technology that quickly went rogue. Unfortunately, it does nothing about the guy who wants to shoot down the Amazon drone for sport (or anyone/anything else for that matter).

On the topic of bad ideas, this week the Future of Life Institute, a research organization out of Boston, issued an open letter warning the world that autonomous weapons powered by artificial intelligence (AI) are imminent. The reasonable concern here is that a computer will do the kill-or-not-kill, bomb-or-not-bomb thinking, without the human fail-safe. Here’s an excerpt from the letter:

“Unlike nuclear weapons, they require no costly or hard-to-obtain raw materials, so they will become ubiquitous and cheap for all significant military powers to mass-produce. It will only be a matter of time until they appear on the black market and in the hands of terrorists, dictators wishing to better control their populace, warlords wishing to perpetrate ethnic cleansing, etc. Autonomous weapons are ideal for tasks such as assassinations, destabilizing nations, subduing populations and selectively killing a particular ethnic group. We therefore believe that a military AI arms race would not be beneficial for humanity. There are many ways in which AI can make battlefields safer for humans, especially civilians, without creating new tools for killing people.” [Emphasis mine.]

The letter is short. You should read it. For once we have an example of those smart people I alluded to last week, the ones with compassion and vision. For virtually every “promising” new technology, from the seemingly good to the undeniably dangerous, we need people who can foresee the unintended consequences of one-sided promises. Designers, scientists, and engineers are prime candidates to look into the future and wave these red flags. Then the rest of the world needs to pay attention.

Once again, however, the technology is here, and whether it is legal or illegal, banned or not banned, the cat is out of the bag. It is kind of like a nuclear explosion. Some things you just can’t take back.
