I’ve asked this question before, but this is an entirely new answer.
We may never know.
Based on a plethora of recent media on artificial intelligence (AI), not only are there a lot of people working on it, but many of those on the leading edge are also concerned with what they don’t understand in the midst of their ominous new creation.
Amazon, DeepMind/Google, Facebook, IBM, and Microsoft teamed up to form the Partnership on AI.(1) Their charter talks about sharing information and being responsible. It includes all of the proper buzz words for a technosocial contract.
“…transparency, security and privacy, values and ethics, collaboration between people and AI systems, interoperability of systems, and of the trustworthiness, reliability, containment, safety, and robustness of the technology.”(2)
They are not alone in this concern, as the EU(3) is also working on AI guidelines and a set of rules on robotics.
Some of what makes them all a bit nervous is the way AI learns: the complexity of neural networks and the inability to go back and see how an AI arrived at its conclusion. In other words, how do we know that its recommendation is the right one? Adding to that list is the discovery that AIs working together can create their own languages, languages we don’t speak or understand. In one case, at Facebook, researchers saw this happening and stopped it.
For me, it’s a little disconcerting that Facebook, a social media app, is one of the corporations leading the charge and the research into AI. That’s a broad topic for another blog, but their underlying objective is to market to you. That’s how they make their money.
To be fair, that is at least part of the motivation for Amazon, DeepMind/Google, IBM, and Microsoft as well. The better they know you, the more stuff they can sell you. Of course, there are also enormous benefits to medical research. Such advantages are almost always what these companies talk about first: AI will save your life, cure cancer, and prevent crime.
So, it is somewhat encouraging to see that these companies at the forefront of AI breakthroughs are also acutely aware of how AI could go terribly wrong. Hence we see wording from the Partnership on AI like
“…Seek out, support, celebrate, and highlight aspirational efforts in AI for socially benevolent applications.”
The key word here is benevolent. But the clear objective is to keep the dialog positive, and
“Create and support opportunities for AI researchers and key stakeholders, including people in technology, law, policy, government, civil liberties, and the greater public, to communicate directly and openly with each other about relevant issues to AI and its influences on people and society.”(2)
I’m reading between the lines, but it seems like the issue of how AI will influence people and society is more of an obligatory statement intended to demonstrate compassionate concern. It’s coming from the people who see huge commercial benefit from the public being at ease with the coming onslaught of AI intrusion.
In their long list of goals, the “influences” on society don’t seem to be a priority. For example, should they discover that a particular AI has a detrimental effect on people, that it makes their civil liberties less secure, would they stop? Probably not.
At the rate that these companies are racing toward AI superiority, the unintended consequences for our society are not a high priority. While these groups are making sure that AI does not decide to kill us, I wonder whether they are also looking at how AI will change us, and whether those changes are a good thing.
Other pertinent links: