For some, the great leaps being made in the field of artificial intelligence will only benefit mankind. But the possibility of thinking machines has others worried.
By Zeid Nasser
Artificial Intelligence (AI) is all around us, whether in the form of Apple’s Siri personal assistant, Facebook’s newsfeed algorithm, or Google’s uncanny ability to find exactly what we’ve been searching for even if we didn’t make it entirely clear.
Until recently, AI was the preserve of science fiction writers and ‘futurists’, who often worried about self-aware machines rising up against their makers. Since the 1960s, this dark theme has been explored in countless movies, including 2001: A Space Odyssey, The Terminator, and Blade Runner.
But now even eminent scientists like Stephen Hawking are warning of AI’s potential dangers. “The development of full artificial intelligence could spell the end of the human race,” he said. “It will take off on its own and redesign itself at an ever increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete and would be superseded.”
To bring a more sober approach to this conversation, we need to understand why advanced AI is now a reality. Kevin Kelly, the founding editor of Wired magazine, credits three scientific breakthroughs: affordable parallel computation, big data, and advanced algorithms. Each deserves a closer look.
Just as human thinking requires billions of neurons working simultaneously, creating synchronous waves of computation in the brain, a similar parallel process is simulated in ‘neural networks’, the primary architecture of AI software. Each node of a neural network loosely imitates a neuron in the brain, interacting with its neighbors to make sense of the signals it receives.
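To make that concrete, here is a minimal sketch in Python (using NumPy) of signals flowing through such a network. The layer sizes and random weights are illustrative assumptions, not any real system:

```python
import numpy as np

def sigmoid(x):
    # Squash a node's combined input into a 0-1 'activation',
    # loosely analogous to how strongly a neuron fires.
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)

# A toy network: 4 input nodes -> 3 hidden nodes -> 1 output node.
# Real networks have millions of weights; these are random placeholders.
w_hidden = rng.normal(size=(4, 3))
w_output = rng.normal(size=(3, 1))

signal = np.array([0.2, 0.9, 0.1, 0.5])  # signals arriving at the input layer

# Each matrix multiplication computes a whole layer of nodes at once,
# exactly the kind of parallel arithmetic that GPUs excel at.
hidden = sigmoid(signal @ w_hidden)
output = sigmoid(hidden @ w_output)

print(output)  # the untrained network's 'opinion' about the input
```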
Since the dawn of computing, typical processors could only compute one thing at a time. That began to change when a new kind of chip, called a Graphics Processing Unit (GPU), was devised for the intensely visual and parallel demands of video games, in which millions of pixels had to be recalculated many times a second. Five years ago, scientists realized that GPU chips could run neural networks in parallel.
Today, neural nets running on GPUs are routinely used by cloud-computing companies. That is why Facebook can now develop AI features such as identifying potentially embarrassing photos and asking users whether they really want to post them on the social network, a capability Facebook attributes to its “deep learning” system.
This brings us to the next major development: big data. Intelligence has to be taught. The human brain needs to see examples before it can understand things and distinguish between them. The same applies to artificial minds: that’s why supercomputers had to play at least a thousand games of chess before they became good at the game. Now, with the vast amounts of data we have collected, ‘feeding intelligence’ to AI systems has become much easier.
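As a toy illustration of teaching by example, the sketch below (plain Python and NumPy; the data, labels, and learning rate are invented for the demonstration) shows a single artificial neuron learning to tell two kinds of points apart simply by being shown labelled examples and corrected after each mistake, the same loop that big data now drives at enormous scale:

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented training data: 2-D points labelled 0 or 1.
points = rng.normal(size=(200, 2))
labels = (points[:, 0] + points[:, 1] > 0).astype(float)  # the 'truth' to learn

weights = np.zeros(2)
bias = 0.0
learning_rate = 0.1

# Show every example, nudge the weights after each mistake, repeat.
for epoch in range(20):
    for x, y in zip(points, labels):
        prediction = 1.0 if x @ weights + bias > 0 else 0.0
        error = y - prediction
        weights += learning_rate * error * x
        bias += learning_rate * error

accuracy = np.mean([(1.0 if x @ weights + bias > 0 else 0.0) == y
                    for x, y in zip(points, labels)])
print(f"learned to classify {accuracy:.0%} of the examples correctly")
```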
It’s frightening to consider the self-learning potential of a machine when we command it to sift through terabytes of data, studying two decades’ worth of search results chronicling thousands of years of human existence. The machine may end up knowing us better than we know ourselves.
Finally, there’s the advancement in algorithms to consider. Deep-learning algorithms accelerated enormously when they were ported to GPUs. On its own, the code of deep learning is insufficient to generate complex logical thinking, but combined with massive databases of information it is an essential component of every AI functioning today. Examples include Amazon’s product recommendations, YouTube’s suggested videos that match your taste, and the relevance of Google’s search engine results.
On the subject of Google’s future strategy: every time you search for something and then click on the most relevant result, you are actually teaching Google’s AI. That’s why Google’s biggest product will soon be an AI system, not its search engine.
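One simplified way to picture that feedback loop (a sketch only; Google’s real ranking systems are proprietary and vastly more sophisticated) is a counter that treats every click as a lesson about which result best answers a query:

```python
from collections import defaultdict

# clicks[query][result] counts how often users chose each result.
clicks = defaultdict(lambda: defaultdict(int))

def record_click(query, result):
    # Every click is an implicit label: "this result answered this query."
    clicks[query][result] += 1

def rank(query, candidates):
    # Results users actually clicked float to the top over time.
    return sorted(candidates, key=lambda r: clicks[query][r], reverse=True)

record_click("jaguar", "jaguar-the-animal.example")
record_click("jaguar", "jaguar-the-animal.example")
record_click("jaguar", "jaguar-cars.example")

print(rank("jaguar", ["jaguar-cars.example", "jaguar-the-animal.example"]))
# -> ['jaguar-the-animal.example', 'jaguar-cars.example']
```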
Another prominent figure shares Hawking’s worry. Elon Musk, cofounder of PayPal, chief executive of Tesla and of rocket-maker SpaceX, warns: “AI is our biggest existential threat. So we need to be very careful. I’m increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don’t do something very foolish.”
Countering this concern is the more positive view taken by some experts that AI systems can be taught compassion and cooperation. Why, they ask, would an emotionally mature machine attack us?
Perhaps the real fear is that we, as the teachers of machines, will pass on our own self-destructive tendencies.