A Beginner’s Guide to Artificial Intelligence

What’s AI?

Confused about what artificial intelligence is? You’re in good company.

When Stanford University released its first report for the One Hundred Year Study, a long-term look into the future of artificial intelligence, the panel acknowledged: “there is no clear definition of AI (it isn’t any one thing).”

And while this ambiguity will make it hard to regulate, the authors contend, those same vagaries might also help the field grow. “The lack of a precise, universally accepted definition of AI,” they wrote, “probably has helped the field to grow, blossom, and advance at an ever-accelerating pace.”

Of course, a definition would be helpful. So the authors look to Nils J. Nilsson’s in his book The Quest for Artificial Intelligence: A History of Ideas and Achievements.

“Artificial intelligence is that activity devoted to making machines intelligent, and intelligence is that quality that enables an entity to function appropriately and with foresight in its environment,” he wrote.

Still, the definition lacks much in the way of specificity – even an electronic calculator could be said to function "appropriately and with foresight" – so it's best to look at what AI is doing if we want to understand what it is.

What’s AI Like Today?
The computers haven’t taken over the world, but artificial intelligence is already part of our everyday lives.

Although most of us haven’t taken a ride in a self-driving car, we benefit from AI through apps like Uber and Lyft that use algorithms to connect drivers to passengers.

We don’t have robotic assistants yet, but we use AI-assisted software like Siri and Google Now.

AI is also used in e-commerce, customer service, and financial services.

IBM’s cognitive computing system, Watson, is best known as a Jeopardy! winner, but Watson is also used for day-to-day data analytics in marketing and for research and diagnostic assistance to physicians at hospitals.

Google’s AI made news by beating the world Go champion, but the same technology is also used to answer email in Inbox, identify photos in Google Photos, and schedule appointments in G Suite, formerly Google Apps for Work.

What’s the Future of AI?
The International Data Corporation (IDC) predicts the market for AI will grow to $16.5 billion by 2019 – up from $1.6 billion in 2015. Experts expect more automation and more reasons for businesses to use the technology.

Several industry leaders recently joined forces to determine best practices and alleviate public concerns. In September, Facebook, Amazon, Google, IBM and Microsoft announced the Partnership on AI.

“The power of AI is in the enterprise sector,” Francesca Rossi, an AI ethics researcher at IBM Research, said. “For society at large to get the benefits of AI, we first have to trust it.”

Separately, Microsoft has announced the creation of its own artificial intelligence research group.

As AI grows, concerns over privacy, surveillance, and bias will only intensify.

Jobs – particularly low-skilled positions – might go to software or robots run by artificial intelligence. And it’s safe to assume governments will struggle to keep up with AI’s increasing prevalence in society.

But the benefits of AI far outweigh the disadvantages, the Stanford University panel authors argue, and many others agree.

“If society approaches these technologies primarily with fear and suspicion, missteps that slow AI’s development or drive it underground will result, impeding important work on ensuring the safety and reliability of AI technologies,” the study’s authors write.

“On the other hand, if society approaches AI with a more open mind, the technologies emerging from the field could profoundly transform society for the better in the coming decades.”

Still, if you’re looking for a doomsday scenario, you don’t have to rely on the cranks and conspiracy-minded. No less an authority than Elon Musk is on the record with his concern that increasingly powerful AI could prove apocalyptic for humanity.

Key Terms
Turing Test: a test for intelligence in a computer, proposed by Alan Turing in 1950, in which a computer is judged to think like a human if a human evaluator cannot reliably tell its responses apart from a person’s.

Machine Learning: a subfield of artificial intelligence with the goal of enabling computers to learn from data with minimal programming.

Deep Learning: a branch of machine learning that carries out the learning process using neural networks with many layers.

Neural Network or Artificial Neural Network (ANN): a computing system loosely modeled on the networks and layers of neurons inside a human brain.
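To make the last two terms a little more concrete, here is a minimal sketch (in plain Python, with no libraries) of the simplest possible artificial neuron – a perceptron – learning the logical OR function from example data instead of being programmed with the rule. The training values and loop counts here are illustrative choices, not part of any standard:

```python
# One artificial neuron (a perceptron) learns the logical OR function
# from examples, rather than being explicitly programmed with the rule.

def step(weighted_sum):
    """Activation function: the neuron 'fires' (1) if its input crosses zero."""
    return 1 if weighted_sum >= 0 else 0

# Training data: pairs of inputs and the desired output (the OR truth table).
examples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

weights = [0.0, 0.0]   # one weight per input, adjusted during learning
bias = 0.0
learning_rate = 0.1

# Repeatedly show the examples and nudge the weights toward the correct
# answers whenever the neuron is wrong (the perceptron learning rule).
for _ in range(20):
    for (x1, x2), target in examples:
        prediction = step(weights[0] * x1 + weights[1] * x2 + bias)
        error = target - prediction
        weights[0] += learning_rate * error * x1
        weights[1] += learning_rate * error * x2
        bias += learning_rate * error

# After training, the neuron reproduces the OR table it was never told.
for (x1, x2), target in examples:
    print((x1, x2), "->", step(weights[0] * x1 + weights[1] * x2 + bias))
```

Real systems like the ones described above use networks of millions of such units stacked in layers, but the core idea – adjusting weights to reduce error on data – is the same.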