Daily Lobo
Thursday, December 18, 2014

The Peer Review

Artificial intelligence is on the rise. Is our human society ready for it?

Remember Clippy, Microsoft Word’s talking paperclip assistant? Let me guess – even after all these years you still feel an inexplicable hatred for that intrusive, know-it-all piece of metal? His contemptuous attitude aside, we can all agree that Clippy’s redundant advice contributed nothing to our word processor experience.

Thankfully, Clippy won’t be back to haunt our monitors anytime soon. Since Clippy’s demise at the turn of the century, artificial intelligence (AI) research has ditched the anthropomorphic office supplies for some truly impactful progress.

From smartphones to military drones, sophisticated AI technology is now shaping society in many ways we’re largely unaware of. For example, when we longingly perform a Google search for ‘What should I do over UNM spring break,’ the search engine draws upon decades of semantic and machine learning research to deliver a customized, relevant result.

Granted, a more complex search still sometimes returns nothing better than a disappointing link to a barely literate Yahoo Answers post. But Google’s recent high-profile acquisition of several pioneering AI companies means that things are about to get much more interesting for the web consumer. Some experts predict that AI research will grow exponentially in the coming years, bringing us face to screen with highly intelligent, engaging computers that can better understand what we want.

Does this spook you a little bit? You’re not alone. But before we decide whether or not we’re all doomed for the robot apocalypse, let’s first take a step back and understand the basics of artificial intelligence.

Artificial intelligence is the capacity of a machine or piece of software to analyze its environment and act appropriately in order to succeed in it. Artificial intelligence is thus a highly multidimensional phenomenon; at its simplest, it can be just a computer program that solves mazes.
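That simplest case is easy to make concrete. The sketch below (the grid, start, and goal are invented for illustration) shows an agent that “senses” its environment, a small text maze, and acts to reach a goal, using breadth-first search to find a shortest path:

```python
from collections import deque

def solve_maze(grid, start, goal):
    """Breadth-first search: the 'agent' avoids walls ('#') and
    returns a shortest path of (row, col) cells, or None."""
    rows, cols = len(grid), len(grid[0])
    frontier = deque([[start]])
    seen = {start}
    while frontier:
        path = frontier.popleft()
        r, c = path[-1]
        if (r, c) == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] != "#" and (nr, nc) not in seen):
                seen.add((nr, nc))
                frontier.append(path + [(nr, nc)])
    return None  # no route to the goal

maze = ["..#",
        ".##",
        "..."]
print(solve_maze(maze, (0, 0), (2, 2)))
# → [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2)]
```

This is “AI” only in the weakest sense, which is exactly the column’s point: the behavior is entirely determined by rules the programmer wrote down.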

However, decades of AI research were built on the prediction that computers would one day achieve ‘strong AI’, or levels of intelligence hitherto attributed exclusively to humans. In order for this to occur, computer scientists are modeling computer understanding after our own.

Conventional computer systems were restricted to what they were taught, that is, to the information programmed into them. They recognized objects such as words or pictures only in accordance with a fixed set of rules.

This approach broke down when the computer failed to identify an object, say, a photo of a butterfly, because the image wasn’t quite similar enough to the butterfly pictures in its database. A human, by contrast, would recognize a butterfly regardless of whether we’ve seen a very similar one before. Human cognition is an entirely different sort of processing: we have the innate ability to learn and refine our mental category of ‘butterfly’.
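The contrast can be sketched in a few lines. Here the “images” are stand-in feature vectors (wing span and body length, purely illustrative numbers): a rule-based matcher accepts only exact entries from its database, while even the simplest learning method, nearest-neighbor comparison, generalizes to a butterfly it has never seen:

```python
import math

# Toy database of labeled examples; each example is an invented
# (wing_span, body_length) feature vector, not real measurements.
DATABASE = {
    "butterfly": [(9.0, 2.0), (8.5, 1.8)],
    "beetle":    [(2.0, 1.5), (2.5, 1.7)],
}

def rule_based(features):
    """Old approach: recognize only what was explicitly programmed in."""
    for label, examples in DATABASE.items():
        if features in examples:
            return label
    return "unknown"

def nearest_neighbor(features):
    """Learning approach: generalize by similarity to stored examples."""
    best_label, best_dist = None, float("inf")
    for label, examples in DATABASE.items():
        for ex in examples:
            dist = math.dist(features, ex)
            if dist < best_dist:
                best_label, best_dist = label, dist
    return best_label

new_photo = (8.8, 1.9)              # a butterfly the system has never seen
print(rule_based(new_photo))        # → unknown (no exact match in the rules)
print(nearest_neighbor(new_photo))  # → butterfly (close to known examples)
```

Real image recognition is vastly more complicated, but the moral is the same: systems that compare and generalize outperform systems that only match.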

Now, the latest computers are beginning to learn in a similar way. Researchers at Google’s secretive X Lab and others across the country are developing computer systems containing large groups of neuron-like ‘neuromorphic processors’ adjoined by thousands of synapse-like wires. These complex circuits are not programmed, but rather learn their own correlations from external data. Just like in humans, as the circuit consumes more data it grows more advanced and precise.
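A full neuromorphic system is far beyond a few lines of code, but the core idea, connection strengths that are adjusted by data rather than programmed, dates back to the classic perceptron. In this sketch (the training data and learning rate are illustrative, not anything from the X Lab’s work), the weights play the role of synapses and change only when the unit’s prediction disagrees with an example:

```python
import random

def train_perceptron(data, epochs=20, lr=0.1):
    """Weights act like synapses: not programmed, but nudged
    whenever the unit's prediction disagrees with the data."""
    random.seed(0)  # fixed seed so the run is reproducible
    w = [random.uniform(-0.5, 0.5) for _ in range(len(data[0][0]))]
    b = 0.0
    for _ in range(epochs):
        for x, target in data:
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = target - pred  # nonzero only when the unit is wrong
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Toy task: learn the logical AND function from examples alone.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
for x, target in data:
    pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
    print(x, "->", pred)
```

Nobody told the unit what AND means; the rule emerges from repeated exposure to data. As the article notes, more data makes such circuits more precise, which is exactly what happens as training examples accumulate.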

In 2012, Stanford Associate Professor Andrew Ng led a team at Google’s X lab to introduce a large neural network to YouTube. He told NPR, “The point was to have a software, maybe a little simulated baby brain, if you will, wake up, not knowing anything, watch YouTube video for several days and figure out what it learns.”

And it learned about the best of what YouTube has to offer: cats. The computer network was able to successfully identify heterogeneous images of cats with an unprecedented 75% accuracy, as measured by a consistent signal in an isolated part of the circuit.

As with every major breakthrough in computer science, the philosophical and technical critics abound. Stuart Russell and Peter Norvig, authors of Artificial Intelligence: A Modern Approach, counter that modeling computers after neural systems is a limited approach that could fail to provide us with strong AI. They write, “the quest for ‘artificial flight’ succeeded when the Wright brothers and others stopped imitating birds and started … learning about aerodynamics.”

They’re absolutely justified in asserting that computers and brains will each fall short of their full potential if one is forced to work exactly like the other. But the rapid development of AI technology is concentrated in fields that require a more ‘human’ touch, like Internet searches. In these cases, a cerebral model may be just what computer science needs in order to advance.

As artificial intelligence develops with the potential for seriously human-like machines, we laypeople are still left asking ourselves whether the robot apocalypse is impending. The short answer: the machines probably won’t revolt and conquer us. But even in a best-case scenario, will we end up outsmarting ourselves?

AI technology is poised to begin transforming how major industries run their businesses: call centers could become automated operations, for example, and AI could alleviate the physician shortage by automating the more tedious parts of clinical practice.

Naturally, we fear that new technology, especially technology modeled after humans, will one day render us entirely obsolete. MIT economist Erik Brynjolfsson, however, is optimistic about the impact of technology on the economy.

“Technology has always been creating and destroying jobs,” he explained in an interview with Scientific American. “But [in the past] it happened over a long period of time, and people could find new kinds of work to do. This time it’s happening faster.”

Brynjolfsson says that we simply need to identify the ways in which humans remain more valuable than machines, and determine how humans and machines can collaborate to maximize productivity.

His point is important because technology is only useful insofar as it improves our quality of life. We can’t expect a higher quality of life without also improving how we work; if we creatively adapt how we educate our society, we can succeed despite the demands of a more intelligent world.

In the meantime, we know that the emergence of artificial intelligence offers us as many challenges as solutions. At the very least, let’s be grateful that we aren’t adjusting to a world full of Clippy the Office Assistants.