New Research Teaches Computers To Learn Like Humans


HAL 9000 from 2001: A Space Odyssey is almost here.

In another major step forward in artificial intelligence research, computer scientists have developed an algorithm that allows computers to recognize simple visual concepts after just one or two examples, much as human beings can.

The new algorithm mimics human learning abilities, enabling computers to recognize and draw simple visual concepts much as humans do. The ground-breaking research, published in the most recent issue of Science, is a major advance: it dramatically reduces the time it takes for artificial intelligence to understand new concepts and broadens its use in a range of more creative applications.

Statement from lead author

“Our results show that by reverse engineering how people think about a problem, we can develop better algorithms,” notes Brenden Lake, a Moore-Sloan Data Science Fellow at New York University and the lead author of the research. “Moreover, this work points to promising methods to narrow the gap for other machine learning tasks.”

Computers learning like humans is a major step forward for AI

The other authors were Ruslan Salakhutdinov, an assistant professor of Computer Science at the University of Toronto, and Joshua Tenenbaum, a professor at MIT in the Department of Brain and Cognitive Sciences.

Human beings typically need only one or two examples of a rare breed of dog, a new dance move, or even a letter in an unfamiliar alphabet before they have a basic understanding of the concept and can recognize new instances of it. Computers today can replicate some of the pattern-recognition tasks that humans perform, such as ATMs identifying the numbers written on a check, but computer software usually needs hundreds or even thousands of examples to match human accuracy at complex pattern recognition.

The goal of the new research was to significantly shorten the artificial intelligence learning process and bring it closer to how humans acquire and apply new knowledge. That meant making it possible to learn from a small number of examples and then perform related conceptual tasks, such as generating new examples or even creating entirely new, related concepts.

Lake and colleagues applied a ‘Bayesian Program Learning’ (BPL) framework, meaning concepts are represented as basic computer programs. For instance, the letter ‘A’ is represented by computer code that produces examples of the letter when the code is run. However, a programmer is not actively involved in the learning process; the algorithm “programs itself” by constructing the code that produces the letter it sees. Of note, standard computer programs reproduce the exact same output every time they run, but probabilistic Bayesian programs produce different outputs on every execution. This allows them to capture the way instances of a concept may vary, such as the differences between how two people write a particular letter or number, or between two nails of different sizes.
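To make the idea concrete, here is a minimal sketch in Python of a probabilistic program for the letter ‘A’. It is not the authors’ code: the stroke layout and the Gaussian noise model are illustrative assumptions. The point is simply that each run of the same program yields a slightly different rendition, which is what lets a Bayesian program capture natural variation in a concept.

```python
import random

# A toy probabilistic program for the concept "letter A".
# Every call samples new stroke endpoints, so every run differs slightly,
# just as two handwritten A's never look exactly the same.

def sample_letter_A():
    """Return stroke endpoints for one noisy rendition of the letter 'A'.

    The stroke layout and noise scale are illustrative assumptions,
    not the representation used in the Science paper.
    """
    jitter = lambda: random.gauss(0.0, 0.05)        # per-run variability
    apex = (0.5 + jitter(), 1.0 + jitter())          # top of the 'A'
    left_foot = (0.0 + jitter(), 0.0 + jitter())
    right_foot = (1.0 + jitter(), 0.0 + jitter())
    crossbar_y = 0.4 + jitter()
    strokes = [
        (left_foot, apex),                           # left diagonal
        (right_foot, apex),                          # right diagonal
        ((0.2, crossbar_y), (0.8, crossbar_y)),      # crossbar
    ]
    return strokes

# Two calls, two slightly different A's.
print(sample_letter_A())
print(sample_letter_A())
```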

Traditional pattern recognition algorithms represent concepts as configurations of pixels or collections of features, but the Bayesian approach learns “generative models” of processes in the real world, so learning is more like ‘model building’ or ‘explaining’ the data provided to the algorithm. Bayesian probabilistic systems are designed to capture both the causal and compositional properties of real-world processes and objects, so the algorithm can use data more efficiently.
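The contrast with pixel- or feature-matching can be shown with a toy example. The sketch below, again built on invented assumptions (a hypothetical two-concept library and a simple Gaussian stroke-noise model, not the paper’s method), classifies a new drawing by asking which concept’s generative template best “explains” it.

```python
# Classify by explanation: score a drawing under each concept's generative
# template and pick the concept that explains it best (highest likelihood),
# instead of matching raw pixel features.

def stroke_log_likelihood(observed, template, noise=0.05):
    """Log-probability of observed stroke endpoints under a template."""
    logp = 0.0
    for (ox, oy), (tx, ty) in zip(observed, template):
        logp += -((ox - tx) ** 2 + (oy - ty) ** 2) / (2 * noise ** 2)
    return logp

# Hypothetical concept library: canonical stroke endpoints for each concept.
concepts = {
    "A": [(0.0, 0.0), (0.5, 1.0), (1.0, 0.0), (0.5, 0.4)],
    "V": [(0.0, 1.0), (0.5, 0.0), (1.0, 1.0), (0.5, 0.5)],
}

def classify(observed_points):
    # The concept whose generative template best explains the drawing wins.
    return max(concepts, key=lambda c: stroke_log_likelihood(observed_points, concepts[c]))

# A slightly wobbly 'A' is still explained best by the 'A' model.
wobbly_A = [(0.02, -0.03), (0.48, 0.97), (1.05, 0.01), (0.52, 0.41)]
print(classify(wobbly_A))  # -> "A"
```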

Of interest, the new Bayesian AI model also “learns to learn” by using knowledge of previously learned concepts to speed learning of new ones; for example, knowledge of the Latin alphabet helps it learn letters in the Greek or Cyrillic alphabets. The researchers tested the new algorithm on more than 1,600 types of handwritten characters in 50 of the world’s writing systems, including Chinese, Sanskrit, Tibetan, Gujarati, Glagolitic, and even some “made-up” symbols from popular culture.
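The “learning to learn” idea can be sketched in the same hedged spirit. In the toy example below, stroke primitives tallied while learning one alphabet act as a prior that favours familiar primitives when parsing a character from a new alphabet. The primitive names, counts, and candidate parses are invented for illustration and are not drawn from the paper.

```python
from collections import Counter

# Pretend we already parsed many Latin letters into primitive strokes.
latin_strokes = ["vertical", "vertical", "diagonal", "arc", "vertical", "crossbar"]
prior = Counter(latin_strokes)   # how often each primitive appeared
total = sum(prior.values())

def primitive_prior(primitive):
    """Smoothed probability of a stroke primitive, learned from Latin letters."""
    return (prior[primitive] + 1) / (total + len(prior))

# When parsing a new Greek-like character, prefer interpretations built from
# primitives that were common in the alphabets seen before.
candidate_parses = {
    "two verticals + crossbar": ["vertical", "vertical", "crossbar"],
    "three rare squiggles": ["squiggle", "squiggle", "squiggle"],
}

def parse_score(name):
    score = 1.0
    for primitive in candidate_parses[name]:
        score *= primitive_prior(primitive)
    return score

best = max(candidate_parses, key=parse_score)
print(best)  # the parse built from familiar primitives wins
```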

“Before they get to kindergarten, children learn to recognize new concepts from just a single example, and can even imagine new examples they haven’t seen,” explains co-author Tenenbaum. “I’ve wanted to build models of these remarkable abilities since my own doctoral work in the late nineties. We are still far from building machines as smart as a human child, but this is the first time we have had a machine able to learn and use a large class of real-world concepts—even simple visual concepts such as handwritten characters—in ways that are hard to tell apart from humans.”
