AI-ception: Google’s AI Is Learning How To Make AI Software


Google is already known as a pioneer in artificial intelligence, introducing new machine learning techniques to push the field forward. It's no secret that AI has become so advanced that many people worry about losing their jobs to it. Now, researchers have found a way to make AI software using AI: Google's machine-learning algorithms are learning how to design machine-learning software. It sounds like a scenario for an Inception sequel.

Researchers at the Google Brain artificial intelligence research group conducted an experiment in which they designed machine-learning software that, in turn, designed machine-learning systems for a benchmark language-processing task. The designs it produced surpassed published results from systems designed by humans.
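To make the idea a little more concrete, here is a minimal, purely illustrative sketch of an outer program searching for a good design for an inner learning program. Google Brain's actual system is far more sophisticated (it reportedly uses a neural network trained with reinforcement learning to propose designs); the sketch below swaps that in for simple random search, and every name, task, and number in it is an invented assumption, not anything from the experiment itself.

```python
# Hypothetical sketch of "software that designs learning software":
# an outer loop proposes candidate model configurations, trains each one
# briefly on a toy task, and keeps the best-scoring design.
# Random search stands in here for the real system's learned controller.
import random

def train_and_score(config, data):
    """Train a tiny linear classifier with the proposed config; return accuracy."""
    lr, steps = config["learning_rate"], config["steps"]
    w, b = 0.0, 0.0
    for _ in range(steps):
        for x, y in data:
            pred = 1.0 if w * x + b > 0 else 0.0
            err = y - pred              # perceptron-style update
            w += lr * err * x
            b += lr * err
    correct = sum(1 for x, y in data if (1.0 if w * x + b > 0 else 0.0) == y)
    return correct / len(data)

def propose_config():
    """The 'architect': samples a candidate design from a small search space."""
    return {
        "learning_rate": random.choice([0.001, 0.01, 0.1, 1.0]),
        "steps": random.choice([5, 20, 50]),
    }

# Toy benchmark: classify whether a number is positive.
data = [(x, 1.0 if x > 0 else 0.0) for x in [-3, -2, -1, 1, 2, 3]]

best_config, best_score = None, -1.0
for _ in range(20):                          # outer loop: software designing software
    config = propose_config()
    score = train_and_score(config, data)    # inner loop: train the proposed design
    if score > best_score:
        best_config, best_score = config, score

print("best design:", best_config, "accuracy:", best_score)
```

The design choice the sketch is meant to highlight is the split between an inner training loop and an outer design loop; in the real research, the outer loop is itself a learning system rather than blind search.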

In recent months, several other groups have also reported progress on getting learning software to create learning software, including researchers at the nonprofit research institute OpenAI, co-founded by Elon Musk, as well as at MIT, the University of California, Berkeley, and Google's other research group, DeepMind.

If the industry adopts this technique of AI making AI software, it could make machine learning far cheaper to deploy, because companies currently pay a lot of money for skilled machine-learning experts, who are in high demand.

“Currently the way you solve problems is you have expertise and data and computation,” said Jeff Dean, who leads the Google Brain research group, at the AI Frontiers conference in Santa Clara, California, as quoted by MIT Technology Review. “Can we eliminate the need for a lot of machine-learning expertise?”

The researchers also had the software create learning systems for multiple different but related problems. It returned designs that could generalize to new tasks with less training than would usually be required.
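The claim about generalizing to related tasks with less training can be illustrated, again only as a rough, hypothetical analogy rather than Google Brain's actual method, by warm-starting: reusing the solution found on one toy task as the starting point for a related one, so fewer training passes are needed than when starting from scratch. The two tasks, the thresholds, and the `train` helper below are all invented for this sketch.

```python
# Hypothetical sketch of transfer to a related task: a design (here, just the
# weights of a tiny linear model) learned on task A is reused as the starting
# point for a related task B, so fewer training passes can be needed than
# when starting from zero. Not Google Brain's setup; purely illustrative.

def train(data, w=0.0, b=0.0, lr=0.1, steps=50):
    """Perceptron-style training; returns weights and the number of passes used."""
    for step in range(steps):
        updated = False
        for x, y in data:
            pred = 1.0 if w * x + b > 0 else 0.0
            if pred != y:
                w += lr * (y - pred) * x
                b += lr * (y - pred)
                updated = True
        if not updated:                 # converged: a full clean pass over the data
            return w, b, step + 1
    return w, b, steps

# Task A: is the number greater than 0?  Task B (related): greater than -1?
task_a = [(x, 1.0 if x > 0 else 0.0) for x in range(-5, 6)]
task_b = [(x, 1.0 if x > -1 else 0.0) for x in range(-5, 6)]

w_a, b_a, _ = train(task_a)                   # learn task A from scratch
_, _, cold = train(task_b)                    # task B from scratch
_, _, warm = train(task_b, w=w_a, b=b_a)      # task B warm-started from task A
print(f"passes from scratch: {cold}, passes reusing task A's solution: {warm}")
```

On this toy pair of tasks the warm-started run converges in fewer passes than the cold start, which is the flavor of the "less training on related tasks" result, even if the real systems involved are vastly larger.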

The idea of programming software that can “learn to learn” has been around for a while, but previous experiments did not yield results this good.

“It’s exciting,” said Yoshua Bengio, a professor at the University of Montreal who explored the idea in the 1990s. He added that the more powerful computers available today, combined with a machine-learning technique called deep learning, are what make it work now. The approach still requires an enormous amount of computing power, however.

Otkrist Gupta, a researcher at the MIT Media Lab, thinks those requirements will come down. He and his MIT colleagues plan to open-source the software behind their own experiments, in which learning software designed deep-learning systems for object-recognition tasks that matched those crafted by humans.

“Easing the burden on the data scientist is a big payoff,” he says. “It could make you more productive, make you better models, and make you free to explore higher-level ideas.”
