Zuckerberg Riffs On AI At Tuesday Facebook Townhall Q&A

Facebook founder and CEO Mark Zuckerberg has been criticized for being too standoffish, so over the last year or two he has been making a big effort to change that perception. One very well-received part of his “Zuckerberg is a nice guy” campaign has been the regular Townhall Q&A sessions he holds via online chat. Participants are welcome to ask him anything from questions about the company and its plans to requests for advice about exercise, and he does his best to answer.

Facebook’s Zuckerberg on AI

In a Townhall Q&A session held on Tuesday of this week, the Facebook founder and CEO riffed on artificial intelligence in general before focusing on why the social media company is developing AI tools for facial and voice recognition.

“We’re working on AI because we think more intelligent services will be much more useful for you to use,” Zuckerberg commented.

He also pointed out that this kind of AI could notify you when someone posts a photo you appear in, or let people quickly find images and posts related to a search topic. “Similarly, if we could build computers that could understand what’s in an image and could tell a blind person who otherwise couldn’t see that image, that would be pretty amazing as well. This is all within our reach and I hope we can deliver it in the next 10 years.”
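To give a concrete sense of the image-description capability Zuckerberg alludes to, here is a minimal sketch built on a publicly available image-captioning model from the Hugging Face transformers library. It is purely illustrative and is not Facebook's own system; the model name and file path are assumptions chosen for the example.

```python
# Minimal sketch of automatic image description, in the spirit of the
# accessibility use case described above. Uses an off-the-shelf captioning
# model via Hugging Face transformers; NOT Facebook's own technology.
from transformers import pipeline

# "Salesforce/blip-image-captioning-base" is a publicly available
# image-to-text model, chosen here purely for illustration.
captioner = pipeline("image-to-text", model="Salesforce/blip-image-captioning-base")

def describe_image(path_or_url: str) -> str:
    """Return a short natural-language description of an image,
    suitable for reading aloud to a user who cannot see it."""
    results = captioner(path_or_url)
    return results[0]["generated_text"]

if __name__ == "__main__":
    # Hypothetical local file, used only to show the call pattern.
    print(describe_image("photo_of_friends_at_beach.jpg"))
```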

Analysts point out that Facebook has a number of AI-related projects designed to improve the services the firm offers. It operates three AI labs, in New York, Silicon Valley and Paris, that research how to apply deep learning to voice translation, image recognition and more. Of note, Zuckerberg’s company acquired voice-recognition AI startup Wit.ai in the first quarter of this year.

Zuckerberg also offered some insight into the social media giant’s future plans for AI:

“In order to do this really well, our goal is to build AI systems that are better than humans at our primary senses: vision, listening, etc. For vision, we’re building systems that can recognize everything that’s in an image or a video. This includes people, objects, scenes, etc. These systems need to understand the context of the images and videos as well as whatever is in them. For listening and language, we’re focusing on translating speech to text, text between any languages, and also being able to answer any natural language question you ask.”
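The “listening and language” goals in that quote, speech-to-text followed by translation, can be illustrated with a short sketch using publicly available models through Hugging Face transformers pipelines. Again, this is an assumption-laden illustration of the general technique, not Facebook's implementation; the model names and audio file are placeholders.

```python
# Minimal sketch of speech-to-text plus translation, illustrating the
# "listening and language" capabilities mentioned above. Uses publicly
# available models via Hugging Face transformers; NOT Facebook's own system.
from transformers import pipeline

# Publicly available models, chosen purely for illustration.
transcriber = pipeline("automatic-speech-recognition", model="openai/whisper-tiny")
translator = pipeline("translation_en_to_fr", model="Helsinki-NLP/opus-mt-en-fr")

def transcribe_and_translate(audio_path: str) -> dict:
    """Transcribe an English audio clip, then translate the text to French."""
    text = transcriber(audio_path)["text"]
    translated = translator(text)[0]["translation_text"]
    return {"transcript": text, "translation": translated}

if __name__ == "__main__":
    # Hypothetical audio file path, used only to show the call pattern.
    print(transcribe_and_translate("status_update.wav"))
```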
