Facebook Claims “Near-Human” Understanding With DeepText


Deep learning and machine learning are here to stay until the machines take over and we’re left to serve our robot overlords. No one reading this will be around for that day, but even in its infancy deep learning has advanced by leaps and bounds, letting machines learn from us and from themselves. Speech recognition is nearly a solved problem; now it’s time for context, and Facebook announced big progress on that front today.


Context is key with DeepText

If you think about it, voice recognition wasn’t very good just three years ago. That has largely been fixed, and it’s time for Facebook, Microsoft, Google, Amazon, and others to really concentrate on context. That’s not to say these companies haven’t already made big strides, as the improvements Google and Apple show off in Google Now and Siri with each update to their mobile operating systems make clear.

Amazon’s creation of the Alexa voice platform to power its standalone Echo speaker and other devices is a testament to the company’s own work in the arena. So much so that Invoxia became the first company to ship a third-party product powered by Alexa: a “21st century kitchen magnet” that doubles as a speaker and relies on a user’s voice to offer much more than simple music.

Google has announced that it will soon launch its own speaker, called “Home,” and it’s widely anticipated that Apple will unveil a large-scale expansion of Siri at WWDC in San Francisco next week, including bringing Siri to Mac desktops and laptops (if Apple retires the OS X name as anticipated). Each of these examples shows that these companies are making genuine progress in getting machines to grasp what you mean, not simply the words coming out of your mouth.

Enter Facebook with DeepText

Facebook said yesterday that DeepText can plow through text in more than 20 languages, processing the content of several thousand posts per second with “near-human” accuracy.

“Traditional techniques require extensive preprocessing logic built on intricate engineering and language knowledge. There are also variations within each language, as people use slang and different spellings to communicate the same idea,” Facebook wrote in its blog post. “Using deep learning, we can reduce the reliance on language-dependent knowledge, as the system can learn from text with no or little preprocessing. This helps us span multiple languages quickly, with minimal engineering effort.”
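To make that idea concrete, here is a minimal sketch, assuming PyTorch, of a character-level classifier that reads raw bytes. This is a toy illustration, not Facebook’s actual DeepText code; the point is that the model needs no tokenizers, stemmers, or other language-specific preprocessing, so the same architecture spans languages:

```python
# Toy character-level text classifier: the model sees raw UTF-8 bytes,
# so no language-dependent preprocessing is required.
# Hypothetical illustration only -- not Facebook's DeepText code.
import torch
import torch.nn as nn

class CharClassifier(nn.Module):
    def __init__(self, num_classes, vocab_size=256, embed_dim=16):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)   # one vector per byte value
        self.conv = nn.Conv1d(embed_dim, 64, kernel_size=5, padding=2)
        self.fc = nn.Linear(64, num_classes)

    def forward(self, byte_ids):                   # byte_ids: (batch, seq_len)
        x = self.embed(byte_ids).transpose(1, 2)   # -> (batch, embed_dim, seq_len)
        x = torch.relu(self.conv(x)).max(dim=2).values  # max-pool over time
        return self.fc(x)                          # -> (batch, num_classes)

def encode(text, max_len=128):
    ids = list(text.encode("utf-8"))[:max_len]
    ids += [0] * (max_len - len(ids))              # pad with zero bytes
    return torch.tensor([ids])

model = CharClassifier(num_classes=2)
logits = model(encode("vendo mi bici vieja por $200"))  # any language, same pipeline
print(logits.shape)  # torch.Size([1, 2])
```

Because the model never sees word boundaries, slang and variant spellings are handled by the learned representation rather than hand-built rules; only the training data changes per task.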

There is still a lot of nuance and context in both speech and the written word, and each company has many hurdles to clear before our devices can truly understand us. With speech, for example, I’m likely still quite a way from speaking to my Amazon Echo with a mouth full of food while I’m angry at my girlfriend and having it understand what I’m on about.

Facebook and others are essentially forced to understand not just context but also what to do about it, and what I actually want them to do to help me.

“For example, someone could write a post that says, ‘I would like to sell my old bike for $200, anyone interested?’ DeepText would be able to detect that the post is about selling something, extract the meaningful information such as the object being sold and its price, and prompt the seller to use existing tools that make these transactions easier,” Facebook added in a statement.
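As a rough illustration of the flow that example describes, the sketch below classifies intent, extracts the item and price, and prompts the seller. This is hypothetical: the intent classifier here is a regex stand-in for a learned model like DeepText, whose internals are not public.

```python
# Hypothetical sketch of the sell-intent flow Facebook describes:
# classify intent, extract entities, then surface a selling tool.
import re

def classify_intent(post: str) -> str:
    # Stand-in for a learned classifier such as DeepText.
    return "sell" if re.search(r"\bsell\b", post, re.I) else "other"

def extract_listing(post: str):
    price = re.search(r"\$\s?(\d+(?:\.\d{2})?)", post)
    item = re.search(r"sell my (?:old |used )?([\w\s]+?) for", post, re.I)
    return (item.group(1) if item else None,
            float(price.group(1)) if price else None)

post = "I would like to sell my old bike for $200, anyone interested?"
if classify_intent(post) == "sell":
    item, price = extract_listing(post)
    print(f"Prompt seller: list '{item}' at ${price:.0f}?")
    # -> Prompt seller: list 'bike' at $200?
```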

Deep learning is certainly not just for speech and text

Microsoft launched a project with Delft University, a museum, and private-sector partners in which deep learning algorithms, paired with an advanced 3D printer, produced a portrait that looks as though it had been painted by Rembrandt himself. Just this week, Google Brain announced “Project Magenta,” which aims to see whether machines can produce original artwork and music on their own. You wouldn’t think music would be that big an ask, given that the music industry has been using its own algorithms, in the form of focus groups, for over a decade to produce some truly awful noise.

“It’s not a true reconstruction of what neurons do. But it’s an abstract notion of how we believe neurons work in the brain,” Jeff Dean, a computer scientist who heads the Google Brain project, recently told Wired magazine. “If you have lots and lots of these neurons and they’re all trained to pick up on different types of patterns, and there are other neurons that pick up on patterns that those neurons themselves have built on, you can build very complicated functions and systems that do pretty interesting things.”
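A minimal sketch of the layering Dean describes, again assuming PyTorch (the layer sizes here are arbitrary): each layer’s artificial neurons fire on patterns in the outputs of the layer below, so stacking layers composes simple detectors into “patterns of patterns.”

```python
import torch
import torch.nn as nn

# Each layer's "neurons" pick up on patterns in the layer below it,
# so stacking layers builds complicated functions from simple ones.
stacked = nn.Sequential(
    nn.Linear(784, 128), nn.ReLU(),  # layer 1: patterns in the raw input
    nn.Linear(128, 64), nn.ReLU(),   # layer 2: patterns of layer-1 patterns
    nn.Linear(64, 10),               # output: one score per class
)
print(stacked(torch.randn(1, 784)).shape)  # torch.Size([1, 10])
```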

By comparison, Facebook may have it easy with DeepText, since text skips the speech-recognition step entirely, and it sounds as if the company has already made some big strides that will only be improved on over time.
