Google has come up with a new feature for its messaging app Allo based on its machine learning capabilities. On Thursday, the search giant rolled out a new auto-illustration feature, built on neural networks, that gives users complete freedom of expression by turning their faces into a pack of stickers they can paste into any conversation.
Allo uses neural networks to turn selfies into stickers
Google introduced the personalized sticker packs in the most recent update to its Allo messaging app. The feature is similar to one found in Bitmoji, but with one difference. In Bitmoji, you have to manually create your own illustrated face, whereas in Allo, the app does all the work, notes droid-life.
The user simply takes a selfie in the messaging app, and Google’s neural networks do the rest, producing a personal sticker pack in just a few minutes. According to the company, its messaging app Allo will return “an automatically generated illustrated version of you, on the fly, with customization options to help you personalize the stickers even further.”
Allo was announced in May, almost four months before its formal launch. With this messaging app, Google aims to build a smart app that connects users with their friends and assists them in planning events and finding information, notes VentureBeat.
How this feature works
Instead of adopting the common computer-vision approach of replicating a picture by analyzing it pixel by pixel, Google’s new algorithm picks out the features that make an individual distinctive. Google then worked with a team of illustrators to create drawings representing those features, such as hairstyles, eyes, and backgrounds, notes VentureBeat.
To find the closest possible approximation of a user’s looks, the new feature uses many of the same conventions as Google’s Deep Dream technology, combined with a range of artist-created resources, to build a cartoon representation of the user. According to Android Headlines, the pools of cartoon-style resources used to represent a user were drawn through a long process that ends with the artist choosing among a few different pictures of users who share a common feature.
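The approach described above, classifying a selfie into discrete facial attributes and then assembling artist-drawn assets for each attribute, can be sketched in a few lines. This is purely illustrative: the asset names, attribute categories, and function names below are hypothetical stand-ins, not Google’s actual implementation or API.

```python
# Hypothetical sketch of attribute matching: rather than reproducing pixels,
# a classifier predicts discrete facial attributes, and each attribute indexes
# into a pool of artist-drawn assets. All names here are illustrative.

from dataclasses import dataclass

# Artist-created asset pools, keyed by attribute value (illustrative).
ASSET_POOLS = {
    "hair": {"short": "hair_short.svg", "long": "hair_long.svg",
             "curly": "hair_curly.svg"},
    "eyes": {"round": "eyes_round.svg", "narrow": "eyes_narrow.svg"},
    "glasses": {True: "glasses.svg", False: None},
}

@dataclass
class Attributes:
    hair: str
    eyes: str
    glasses: bool

def classify(selfie_pixels) -> Attributes:
    """Stand-in for the neural network: maps a selfie to attributes."""
    # A real model would run inference here; we return a fixed guess.
    return Attributes(hair="curly", eyes="round", glasses=True)

def compose_sticker(attrs: Attributes) -> list:
    """Pick one artist-drawn asset per predicted attribute."""
    parts = [
        ASSET_POOLS["hair"][attrs.hair],
        ASSET_POOLS["eyes"][attrs.eyes],
        ASSET_POOLS["glasses"][attrs.glasses],
    ]
    return [p for p in parts if p is not None]

print(compose_sticker(classify(selfie_pixels=None)))
```

The key design point reported by VentureBeat survives even in this toy version: the selfie is never copied pixel by pixel; it only steers which pre-drawn assets are combined.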
Google has not yet announced when this feature will be available, but it is expected to arrive first on the Android version of Allo and later on the iOS version.
How Google found this feature
While experimenting with a large neural network with no specific purpose in mind, the tech company found that certain nodes in the network were able to notice small details outside of their assigned purview. The tech giant then worked on those nodes to help them see the bigger picture, enabling the network to pick out the details of a given selfie and use them to reach a good conclusion, notes Android Headlines.
Google is known for experimenting with AI, and just last month, the search giant unveiled AutoDraw, an AI experiment that allows anyone to create art from a simple hand-drawn doodle. The search company is also working on research to let computers generate sketches that look similar to those created by humans, notes VentureBeat.