Google Photos is getting one more tool to help you get more out of your images: the artificial intelligence-powered Google Lens. According to a tweet from the official Google Photos Twitter account, the new feature began rolling out to every Android phone on Tuesday.
The Google Lens feature was first announced at Google’s I/O developer conference last year, along with Android Oreo and other artificial intelligence updates. The feature makes it easier for users to get information on a range of subjects just by pointing their device’s camera at the object.
Rolling out today, Android users can try Google Lens to do things like create a contact from a business card or get more info about a famous landmark. To start, make sure you have the latest version of the Google Photos app for Android: https://t.co/KCChxQG6Qm
Coming soon to iOS pic.twitter.com/FmX1ipvN62
— Google Photos (@googlephotos) March 5, 2018
It also helps identify and translate text written in languages that users do not speak, and more capabilities will likely emerge as people start using it. Like Alexa, Google Lens uses machine learning and image recognition to identify the object in a photo and surface information about it.
In addition to monuments and buildings, Google Lens can also scan business cards and pull out relevant information. Users can learn more by tapping different elements on the card. For instance, when Lens scans a business card, it recognizes the contact’s email address, phone number and job title, and it can save that information to the phone as a new contact with a single tap, sparing you from typing it all in.
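To make the flow concrete, here is a minimal sketch of the kind of field extraction a card scanner performs after OCR. The sample card text and the regex patterns are illustrative assumptions, not Google's actual implementation, which relies on trained models rather than hand-written rules.

```python
import re

# Hypothetical patterns for two of the fields Lens pulls from a card.
# Real systems use learned models; regexes are just a simple stand-in.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE_RE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def parse_business_card(ocr_text: str) -> dict:
    """Pull an email address and phone number out of OCR'd card text."""
    email = EMAIL_RE.search(ocr_text)
    phone = PHONE_RE.search(ocr_text)
    return {
        "email": email.group(0) if email else None,
        "phone": phone.group(0) if phone else None,
    }

# Example input, as OCR might return it line by line.
card = """Jane Doe
Product Manager
jane.doe@example.com
+1 (555) 010-4477"""

print(parse_business_card(card))
```

The structured fields returned by a step like this are what make the "save as contact" tap possible: the app maps each extracted value onto the corresponding contact field instead of asking the user to retype it.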
With Lens, the search giant aims to offer people a more substantive take on augmented reality than Snapchat’s photo filters or Pokemon Go creatures, which many people consider AR.
“The camera is a medium for understanding what the world is,” Aparna Chennapragada, who heads up Google Lens, told CNET earlier. “It’s an evolution of information, discovery and search.”
Initially, Lens was restricted to Pixel users. That limited its reach: the Pixel did nothing remarkable in terms of sales, shipping only 3.9 million units last year, according to IDC, while Apple sold 77.3 million iPhones in the last quarter alone. Opening Lens to the rest of the Android platform via Google Photos therefore made sense.
Google Lens will now be available on flagship devices from Samsung, Motorola, LG, Huawei, Sony and Nokia. However, to use the feature, the device’s camera must meet certain minimum specifications, which not all phones do.
Google Lens is one of Google’s most ambitious augmented reality projects yet. Rivals are going big on AR as well: Facebook has added camera features that let developers build AR apps and games for its platform, and Apple offers ARKit for creating AR apps on iPhones. Google’s own SDK for building AR Android apps, ARCore, was released only recently.
Google’s augmented reality developer kit can work on 100 million Android smartphones across the globe. At the time of launch a couple of weeks ago, the search engine giant stated that the kit supports 13 models from five OEMs: Pixel, Pixel XL, Pixel 2, and Pixel 2 XL; Galaxy S8, Galaxy S8+, Galaxy Note 8, Galaxy S7, and Galaxy S7 Edge; Asus’ ZenFone AR; OnePlus 5; and LG V30 and V30+. At the time, the search giant was also in talks with other Android OEMs, such as Huawei, Xiaomi, HMD and Motorola, to support the kit.
Google’s ARCore helps developers improve environmental understanding, allowing users to place virtual objects on textured surfaces such as toy boxes, furniture, posters and cans. Google also announced that it will bring ARCore to China by partnering with Huawei, Samsung and Xiaomi to publish AR-based apps on their proprietary app stores.
Beyond Lens, Google is also expanding its use of artificial intelligence into the defense field. The company secured a contract to work on the Defense Department’s new algorithm-based warfare initiative, offering AI assistance for drone targeting. The project was routed through ECS Federal, a northern Virginia technology staffing company, notes Gizmodo.
With the contract, the Pentagon aims to take advantage of state-of-the-art artificial intelligence to improve combat performance, and Google has assembled a cross-team effort to work on the AI drone project. According to The Intercept, the team will work on enhancing deep learning technology to help drone analysts process the vast amounts of image data produced by the military’s fleet of 1,100 drones and better target bombing strikes against the Islamic State.