Google Images is getting a major update thanks to artificial intelligence.
On Thursday, Google announced that it is bringing its A.I.-powered Lens technology to Google Image Search. The feature lets users search for specific objects within an image, such as a flower bouquet in a wedding photo. Users simply trace a line around the object to search for similar images, which may link out to external sites such as interior design blogs and floral shops. The feature is currently available only on smartphones.
It is part of a broader Google initiative to give users more ways to search for information without typing words into Google's main search engine, according to Cathy Edwards, head of engineering for Google Images.
The new-and-improved Google Images is powered by Lens technology. First unveiled in 2017, Lens is built on deep learning that trains computers to identify objects in images — for example, telling users the breed of their dog by analyzing their pet photos.
According to Edwards, Google is integrating Lens to make Google Images easier to use. A small lens-shaped icon will now appear beneath pictures that people select in Google Images. Tapping the icon makes small dots appear on the individual objects the technology has identified in the picture, and selecting one of those dots searches the web for images of similar objects.
People can also use their fingers to trace objects within photos for the technology to identify and search across the web. The new AI integration is Google's attempt to compete with social photo rivals such as Instagram and Pinterest, which have built ways for users to do more through their services, such as jumping to retail websites that may sell the same sneakers shown in an image.
Below is a demo of how the Lens technology works, courtesy of TechCrunch:
According to Edwards, some people use Google Images for DIY home project ideas, while others look up images of car engines to help with auto maintenance.
“We have an enormous number of queries for people who want to fix their cars and they want to identify car parts,” she said.
While the feature should initially be able to identify basic objects in images, over time it should recognize more fine-grained subjects, such as particular kinds of flowers and notable landmarks, she said. The feature won't work on objectionable material such as pornography, she clarified.
But adding more features to Google Images also opens the door to potential problems with the company's AI technology. In 2015, for example, Google Photos came under fire for labeling African-Americans as gorillas, highlighting both the limits of AI's understanding of what is in a picture and the unintended bias that can lurk in training data.
Edwards said that Google has been testing the new feature for bias and is using AI technology to help mitigate such problems. The product will launch first in the U.S. and then roll out abroad, allowing the company to catch any remaining glitches.