Google Lens is here and it promises to do no less than change the way you find information about the world around you.
Announced in May by the world’s biggest search engine, Lens lets you find details about things in the real world by pointing your phone’s camera at them. Take a picture of a book, and information about how to buy it, who published it and what reviewers think pops up.
There’s one hitch: The software only works — at least for now — on Google’s new Pixel 2 phones, which hit store shelves on Thursday. Google is calling it a “preview” and says Lens will come to other Android phones someday, though it’s not saying exactly when. It will also come — someday — to Apple’s iPhone, Aparna Chennapragada, who heads up Google Lens, said in an interview last week.
This isn’t the first time Google has tried to develop a camera-based search product. Google Goggles, a visual search app for Android, launched in 2009 but hasn’t been updated since 2014. Google Glass, introduced in 2012, was a $1,500 headset that layered information and graphics over the wearer’s view, but it failed spectacularly before leaving the prototype stage amid concerns about privacy and its price.
But Google Lens feels like the search giant’s first true entry into augmented reality, layering digital information over the real world. Lots of things have changed since those previous, ill-fated attempts: Voice and image recognition have gotten a lot better, and smartphones can handle more intense computing demands.
“We thought, can you have the camera be the browser for the world around you?” Chennapragada says.
And Google’s rivals are in the fray too. Last week, Snapchat, which pioneered the way teens and young adults use AR today, unveiled Context Cards, a sort of visual search that links people to restaurant reviews or Lyft rides based on images in their snaps. Apple has an AR platform called ARKit that lets software makers build apps for iPhones. And Facebook has a similar platform, called Camera Effects, for developers creating experiences for its social network.
But while the rest of the tech giants are trying out similar projects, Google hopes its 19-year history as a search company will give it a leg up. “Google arguably has the best track record to have something to build on,” says Jan Dawson, principal analyst at Jackdaw Research. “It looks very compelling.”
When Google first showed off Lens, one of the big crowd-pleasers was a video of someone taking a picture of a Wi-Fi password on a router, with Lens automatically connecting the phone to the network. Chennapragada says that while her team had big ambitions around image recognition and visual search, many users were most excited about having an easy way to cut and paste anything seen through the phone’s camera. Another Lens feature lets you take a photo of a business card and extract the name, email address and phone number from it. “It’s the first time you can actually bridge the real world to your phone in a really interesting way,” she added.
Lens isn’t an app or product on its own but a feature that’s going to be built into several Google services. The first app to get Lens is Google Photos. Here’s how it works: First, take a photo. When you view the picture, you’ll see the Lens icon at the bottom of the screen (it looks kind of like the Instagram logo, but in black and white).
It works well with books. I took a photo of “The Facebook Effect” by David Kirkpatrick, and Lens showed me author and review information.
If you take a picture of something with little or no text on it, the software will do its best to figure out what’s in the picture. I took a photo of the unfinished Salesforce Tower outside my office window in San Francisco (admittedly, probably an unfair test, since the tower is still under construction). Lens couldn’t identify it specifically and instead pulled up pictures of other metallic skyscrapers.
In another test, I took a photo of the Bart Simpson figurine on my desk. Lens identified the object as “figurine” and surfaced pictures of other toy figures with similar colors, including two elves and Foghorn Leghorn from Looney Tunes. But the list did include one photo of a Bart Simpson figurine.
The software still needs to improve, but its attempts were valiant.
And even though it’s still early, it’s important for Google to get it right. “People get jaded,” Dawson says. “They try something that doesn’t work, and trying to convince them to come back is difficult.”
Google knows how crucial that is. Chennapragada said Google is taking its time with the rollout — both in the types of devices Lens will run on and the number of Google services it will be built into. “The rollout is in proportion to the capabilities,” she says.
The second Google product to get Lens is Assistant, the search giant’s digital helper, akin to Amazon’s Alexa and Apple’s Siri. That update is happening “in coming weeks,” and Google demoed it for me last week. The biggest difference: Instead of taking a picture and filling up your camera roll, you’ll be able to point your camera at an object and run a visual search in real time, then ask follow-up questions.
So what’s next? Chennapragada won’t say but hints that Google Maps could be a good contender. For example, you could point your camera at a storefront and see ratings and menu information. (Google already teased this in a video in May but didn’t give live demos.)
Google also eventually wants to make Lens a tool for discovering new content, instead of just figuring out what’s in front of you. “It’s not just to say, ‘What is this?’ But, ‘Give me ideas related to this. What else can I do? Give me ideas and inspiration,'” Chennapragada says.
Of course, while all of this is centered on your phone — for now — the real promise of AR is when it might come to smart glasses. Facebook says it’s working on a pair, though it’ll take years. Google has a project called Aura, sort of a reincarnation of Google Glass.
When I ask about AR glasses, Chennapragada downplays the question. She says there’s still a lot that needs to happen to realistically enable that type of form factor — techspeak for the physical design of a device.
“Being able to easily overlay reality in a seamless, frictionless way — augmenting using voice and vision — is a very key component of anything in that form factor, but there are other challenges in terms of that falling into place,” Chennapragada says.
Still, she acknowledges where Google Lens and its technology could eventually go.
“But anything we do here will be a building block,” she says. “These will be the building blocks for anything we do in terms of future form factors as well.”