
  • - So we're here at day one of Google I/O

  • checking out new features for Google Lens.

  • It's an AR and AI platform for the company,

  • and it's basically built into Google Assistant,

  • and now it's built right into the smartphone's camera.

  • So Google first introduced Google Lens last year,

  • and basically at the time, it was a way to look

  • through the camera's viewfinder

  • and identify objects in photos.

  • Now Lens is much more sophisticated.

  • It uses all of Google's understanding

  • of natural language processing,

  • object recognition, and image recognition,

  • and combines them into one big platform,

  • so that the smartphone can see and understand

  • the world around it and can parse human language.

  • Prior to today, Google Lens was only available within

  • Google Assistant.

  • Now it works right from the smartphone's camera

  • and it works on other devices.

  • Right here we have an LG G7

  • and we have a whole wall of props behind us

  • that we can use Google Lens to identify

  • and get information from Google Search.

  • There are three ways to access Google Lens.

  • The first is to just open the camera

  • and click the Google Lens button.

  • From there the phone starts looking

  • and trying to identify objects it sees through

  • the viewfinder.

  • The second way to access Google Lens

  • is basically just by touching and holding the home

  • button down here, launching Assistant, and clicking the

  • Lens button.

  • And as you can see right now,

  • Lens already sees and identifies objects,

  • marking them with these little colored dots;

  • that's how it shows what it has recognized.

  • Tapping on one of the dots

  • will pull up Google search results.

  • So you see it understands that this is the album

  • Woman by Justice, and conveniently Justice happens

  • to be the artist performing at Google I/O tomorrow.

  • And the third way to access Google Lens will

  • be a double tap on the camera button,

  • but that only works on the LG G7.

  • If you look at some of the clothing here,

  • whoop, doesn't quite identify the clothing,

  • but it asks if I like the clothing.

  • I guess it's trying to build a preference profile for me.

  • Let's try this one.

  • Whoop, there it goes, it pulled up shopping results

  • from Macy's, from QVC.

  • So it understands what this item of clothing is

  • and it then prompts you to buy it online.

  • Now as you scan Google Lens over other objects,

  • it'll slowly start to recognize everything else

  • that you pan it over.

  • So we have a piece of art right here,

  • that is not correct,

  • hold on.

  • Looking for results.

  • There we go.

  • So it went from the album,

  • but now it knows this is a painting by Pablo Picasso.

  • Right here it sees a photo.

  • And it knows that was a Norwegian Lundehund.

  • I don't think I pronounced that right,

  • but it is a dog breed and Google identified it.

  • So Google Lens isn't just for photos and objects.

  • You can do a lot with text now,

  • that includes text on a book's jacket,

  • it includes text on menus at restaurants.

  • You can point the camera at a whole list of food items

  • and you can pull up images of those food items.

  • You can pull up YouTube videos of how to make them.

  • You can even translate those food items, if they're

  • in another language, into English or into Spanish

  • or into any other language

  • that Google Translate supports.

  • Now if you're looking at a book,

  • for instance, like the book Swing Time by Zadie Smith,

  • you can look at huge passages of text,

  • you can even grab that text using Google Lens

  • and you can pull it out as if you had just copied

  • and pasted it from a document.

  • From there you can translate that text into another language

  • you can even then do Google searches on it.

  • Google Lens essentially takes text

  • from anywhere out in the world,

  • street signs, restaurant menus, even books

  • and it makes that text searchable.

  • Now the underlying technology behind Google Lens

  • isn't just for looking through a smartphone

  • viewfinder at products or

  • trying to translate text.

  • What powers Google Lens is

  • a lot of the foundational AI work that lets Google

  • do AR experiences.

  • So for instance, because Google's software

  • and the phones that power that software

  • can understand and see the world,

  • you can create whole virtual 3D images.

  • For instance, you can have paintings come to life

  • right out in front of you and you can walk around,

  • you can even see the reflections of objects behind you

  • in those 3D images,

  • if developers design them in the right way

  • and know what environment you're standing in.

  • That's pretty wild.

  • You can also point your camera lens at a podium

  • and have an entire 3D image come to life in front of you,

  • grow up into the sky

  • and encompass the entire vertical area around you.

  • Now these Google Lens features

  • are all coming later this month

  • and as Google said on stage at the I/O keynote,

  • they're coming to more than just Pixel devices

  • and within the Assistant.

  • You'll also be able to access them on iOS

  • from within the Assistant itself.

  • But you have to use the Assistant, you won't be able

  • to access it from the iPhone's camera, of course.

  • For all the news and announcements from Google I/O 2018,

  • check out TheVerge.com and subscribe to us on YouTube

  • at youtube.com/theverge.


Google Lens new features hands-on

Posted by Samuel on 2018/05/10