
  • Thank you so much for coming to our session this morning.

  • I'm Sarah Sirajuddin.

  • I'm on the TensorFlow Lite team, and we work on bringing machine learning to mobile and small devices.

  • And later on I will introduce my colleague Andrew Selle, who will be doing the second half of this talk.

  • So the last couple of days have been really fun for me.

  • I've gotten to meet and speak with many of you, and it's been really nice to see the excitement around TensorFlow Lite.

  • And today I'm happy to be here and talk to you about all the work that our team is doing to make machine learning on small devices possible and easy.

  • So in today's talk, we'll cover three areas.

  • First, we'll talk about why machine learning directly on device is important and how it's different than what you may do on the server.

  • Second, we'll walk you through what we have built with TensorFlow Lite, and lastly we'll show you how you can use TensorFlow Lite in your own apps.

  • So first, let's talk about devices for a bit.

  • What do we mean when we say a device? Well, usually a mobile device, basically our phones. So our phones are with us all the time.

  • We interact with them so many times during the day, and more than that, phones come with a large number of sensors on them, which give us really rich data about the physical world around us.

  • Another category of devices is what we call edge devices, and this industry has seen a huge explosion in the last few years.

  • Some examples are smart speakers, smartwatches, and smart cameras.

  • And as this market has grown, we see that technology, which only used to be available on more expensive devices, is now available on far cheaper ones.

  • So now we're seeing that there is this massive growth in devices, and they're becoming increasingly capable, both mobile and edge.

  • And this is opening up many opportunities for novel applications for machine learning.

  • So I expect that many of you are already familiar with the basic idea of machine learning.

  • But for those that aren't, I'm going to really quickly cover the core concept.

  • So let's start with an example of something that we may want to do.

  • Let's say classification of images.

  • So how do we do this?

  • So in the past, what we would have done was to write a lot of hard-coded rules, very specific about particular characteristics that we expected to see in parts of the image.

  • This was time consuming, hard to do and frankly didn't work all that well.

  • And this is where machine learning comes in: with machine learning, we learn based on examples.

  • So a simple way to think about machine learning is that we use algorithms to learn from data, and then we make predictions about similar data that has not been seen before.

  • So it's a two-step process: first the model learns, and then we use it to make predictions.

  • The process of the model learning is what we typically call training, and when the model is making predictions about data, that is what we call inference.

  • This is a high-level view of what's happening during training.

  • The model is passed in labeled data, that is, input data along with the associated prediction. Since in this case we know what the right answer is, we're able to calculate the error: how many times the model gets it wrong, and by how much. We use these errors to improve the model, and this process is repeated many, many times until we reach the point where we think that the model is good enough, or that this is the best that we can do.

  • This involves a lot of steps and coordination, and that is why we need a framework to make this easier.

  • And this is where TensorFlow comes in.

  • It's Google's framework for machine learning.

  • It makes it easy to train and build neural networks, and it is cross platform.

  • It works on CPUs, GPUs, and TPUs, as well as mobile and embedded platforms, and the mobile and embedded piece of TensorFlow, which we call TensorFlow Lite, is what we're going to be focusing on in our talk today.

  • So now we want to talk about why you would consider doing machine learning directly on device.

  • And there are several reasons that you may consider it.

  • But probably the most important one is latency: if the processing is happening on the device, then you're not sending data back and forth to the server.

  • So if your use case involves real-time processing of data, such as audio or video, then it's quite likely that you would consider doing this.

  • Other reasons are that your processing can happen even when your device is not connected to the Internet, and the data stays on device.

  • This is really useful if you're working with sensitive user data, which you don't want to put on servers.

  • It's more power efficient, because your device is not spending power transmitting data back and forth.

  • And lastly, we're in a position to take advantage of all the sensor data that's already available and accessible on the device.

  • So this is all great.

  • But there's a catch, like there always is. And the catch is that doing on-device ML is hard.

  • Many of these devices have some pretty tight constraints.

  • They have small batteries, tight memory, and very little computation power. TensorFlow was built for processing on the server, and it wasn't a great fit for these use cases.

  • And that is the reason that we built TensorFlow Lite.

  • It's a lightweight machine learning library for mobile and embedded platforms. So this is a high-level overview of the system.

  • It consists of a converter, where we convert models from TensorFlow format to TensorFlow Lite format; for efficiency reasons, we use a format which is different. It also consists of an interpreter, which runs on device.

  • There is a library of ops and kernels, and then we have APIs which allow us to take advantage of hardware acceleration whenever it is available.

  • TensorFlow Lite is cross-platform, so it works on Android, iOS, and Linux. A high-level developer workflow here would be to take a trained TensorFlow model, convert it to TensorFlow Lite format, and then update your apps to use the TensorFlow Lite interpreter, using the appropriate API.

  • On iOS, developers also have the option of using Core ML instead.

  • And what they would do here is to take their trained TensorFlow model, convert it to Core ML using the TensorFlow-to-Core-ML converter, and then use the converted model with the Core ML runtime.

  • So the two common questions that we get when we talk to developers about TensorFlow Lite are: is it small?

  • And is it fast?

  • So let's talk about the first question.

  • One of our fundamental design goals for TensorFlow Lite was to keep the memory and binary size small, and I'm happy to say that the size of our core interpreter is only 75 kilobytes.

  • And when you include all the supported ops, the size is 400 kilobytes.

  • So how did we do this?

  • So, first of all, we've been really careful about which dependencies we include.

  • Secondly, TensorFlow Lite uses FlatBuffers, which are far more memory efficient than protocol buffers.

  • One other feature that I want to call out here in TensorFlow Lite is what we call selective registration, which allows developers to include only the ops that their model needs, so they can keep the footprint small.

  • Now moving on to the second question, which is speed.

  • So we made several design choices throughout the system to enable fast startup, low latency, and high throughput.

  • So let's start with the model file format.

  • TensorFlow Lite uses FlatBuffers, like I said, and FlatBuffers is a cross-platform, efficient serialization library.

  • It was originally created at Google for game development and is now being used for other performance sensitive applications.

  • The advantage of using FlatBuffers is that we can directly access the data without parsing or unpacking the large files which contain the weights.

  • Another thing that we do at the time of conversion is that we pre-fuse the activations and biases, and this leads to faster execution later.

  • At runtime, the TensorFlow Lite interpreter uses static memory allocation and a static execution plan.

  • This leads to faster load times. Many of the kernels that TensorFlow Lite comes with have been specially optimized to run fast on ARM CPUs.

  • Now let's talk about hardware acceleration.

  • As machine learning has grown in prominence, it has spurred quite a bit of innovation at the silicon layer.

  • And many hardware companies are investing in building custom chips, which can accelerate neural network processing.

  • GPUs and DSPs, which have been around for some time, are also now being increasingly used to do machine learning tasks.

  • TensorFlow Lite was designed to take advantage of hardware acceleration, whether it is through GPUs, DSPs, or custom AI chips.

  • On Android, the recently released Android Neural Networks API (NNAPI) is an abstraction layer which makes it easy for TensorFlow Lite to take advantage of the underlying acceleration.

  • The way this works is that hardware vendors write specialized drivers or custom acceleration code for their hardware platforms and integrate them with the Android NNAPI.

  • TensorFlow Lite, in turn, integrates with the Android NNAPI via its internal delegation API.

  • A point to note here is that developers only need to integrate their apps with TensorFlow Lite.

  • TensorFlow Lite will take care of abstracting away the details of hardware acceleration from them.

  • In addition to the Android NNAPI, we're also working on building direct GPU acceleration into TensorFlow Lite. GPUs are widely available and in use.

  • And, like I said before, they're now being increasingly used for doing machine learning tasks. Similar to NNAPI, developers only need to integrate with TensorFlow Lite if they want to take advantage of the GPU acceleration.

  • So the last bit on performance that I want to talk about is quantization.

  • And this is a good example of an optimization which cuts across several components in our system.

  • First of all, what is quantization?

  • A simple way to think about it is that it refers to techniques to store numbers, and to perform calculations on numbers, in formats that are more compact than 32-bit floating point representations. And why is this important?

  • Well, for two reasons.

  • First, model size is a concern for small devices, so the smaller the model, the better it is.

  • Secondly, there are many processors which have specialized SIMD instruction sets, which process fixed-point numbers much faster than they process floating-point numbers.
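
To make the idea concrete, here is a toy sketch of one common affine quantization scheme; the values, scale, and zero-point arithmetic are illustrative, not the exact scheme TensorFlow Lite uses internally:

```python
import numpy as np

# Toy affine quantization: map float32 values onto int8 with a scale and a
# zero point, then reconstruct them approximately.
w = np.array([-1.2, 0.0, 0.7, 2.5], dtype=np.float32)

scale = float(w.max() - w.min()) / 255.0                # spread the range over 256 levels
zero_point = int(-128 - round(float(w.min()) / scale))  # int8 code that represents 0.0
q = np.clip(np.round(w / scale) + zero_point, -128, 127).astype(np.int8)

w_restored = scale * (q.astype(np.float32) - zero_point)
print(q)           # compact int8 codes
print(w_restored)  # close to the original floats; the gap is the quantization "noise"
```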

  • So the next question here is: how much accuracy do we lose if we're using eight bits or 16 bits instead of the 32 bits which are used for representing floating-point numbers?

  • Well, the answer, obviously, depends on which model we're using.

  • But in general, the learning process is robust to noise, and quantization can be thought of as a form of noise.

  • So what we find is that the accuracies usually tend to be within an acceptable threshold.

  • A simple way of doing quantization is to shrink the weights and biases after training, and we are shortly going to be releasing a tool which developers can use to shrink the size of their models.

  • In addition to that, we have been actively working on doing quantization at training time. This is an active area of ongoing research, and what we find here is that we are able to get accuracies which are comparable to the floating-point models for architectures like MobileNet as well as Inception.

  • And we recently released a tool which allows developers to use this, and we're working on adding support for more models.
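
As a rough sketch of what post-training weight shrinking looks like through the converter (the converter's module path has moved across TensorFlow releases, and the model path here is hypothetical):

```python
import tensorflow as tf

# Convert a trained SavedModel with post-training quantization enabled.
converter = tf.lite.TFLiteConverter.from_saved_model("my_saved_model")  # hypothetical path
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # shrink weights to a compact format
quantized_model = converter.convert()

with open("model_quantized.tflite", "wb") as f:
    f.write(quantized_model)
```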

  • Okay, so I talked about a bunch of performance optimizations.

  • Now let's talk about what they translate to in terms of numbers.

  • So we benchmarked two models, MobileNet and Inception V3, on the Pixel 2.

  • And as you can see here, we're getting speedups of more than three times when we compare quantized models running on TensorFlow Lite versus floating-point models running on TensorFlow.

  • I'll point out here that these numbers do not include any hardware acceleration.

  • We've done some initial benchmarking with hardware acceleration, and we see additional speedups of 3 to 4 times with that, which is really promising and exciting.

  • So stay tuned in the next few months to hear more on that.

  • Now that I've talked about the design of TensorFlow Lite and performance, I want to show you what TensorFlow Lite can do in practice.

  • Let's please roll the video. So this is a simple demo application which is running the MobileNet classification model, which we trained on common office objects.

  • And as you can see, it's doing a good job detecting them.

  • Even this TensorFlow logo that we trained this model on.

  • Like I said, it's cross platform.

  • So it's running on iOS as well as Android, and we are also running it here on Android Things.

  • This was a simple demo.

  • We have more exciting demos for you later on in the talk.

  • Now let's talk about production use cases.

  • I'm happy to say that we've been working with partner teams inside Google to bring TensorFlow Lite to Google apps.

  • So portrait mode on the Android camera, "Hey Google" in the Google Assistant, and Smart Reply are some features which are going to be powered by TensorFlow Lite in the next few months.

  • Additionally, TensorFlow Lite is the machine learning engine which is powering the custom model functionality in the newly announced ML Kit.

  • And for those of you that may have missed the announcement, ML Kit is a machine learning SDK.

  • It exposes both on-device and cloud-powered APIs for machine learning, as well as the ability to bring your own custom models and use them.

  • These are some examples of apps that are already using TensorFlow Lite via ML Kit. PicsArt is a really popular photo editing and collage making app, and VSCO is a really cool photography app.

  • So back to TensorFlow Lite and what is currently supported.

  • So we have support for 50 commonly used operations which developers can use in their own models.

  • I will point out here that if you need an op which is not currently supported, you do have the option of writing what we call a custom op and using that, and later on in this talk Andrew will show you how you can do that.

  • Op support is currently limited to inference; we will be working on adding training support in the future.

  • We support several common popular open-source models, as well as the quantized counterparts for some of them.

  • And with this, I'm going to invite my colleague Andrew to talk to you about how you can use TensorFlow Lite in your own apps.

  • Thanks, Sarah.

  • So now that you know what TensorFlow Lite is, what it can do, and where it can be run, I'm sure you want to know how to use it. We can break that up into four important steps.

  • The first one, and probably the most important, is to get a model. You need to decide what you want to do.

  • It could be image classification.

  • It could be object detection.

  • Or it could be even speech recognition.

  • Whatever that model is, you need to train it yourself.

  • You can do that with TensorFlow, just like you train any other TensorFlow model.

  • Or you can download a pre-trained model, if you're not ready to make your own model yet or if an existing model satisfies your needs.

  • Second, you need to convert your model from TensorFlow to TensorFlow Lite, and we'll show some examples of how to do that in a second.

  • Third, if there are any custom ops that you need to write: this could be because you want to spot-optimize something with some special instructions you know about.

  • Or it could be that you're using a piece of functionality that we do not yet support, like a specialized piece of signal processing.

  • Whatever it might be, you can write your own ops.

  • This may not be necessary, of course.

  • The last step is to write an app, and you're going to use whatever client API is appropriate for the target platform.

  • So let's dive into some code: converting your TensorFlow model.

  • Once you're done with TensorFlow training, you typically have a SavedModel, or you might have a GraphDef.

  • What you need to do first is put this through the converter.

  • So here I'm showing how to do this within Python.

  • So if you download the normal TensorFlow toolchain that's precompiled, like the pip package, you're able to run the converter. It takes the SavedModel directory or frozen GraphDef in, you specify a file name for the TFLite file that you want, and it will output a FlatBuffer on disk that you can then ship to whatever device you want.
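
A minimal sketch of that conversion in Python; the converter class has been renamed across TensorFlow versions, and the paths here are hypothetical:

```python
import tensorflow as tf

# Load a trained SavedModel and convert it to the TensorFlow Lite format.
converter = tf.lite.TFLiteConverter.from_saved_model("my_saved_model")
tflite_model = converter.convert()

# The result is a FlatBuffer; write it to disk and ship it with your app.
with open("converted_model.tflite", "wb") as f:
    f.write(tflite_model)
```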

  • Now, how might you get it to the device?

  • You could put it into the package.

  • You could also, say, distribute it through a cloud service, where you can update your model on the fly without updating your core application.

  • Whatever you want to do is possible.

  • So next, once you've converted... well, you might actually run into some issues doing conversion, because there are a couple of things that could go wrong.

  • So the first one is you need to make sure that you have a frozen GraphDef or SavedModel.

  • Both of these are able to get rid of the parts of the graph that are used for training.

  • These are typically things like variable assignment, variable initialization, and optimization passes; these are not strictly necessary for doing inference, that is, prediction.

  • So you want to get those out of the graph, because we don't support those operations right now; we want to have the smallest version of the runtime that can be distributed, to keep your binary size small.
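
For a TF1-style graph, "freezing" roughly means folding the trained variable values into constants so the training-only ops can be stripped; a hedged sketch using the (now deprecated) compat API, where the output node name is hypothetical:

```python
import tensorflow.compat.v1 as tf1
tf1.disable_eager_execution()

# Build or load your TF1-style graph inside a session, then fold the trained
# variable values into constants; training-only ops can then be dropped.
with tf1.Session() as sess:
    sess.run(tf1.global_variables_initializer())
    frozen_graph_def = tf1.graph_util.convert_variables_to_constants(
        sess, sess.graph_def, output_node_names=["output"])  # "output" is hypothetical

with open("frozen_graph.pb", "wb") as f:
    f.write(frozen_graph_def.SerializeToString())
```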

  • The second thing that you need to do is make sure that you write any custom operators that you need.

  • And now we'll go into a little bit of an example of doing that.

  • Well, before that, let me tell you one more thing, which is that we also have some visualizers that let you understand the model that you've transformed and the transformation process.

  • So take a look at those; they're linked off of the documentation.

  • So let's get into writing a custom op.

  • So what kind of op might we need?

  • Well, here I have an example that's a little bit silly, but it's to return pi.

  • So the important thing when you write an op is that you need to implement four C functions.

  • So we have a C API for defining operations.

  • And the reason we do this is that all of our operations are implemented in this way, so they can eventually run on devices that only support C.

  • But you can write kernels in C++ as well.

  • In this case, what I'm doing is I'm ignoring the input tensors, and I'm producing an output tensor which is pi. Now, if you had input tensors and you wanted to make an output tensor, you could also read the input tensors and then say, oh, multiply by three, and now you have a multiply-by-three operation.

  • This is going to be application dependent.

  • And of course, as I said before, you don't always need to do this.

  • I'm just laying this out here to show that if there's some functionality that you need, we are extensible.

  • Okay, once you've converted your model, you need to use the client API. Let me start with the C++ API, but we have other language bindings as well that I'll get to.

  • But in any of the bindings it's going to follow the same basic pattern.

  • The pattern is create an interpreter and load the model.

  • Fill in your data, execute the model and read back your data.

  • So it's very simple.

  • So in the C++ API, the first thing you do is create a model object.

  • This is given the file name of the TensorFlow Lite file, and it creates an object that is going to hold that model and mmap it.

  • So, as I said before, we use FlatBuffers, and the reason why is that we can mmap the buffers, which means that there is effectively zero latency to start running the model.

  • Okay, second, if you have any custom operations, you can register them.

  • So basically, at this phase, you're deciding which operations to include in your runtime.

  • By default, we provide a built-in ops resolver that includes all of our default operations.

  • You might also use the selective registration that we alluded to before, where you include only a subset of the operations.

  • In this case, you might provide a minimal resolver.

  • And if you wanted to use the custom operation that we had before, you would create a custom resolver that would tell TensorFlow Lite how to find your custom operation.

  • So now we know what our ops are and where to get the code for them, and we know our model.

  • Now we need to create an interpreter object.

  • So we take the pair of model and resolver, put them together, and it returns an interpreter.

  • This interpreter is going to be our handle for doing our execution.

  • So the next step is we're going to perform our execution.

  • But before we can do that, we need to fill the buffer.

  • So if you have a model like a classification model, that is something that takes an image in.

  • Where are you going to get that image?

  • Well, the obvious place you might get it would be from your device's storage, if it's an image file; but it could also commonly be a camera. Whatever it might be, you produce a buffer.

  • In this case, it's going to be a float* buffer, and you fill it into our input buffer.

  • And once you fill this buffer, you're ready to run.

  • So we fill the buffer; TensorFlow Lite has all the information it needs to run the execution, and we just call Invoke.

  • Now it's going to block until that execution is done, and then we're going to be able to read the output in an analogous way to our input.

  • That is, we can get a float* buffer out, which could represent the class numbers.

  • And then you're free to do with that data whatever you want.

  • So, for example, in the image classification app that we showed before, you would read that index out, map it back to a string, and then put it into your GUI's display.

  • Great.

  • So now we know how to use C++.

  • What if you're using another platform, for example, Raspberry Pi?

  • On Raspberry Pi, the most common thing to use is probably Python, and again it's going to follow the same basic pattern.

  • First we create an interpreter object.

  • The interpreter object is now our handle.

  • How do we feed data?

  • Well, since it's Python, we can use NumPy arrays.

  • And this is really convenient, because if you need to do pre-processing or post-processing, you can do it with the primitives that you're familiar with.

  • And this is kind of a theme that goes on: we want to keep our bindings as idiomatic as possible in the language that they're in, and also keep them performant.

  • So in this case, we put in a NumPy array and we take out a NumPy array.
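
A sketch of that fill-invoke-read pattern with the Python interpreter API (the model path and the zeroed input here are placeholders):

```python
import numpy as np
import tensorflow as tf

# Create the interpreter from a converted model and allocate its tensors.
interpreter = tf.lite.Interpreter(model_path="converted_model.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Fill the input with a NumPy array of the right shape and dtype.
input_data = np.zeros(input_details[0]["shape"], dtype=np.float32)
interpreter.set_tensor(input_details[0]["index"], input_data)

interpreter.invoke()  # run inference; blocks until execution is done

# Read the output back out as a NumPy array.
output_data = interpreter.get_tensor(output_details[0]["index"])
print(output_data)
```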

  • So that's Python.

  • What if you're writing an Android app, or you want to write an Android Things application? Then you might use the Java API. So in this case, it's the same thing.

  • You build an interpreter and give it the file name of the model file.

  • It might be from a resource if you're doing an Android application, and then finally you're going to fill the inputs in and call run.

  • So one of the things that we did for the Java API: we know that many Java programmers don't really want to deal with building their own native library.

  • So in that case, you can just use our Gradle file here, which will include our precompiled version of TensorFlow Lite.

  • You don't have to download our source code. And even for the tooling parts, where you do the conversion from TensorFlow to TensorFlow Lite, you can download the precompiled version of TensorFlow, as I alluded to before.

  • Great.

  • So, what if you're doing iOS? Well, in that case, you can use the C++ API.

  • You can also use the Objective-C API, but again, we provide a precompiled binary in the form of a CocoaPod.

  • Okay, so now that you know how to use TensorFlow Lite, I want to tell you a little bit about what's going to be coming up in TensorFlow Lite. One thing that we've been asked for a lot is adding more operations.

  • So the more operations that we add, the more models can be run from TensorFlow out of the box.

  • The other thing that happens with machine learning that's often difficult is that researchers come up with new techniques all the time.

  • And that means that TensorFlow is always adding operations.

  • That means that we're going to continue to follow TensorFlow as it adds important operations, and add them into TensorFlow Lite as well.

  • Okay, the second thing we're going to do is we're going to improve the tooling, provide better documentation and tutorials, and try to focus on ease of use.

  • So it's really easy for you to understand, with end-to-end examples, how to integrate TensorFlow Lite.

  • And the third thing, which Sarah already mentioned but I'll mention again, is that we're excited about on-device training. On-device training is really exciting because it allows us to refine a model based on a user's experience.

  • It allows us to decouple that experience from going to the cloud.

  • So if they're disconnected, we can continue improving the model.

  • So there are a lot of requests for this.

  • This, of course, will require more and more computation on the device.

  • But we're excited about upcoming hardware accelerators that will make this more and more possible.

  • Okay, one more question before we get into some exciting demos: when should I use TensorFlow Lite?

  • So as we alluded to before, we're starting to use TensorFlow Lite for our first-party applications, and third-party applications are also using it.

  • That means that what we're doing moving forward is that we're going to make TensorFlow Lite our standard solution for running ML on small devices and mobile devices.

  • TensorFlow Lite currently supports a subset of TensorFlow ops.

  • This means that our recommendation is that you should use TensorFlow Lite if you can, and let us know about any missing functionality you need.

  • But we're not quite done: you probably want to see our demos.

  • So with that, I want to show you a video of retrained models.

  • We showed you that TensorFlow logo being recognized.

  • This is a common theme that we get, which is that people like our pre-trained examples, like MobileNet.

  • But they may not have an application where they need to tell the difference between five dog breeds and many zoo animals.

  • They might have an office environment where they have markers and whiteboards.

  • And in fact, as we were testing the app, we found we had this issue too.

  • It's like, we don't have the classes that are in these pre-trained models.

  • So one of the great things that one of our other TensorFlow team members created was something called TensorFlow for Poets, and there was a codelab about that, and it's available online as well.

  • And it basically allows you to take a pre-trained image model that has really good detection ability and put your own classes into it.
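
The retraining idea, sketched in modern Keras terms rather than the codelab's exact script; the class count and the commented-out training data are hypothetical:

```python
import tensorflow as tf

# Reuse a pre-trained MobileNet as a frozen feature extractor and train
# only a small classification head on your own classes.
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, pooling="avg")
base.trainable = False  # keep the pre-trained features fixed

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(4, activation="softmax"),  # e.g. scissors, post-its, logo, other
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(my_images, my_labels, epochs=10)  # your own captured examples
```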

  • And I want to show you a demo app that we created that runs on the PC and creates TensorFlow Lite models for you.

  • So can we go to the video?

  • Okay, so we showed you before.

  • Can we recognize scissors and post it notes?

  • Well, let's try it out.

  • You always want to try these models.

  • Check.

  • Okay.

  • The scissors looks good, right?

  • Okay.

  • Great.

  • Post-it notes also look good.

  • But what if we had another object?

  • An object that's, you know, more common?

  • More important, like this metal TensorFlow logo.

  • This happens in everyday life, right?

  • Okay, let's go take a look at how this does.

  • Well, it's labeled as other.

  • That's not very good.

  • But the great thing about machine learning is that we can fix this, and the way we fix it is we add data.

  • So we have our application.

  • We've gone to our training tab now, and now we're going to define a class called tensorflow, and this is basically short for TensorFlow logo.

  • And now, from our webcam, we're going to capture a couple of different perspectives, as many as we can.

  • And ideally, you would take it on different backgrounds, so it doesn't associate the background with it being a TensorFlow logo.

  • Then I click the train button, and right now it's using TensorFlow to train the model, and you can see it's converging to a good validation accuracy.

  • It's going to reload the model, and we're testing it in TensorFlow Lite running on the PC right now.

  • And we see that it's recognizing the TensorFlow logo correctly.

  • So it's that fast and easy. But also, we can take that model, and we can move to Android and iOS and use the exact same model and update it.

  • So thanks.

  • Now let's move to a live demo.

  • So I'm gonna go over here to the podium.

  • All right?

  • Okay.

  • So, classification: what I just showed you is kind of this idea that you put an image in and you get classifications out.

  • But what if you have multiple objects in the scene, or you have something in the corner of the image?

  • You also want to know where in the scene that object is, and that's where this model called single-shot detection enters.

  • It's a type of model, and it turns out that our friends in TensorFlow research released a package called Object Detection as part of the TensorFlow models repository, and that basically allows you to use their pre-trained model that recognizes many classes.

  • So what I've done is load that onto a small device.

  • We've shown you a lot of things about mobile devices; I want to show you another small device.

  • This is a Raspberry Pi. So the Raspberry Pi is a really cool example of a device, because it's very cheap and easy to get.

  • So any high school student could have one of these.

  • You can have many of these and just use them for a dedicated project.

  • But the other great thing about them is not only are they relatively powerful, but they're also able to interface with other hardware.

  • They have GPIO pins, and this can be capitalized on in a number of different ways.

  • But one way is to run Linux, and that's what we're doing here.

  • But you can also use Android Things, and as you can see, the sandbox has many cool examples doing that.

  • So you could also do this with Android Things.

  • So in this case, I have the system board right here, and it's connected to a motor controller.

  • And this motor controller is just a microcontroller that interfaces to servo motors.

  • The servo motors can go up and down and left and right, and they can basically aim the camera.

  • So now what we're going to do is load our object detection model onto this device, and we're going to actually run it in order to recognize different objects.

  • So let's go to the demo feed, please.

  • And we can see my app.

  • You can tell by the beautiful nature of this app that I'm not a great app developer, but this is what I can do on a weekend.

  • So give me a little bit of slack.

  • Okay, so here, if we hold up the apple, it's recognizing the apple, and it's telling us, you know, with what probability and where the object is. Now, that's all good and fine.

  • But then we couple it with the ability to move: I'm going to turn on the motors now, and now I'm going to bring back the apple, and what I'm going to do is move the apple around in the screen, and it's going to try to keep it centered.

  • So as I move this apple, it's basically going to try to keep it centered.

  • So it's like a little virtual camera person.

  • And this works on other objects, like this banana here.

  • Hopefully... oh, there we go, and it's going to keep that centered.

  • And if you put two objects in the screen, it's going to try to keep them both in.

  • Okay, so we get a little bit of false detection, but basically it's going to try to keep them both centered.
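
The tracking behavior itself can be as simple as a proportional controller on top of the detector's bounding boxes. This is a hypothetical sketch, not the demo's actual code; the box format, gain, and servo sign conventions are all assumptions:

```python
# Hypothetical proportional controller: nudge pan/tilt so a detected bounding
# box (normalized [0, 1] image coordinates) stays centered in the frame.
def track(box, pan, tilt, gain=30.0):
    box_cx = (box["xmin"] + box["xmax"]) / 2.0
    box_cy = (box["ymin"] + box["ymax"]) / 2.0
    pan += gain * (0.5 - box_cx)    # object left of center: pan toward it
    tilt += gain * (0.5 - box_cy)   # object above center: tilt toward it
    return pan, tilt                # new servo angles, in degrees

# Each frame: run the SSD model, pick the highest-scoring box, update servos.
# pan, tilt = track(best_box, pan, tilt)
```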

  • So this is a really fun application, and I bet you can come up with many different applications that are really exciting to do with this type of technology.

  • So can we go back to the slides again?

  • You know, like I said, this is basically what I could do in a weekend.

  • But I imagine great app developers and people with a lot of creativity about connecting devices and connecting software could do many interesting things.

  • So what I want to do now is give you a summary of TensorFlow Lite; you've seen a lot about it.

  • Basically, we feel TensorFlow Lite makes on-device ML small, fast, and easy.

  • And we think that you're going to find it very useful in all of your applications.

  • We're excited to see what you build.

  • Come talk to us.

  • I'm going to be in office hours at 12:30.

  • You can come talk to me.

  • In addition, you can come to our sandbox if you haven't already.

  • And we have, of course, the examples that I showed you here we have the tracking camera.

  • We also have the object classification on mobile devices.

  • But another cool thing that we have is the donkey cars.

  • And this was done by a group outside of Google, and we converted them over to TensorFlow Lite.

  • And we are excited to say that their application works really well with TensorFlow Lite as well. So with that, I hope you check these things out.

  • I want to tell you that if you want to get started, you can go to our documentation page.

  • You can go to the tensorflow.org page, and there is a TFLite page where you can find more information about it.

  • In addition, our code is all open source.

  • You can get it on GitHub.

  • You can download it, modify it, submit a pull request and, of course, file any issues that you have while using it.

  • In addition, if you want to talk about TensorFlow Lite, talk about your applications, or ask us about feature requests, please write to our mailing list.

  • This community is really exciting.

  • We found that in open-sourcing TensorFlow, we got a lot of excitement and a lot of interest, and we made it a much better piece of software to use for everyone, both people inside Google and outside.

  • Cool.

  • And we hope that you'll engage with TensorFlow Lite in the same way that TensorFlow has been engaged with.

  • So with that, I want to thank you for your attention, for coming to I/O, and for listening to our talk about TensorFlow Lite.

  • I also want to thank our Google partners.

  • This product didn't come about in isolation.

  • It came from all of our experience building mobile apps with machine intelligence. As we gained experience, we found that there was a common need, and that was the genesis of TensorFlow Lite.

  • So all of our partners provided applications, provided feedback, even provided code, and helped with our models.

  • So thank you so much to you and to them, and enjoy the rest of I/O.
