
Let's walk through some examples. The first thing you need is data. You may want to validate results or test ideas on a common public dataset.

TensorFlow Datasets includes a large and rapidly growing collection of datasets you can get started with easily, and combined with tf.data it is simple to wrap your own data too. Here is a small sample of the datasets available; all of these and many more are included there.
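
As a minimal sketch, assuming the tensorflow and tensorflow-datasets packages are installed, loading a public dataset and wrapping it in a tf.data pipeline might look like this (MNIST and the pipeline parameters are only illustrative):

```python
import tensorflow as tf
import tensorflow_datasets as tfds

# Load a public dataset as a tf.data.Dataset of (image, label) pairs.
dataset = tfds.load("mnist", split="train", as_supervised=True)

# Standard tf.data pipeline: shuffle, batch, and prefetch.
dataset = (dataset
           .shuffle(10_000)
           .batch(32)
           .prefetch(tf.data.experimental.AUTOTUNE))
```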

Then with Keras, you can express models just the way you are used to thinking about them, and the standard packaged training loop is a simple call to model.fit, with evaluate available as well.
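
As an illustrative sketch of that packaged workflow (the architecture, dataset, and hyperparameters here are examples, not code from the talk):

```python
import tensorflow as tf

# Toy data: the built-in MNIST arrays, scaled to [0, 1].
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# The standard packaged training loop and evaluation.
model.fit(x_train, y_train, epochs=5)
model.evaluate(x_test, y_test)
```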

Since deep learning models are often computationally expensive, you may want to try scaling this across more than one device.
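
One way to do that, as a rough sketch, is Keras together with a tf.distribute strategy; MirroredStrategy below targets multiple GPUs on one machine, and the model definition is only a placeholder:

```python
import tensorflow as tf

# Create a distribution strategy for all GPUs visible on this machine.
strategy = tf.distribute.MirroredStrategy()

# Build and compile the model inside the strategy scope so its variables
# are mirrored across devices.
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])

# model.fit(...) then runs each training step on all available devices.
```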

Starting from a pre-trained model or component also works well to reduce some of this computational cost. TensorFlow Hub provides a large collection of pretrained components you can include in your model and fine-tune for your dataset.
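
A minimal sketch of that pattern, assuming the tensorflow_hub package; the module handle and the 5-class head below are illustrative examples, not taken from the talk:

```python
import tensorflow as tf
import tensorflow_hub as hub

# A pretrained image feature extractor from TensorFlow Hub; trainable=True
# allows its weights to be fine-tuned on your dataset.
feature_extractor = hub.KerasLayer(
    "https://tfhub.dev/google/imagenet/mobilenet_v2_100_224/feature_vector/4",
    input_shape=(224, 224, 3),
    trainable=True)

model = tf.keras.Sequential([
    feature_extractor,
    tf.keras.layers.Dense(5, activation="softmax"),  # hypothetical 5-class head
])
```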

Keras comes with everything you might need for a typical training job. Sometimes, though, you need a bit more control, for example when you are exploring new kinds of algorithms.

Let's say you wanted to build a custom encoder for machine translation; here is how you could do this by subclassing the model.
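
A rough sketch of such an encoder, subclassing tf.keras.Model; the layer sizes and the choice of an Embedding plus GRU are illustrative, not the exact code shown on stage:

```python
import tensorflow as tf

class Encoder(tf.keras.Model):
    """A simple sequence encoder for machine translation."""

    def __init__(self, vocab_size, embedding_dim=256, enc_units=1024):
        super(Encoder, self).__init__()
        self.embedding = tf.keras.layers.Embedding(vocab_size, embedding_dim)
        self.gru = tf.keras.layers.GRU(enc_units,
                                       return_sequences=True,
                                       return_state=True)

    def call(self, x, hidden=None):
        x = self.embedding(x)
        output, state = self.gru(x, initial_state=hidden)
        return output, state

encoder = Encoder(vocab_size=10_000)
```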

You can even customize the training loop to get full control over the gradients and the optimization process.
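
In TensorFlow 2.0 that is typically done with tf.GradientTape; here is a minimal sketch of a custom training step (the optimizer and loss are illustrative choices):

```python
import tensorflow as tf

optimizer = tf.keras.optimizers.Adam()
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy()

def train_step(model, images, labels):
    # Record the forward pass so gradients can be computed.
    with tf.GradientTape() as tape:
        predictions = model(images, training=True)
        loss = loss_fn(labels, predictions)
    # Full control over gradients and the optimization step.
    gradients = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(gradients, model.trainable_variables))
    return loss
```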

While training models, whether packaged with Keras or more complex ones, it is often valuable to understand the progress and even analyze the model in detail. TensorBoard provides a lot of visualizations to help with this, and it comes with full integration with Colab and other Jupyter notebooks, allowing you to see the same visuals there.
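
As a small sketch of that integration, logging from Keras and viewing the dashboards inline in a notebook might look like this (the log directory is arbitrary):

```python
import tensorflow as tf

# Log training metrics and graphs for TensorBoard.
tensorboard_cb = tf.keras.callbacks.TensorBoard(log_dir="./logs")
# model.fit(x_train, y_train, epochs=5, callbacks=[tensorboard_cb])

# In Colab or another Jupyter notebook, the same dashboards can be shown inline:
#   %load_ext tensorboard
#   %tensorboard --logdir ./logs
```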

All of these features are available in TensorFlow 2.0, and I am really excited to announce that our alpha release is available for you as of today. [Applause] Many of you in the room and across the world helped with lots of work to make this possible, and I would really like to take this moment to thank you all. Please give yourselves a round of applause. We really couldn't have done this without you.

In addition to all the great improvements we talked about, this release comes with a converter script and a compatibility module to give you access to the 1.x APIs. We are working toward a full release over the next quarter.
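
As a sketch of those two migration paths: existing 1.x symbols stay reachable under tf.compat.v1, and the bundled upgrade script can rewrite source files to the 2.0 API (file names below are illustrative):

```python
import tensorflow as tf

# 1.x-style code keeps working through the compatibility module.
tf.compat.v1.disable_eager_execution()
sess = tf.compat.v1.Session()

# The converter script shipped with the release rewrites 1.x source to 2.0:
#   tf_upgrade_v2 --infile old_model.py --outfile new_model.py
```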

There is a lot of work going on to make TensorFlow 2.0 work well for you. You can track the progress and provide feedback on the TensorFlow GitHub projects page.

You asked for better documentation, and we have worked to streamline our docs for APIs, guides and tutorials. All of this material will be available today on the newly redesigned TensorFlow.org website, where you will find examples, documentation and tools to get started. We are very excited about these changes and what's to come. To tell you more about improvements in TensorFlow for research and production, I would like to welcome Megan Kacholia on stage. Thank you.

Thank you. Thanks, Rajat. TensorFlow has always been a platform for research to production. We just saw how TensorFlow's high-level APIs make it easy to get started and build models; now let's talk about how it improves experimentation for research and lets you take models all the way from research through to production. We can see this in research paper publications over the past few years, shown in this chart.

Powerful experimentation really needs flexibility, and this begins with eager execution: in TensorFlow 2.0, every Python command is immediately executed. This means you can write your code in the style you are used to, without having to use session.run. This makes a big difference in the realm of debugging.
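
A tiny sketch of what that looks like, with operations returning concrete values immediately:

```python
import tensorflow as tf

# Operations run eagerly and return concrete values; no Session.run needed.
x = tf.constant([[1.0, 2.0], [3.0, 4.0]])
y = tf.matmul(x, x)
print(y)  # prints the actual result tensor
```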

As you iterate, you will want to distribute your code onto GPUs and TPUs, and for that we have provided tf.function, which turns your eager code into a graph, function by function. You get Python control flow, asserts and even print, but you can convert to a graph any time you need to, including when you are ready to move your model into production. Even with this, you will continue to get great debugging.
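
A minimal sketch of tf.function with Python-style control flow (the clipping logic is just an example):

```python
import tensorflow as tf

@tf.function
def clipped_sum(x):
    # Ordinary Python control flow; AutoGraph converts it when tracing the graph.
    total = tf.reduce_sum(x)
    if total > 10.0:
        total = tf.constant(10.0)
    return total

print(clipped_sum(tf.constant([1.0, 2.0, 3.0])))  # runs as a graph, prints 6.0
```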

Debuggability is great not just in eager mode; we have made improvements in tf.function and graphs as well. If you call a function with mismatched inputs, you get an error, and as you can see, we give the user information about the file and line number where the error occurs. We have made the error messages concise, easy to understand and actionable. We hope you enjoy these changes and that they make it easier to make progress with your models.

Performance is another area we know researchers, and indeed all users, care about. We have continued improving core performance in TensorFlow. Since last year, we have sped up training on eight NVIDIA Tesla V100s by roughly double, and with Intel MKL acceleration we have sped up inference by almost three times. Performance will continue to be a focus of TensorFlow 2.0 and a core part of our progress toward the final release.

TensorFlow also provides flexibility through many add-on libraries that expand and extend TensorFlow. Some are extensions that make certain problems easier, like tf.text for Unicode text. Others help us explore how to make machine learning models safer, like TensorFlow Privacy. You will also hear new announcements on reinforcement learning, and tomorrow we will discuss the new TensorFlow Federated library.

It is being applied to real-world applications as well. Here are a few examples from researchers at Google, where we see it applied to areas like making data centers more efficient; to apps like Google Maps, the one in the middle, which has a new feature called global localization that builds on Street View; and to devices like the Google Pixel, which uses machine learning to improve depth estimation and create better portrait-mode photos like the ones shown here.

To make these real-world applications a reality, you must be able to take models from research and prototyping through to launch and production. This has been a core strength and focus for TensorFlow. Using TensorFlow, you can deploy models on the platforms shown here, and models end up in a lot of places, so we want to make sure TensorFlow works across all of them: servers, cloud, mobile, edge devices, JavaScript and a number of other platforms. We have products for each of these.

TensorFlow Extended (TFX) is the end-to-end platform. In orange, shown here, you can see the libraries we have open-sourced so far. We are taking this a step further and providing components, built from these libraries, that make up an end-to-end platform. These are the same components used internally in thousands of production systems powering Google's most important products. Components are only part of the story: 2019 is the year we are putting it all together and providing you with an integrated end-to-end platform.

You can bring your own orchestrator, for example Airflow, or even raw Kubernetes. No matter which orchestrator you choose, the components integrate with the metadata store. This enables experiment tracking, model comparison and other things I am sure you will be excited about, and it will help you as you iterate. We have an end-to-end talk coming up from Clemens and his team, and they will take you on a complete tour of TensorFlow Extended to solve a real problem.

TensorFlow Lite is our solution for running models on mobile and IoT hardware. On-device models can be more responsive and keep user data on the device for privacy. Google and partners like iQIYI use it for all sorts of applications. TensorFlow Lite is also about performance: you can deploy models to CPUs, GPUs and even the Edge TPU. By using the latest techniques, adding support for OpenGL and Metal on GPUs, and tuning performance on Edge TPUs, we are constantly pushing the limits of what is possible. You should expect even greater enhancements in the year ahead.
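
As a rough sketch of the deployment path, assuming a trained tf.keras model named `model`, converting it to the TensorFlow Lite format looks like this:

```python
import tensorflow as tf

# Convert a trained tf.keras model (here called `model`) to TensorFlow Lite.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

# Write the flatbuffer so it can be bundled with a mobile or edge app.
with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```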

We will hear more details from Raziel and colleagues later on.

JavaScript is the number one programming language in the world, and until recently it hadn't benefited from all the machine learning tools. We released TensorFlow.js last year, and since then we have seen huge adoption in the JavaScript community, with more than 300,000 downloads and 100 contributors. We are just at the beginning, given how big the JavaScript ecosystem is. Today we are excited to announce TensorFlow.js 1.0.

We have a library of off-the-shelf models for common ML problems, and we are adding support for more platforms where JavaScript runs. A huge focus in TensorFlow.js is on performance improvements: compared to last year, MobileNet inference in the browser is now nine times faster. You will learn more about these advances in our talk later in the day.

Another language we are excited about is Swift. Swift for TensorFlow is reexamining what machine learning tooling can offer in terms of performance and usability, and with a new programming model it intends to push usability even further. We are announcing that Swift for TensorFlow is now at version 0.2, ready for you to try out and experiment with, and we are really excited to be bringing this to the community. In addition to version 0.2, we are excited to announce that Jeremy Howard of fast.ai is writing a new course, and Chris and Brennan will tell you a lot more about all of this later today.

To recap everything we have shown you: TensorFlow has grown into a full ecosystem, from research to production, from server to mobile. This growth has been fuelled by the community and wouldn't be possible without it. To talk about what we are planning for you, and with you, in 2019, I will hand it over to Kemal.

[Applause] Thank you, Megan. Hi, my name is Kemal and I am the product director. We are celebrating the most important part of what we are building, and that is the community. I love building developer platforms. I used to be a developer, and now I enable developers by building a better platform. We turned to the community and consulted with all of you on important product decisions. We received valuable feedback and couldn't have built 2.0 without you.

We created special interest groups, or SIGs, like SIG Networking and SIG TensorBoard, to name a few. SIGs are a great way for contributors to build the pieces of TensorFlow they care most about. We wanted to hear more about what you were building, so we launched the Powered by TensorFlow campaign, and we were amazed by the creativity of the products. After three years, our community is really thriving.

There are 70 machine learning GDEs (Google Developer Experts) around the world, 1,800 contributors and countless more doing amazing work to help make TensorFlow successful. On behalf of the whole TensorFlow team, we want to say a huge thank you. [Applause]

We have big plans for 2019, and I would like to make a few announcements. First, as our community grows, we welcome people who are new to machine learning, and it is really important to provide them with the best educational material. We are excited to announce two new online courses: one with deeplearning.ai, published on the Coursera platform, and the other with Udacity. The first batch of lessons is available right now and provides an awesome introduction to TensorFlow for developers, requiring no prior knowledge of machine learning. I highly encourage you to check them out.

Next, if you are a student, for the very first time you can apply to the Google Summer of Code program and work with the TensorFlow engineering team to help build a part of TensorFlow. I also talked about the Powered by TensorFlow campaign; building on that, we are excited to launch a 2.0 hackathon on DevPost to let you share your latest and greatest and win cool prizes. We are really excited to see what you are going to build.

Finally, as our ecosystem grows, we are having a second day of the summit, but we really wanted to do something more. We wanted a place where you can share what you have been building on TensorFlow.