
  • [MUSIC PLAYING]

  • EWA MATEJSKA: Hi, everybody.

  • Thank you for joining us.

  • I'm Ewa Matejska.

  • I'm a technical program manager with the TensorFlow team.

  • JIHO CHOI: Hi, I'm Jiho Choi, a software engineer at Google.

  • And I am very excited to show you the new TensorFlow

  • profiler.

  • EWA MATEJSKA: Awesome.

  • So what is it?

  • Is it new this year?

  • JIHO CHOI: Yes.

  • This is something new we are releasing at this TensorFlow

  • Dev Summit.

  • So we will want to run our model faster, right?

  • And to make it faster, it is essential to collect

  • the performance data.

  • And this new TensorFlow profiler enables

  • you to profile your model and collect such performance data,

  • and provides you with a set of tools to find and fix

  • performance problems.

  • EWA MATEJSKA: What is it supported on?

  • JIHO CHOI: It supports CPU, GPU, and TPU.

  • EWA MATEJSKA: Oh, so everything.

  • Great.

  • I am excited to see it.

  • So how do I use it?

  • JIHO CHOI: So I prepared a quick demo

  • to show how to use the profiler.

  • EWA MATEJSKA: So I see a lot of code here.

  • What is this?

  • What model are you using?

  • What data set?

  • JIHO CHOI: Yeah.

  • So for this demo, we are going to use the MNIST data set.

  • And we're going to do some data preprocessing like normalization, shuffling, and batching.

  • And the model itself is going to be a very simple sequential model with three layers.
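
A rough sketch of the kind of setup being described, with hypothetical layer sizes, buffer size, and batch size (this is not the exact code from the demo):

    import tensorflow as tf

    # Load MNIST and build a tf.data pipeline with the preprocessing
    # mentioned above: normalization, shuffling, and batching.
    (x_train, y_train), _ = tf.keras.datasets.mnist.load_data()

    def normalize(image, label):
        # Scale pixel values from [0, 255] to [0, 1].
        return tf.cast(image, tf.float32) / 255.0, label

    train_ds = (tf.data.Dataset.from_tensor_slices((x_train, y_train))
                .map(normalize)
                .shuffle(10000)   # buffer size is an assumption
                .batch(128))      # batch size is an assumption

    # A very simple sequential model with three layers.
    model = tf.keras.models.Sequential([
        tf.keras.layers.Flatten(input_shape=(28, 28)),
        tf.keras.layers.Dense(128, activation='relu'),
        tf.keras.layers.Dense(10, activation='softmax'),
    ])
    model.compile(optimizer='adam',
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])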

  • So the data set and model are ready.

  • And you need to instrument your code

  • to actually use the profiler.

  • EWA MATEJSKA: So since you're instrumenting the code,

  • does that affect your runtime performance?

  • JIHO CHOI: No.

  • We measure the performance overhead.

  • And it is very minimal.

  • EWA MATEJSKA: Oh, fantastic.

  • JIHO CHOI: So, for this demo, we're

  • going to use the TensorBoard Keras callback.

  • And you can check out the other profiling modes in the guide we are publishing on the TensorFlow website.
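
Besides the Keras callback used in this demo, recent TF 2.x releases also expose a programmatic profiling mode; a minimal sketch, assuming a hypothetical log directory:

    import tensorflow as tf

    # Programmatic mode: profile an explicit region of the program.
    tf.profiler.experimental.start('logs/profile')  # log dir is an assumption
    # ... run the training steps you want to profile ...
    tf.profiler.experimental.stop()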

  • EWA MATEJSKA: I see you have sampling mode on the screen

  • there.

  • What is that?

  • JIHO CHOI: Oh, yeah.

  • So sampling mode enables on-demand profiling.

  • So for example, for already running jobs,

  • you can attach to it and then monitor the performance.

  • It is very useful, for example, for production jobs.

  • EWA MATEJSKA: What is the effect on your runtime performance

  • when you turn on sampling mode?

  • Same question, in context.

  • JIHO CHOI: Again, the performance overhead is really minimal.

  • EWA MATEJSKA: OK.

  • Excellent.
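
As a rough illustration of sampling mode: in recent TF 2.x releases a training job can start a profiler server that a client (or TensorBoard's capture button) attaches to on demand. The port, address, log directory, and duration below are assumptions:

    import tensorflow as tf

    # In the running training job: start a profiler server so that a
    # client can attach to it later and sample performance data.
    tf.profiler.experimental.server.start(6009)   # port is an assumption
    # ... training keeps running ...

    # From another process: attach to the running job and collect a
    # short trace on demand.
    tf.profiler.experimental.client.trace(
        'grpc://localhost:6009',  # address of the running job (assumption)
        'logs/profile',           # where to write the profile (assumption)
        2000)                     # capture duration in milliseconds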

  • JIHO CHOI: So this is how you add the Keras TensorBoard callback to use the profiler.

  • We create the TensorBoard callback object and then pass it to the model's fit function.
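
Continuing the earlier sketch, this is roughly what that looks like; the log directory is an assumption, and profile_batch controls which batch gets profiled:

    import tensorflow as tf

    # Create the TensorBoard callback (profiling is driven by
    # profile_batch) and pass it to model.fit().
    tb_callback = tf.keras.callbacks.TensorBoard(
        log_dir='logs',    # hypothetical log directory
        profile_batch=2)   # profile the second training batch

    # 'model' and 'train_ds' come from the earlier sketch.
    model.fit(train_ds, epochs=5, callbacks=[tb_callback])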

  • EWA MATEJSKA: Does that mean I'll

  • be then able to see the data in the TensorBoard?

  • JIHO CHOI: Yes.

  • Yes.

  • EWA MATEJSKA: OK.

  • Excellent.

  • JIHO CHOI: The profiler is available as a dynamic plug-in to TensorBoard.

  • So let's try running the model.

  • Here, I prepared the same code as I showed in the presentation.

  • And we can try running it.

  • And for the demo, I am only running

  • this model for five epochs.

  • EWA MATEJSKA: So we don't have to wait too long.

  • JIHO CHOI: Right.

  • EWA MATEJSKA: Yes.

  • JIHO CHOI: Then we need to start a TensorBoard instance to actually look at the profile.

  • EWA MATEJSKA: OK.

  • And you do that by-- how do you start the TensorBoard instance?

  • JIHO CHOI: I already installed TensorBoard on my laptop.

  • The guide on the TensorFlow website shows the instructions for how to download and install TensorBoard.

  • So now I've started the TensorBoard instance locally.

  • So let's go there.
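
One way to get a local TensorBoard instance running against the logs written above; the directory name is an assumption:

    from tensorboard import program

    # Launch a local TensorBoard instance programmatically, pointed at
    # the log directory used by the callback above.  From a terminal,
    # "tensorboard --logdir logs" does the same thing.
    tb = program.TensorBoard()
    tb.configure(argv=[None, '--logdir', 'logs'])  # 'logs' is an assumption
    url = tb.launch()
    print('TensorBoard listening on', url)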

  • EWA MATEJSKA: Oh.

  • And this is the data?

  • JIHO CHOI: Yeah.

  • This is some other data from TensorBoard.

  • And to see the profiler, we need to go to the profiler plug-in.

  • And this takes you to the overview page.

  • And it shows a summary of the performance of your model.

  • It shows the average step time and the step time breakdown.

  • And also, it provides some suggestions for the next step.

  • In this case, it says your model is highly input-bound.

  • And more than 40% of the step time

  • is spent on processing input.

  • And also, the overview page recommends you look at two tools, the Input Pipeline Analyzer and the Trace Viewer, to find out exactly what is going on.

  • EWA MATEJSKA: It's very actionable.

  • Awesome.

  • JIHO CHOI: Yeah.

  • So let's actually go to the Input Pipeline Analyzer.

  • And here, you can see that a lot of the input processing time is spent on data preprocessing.

  • And this tool also tells you how to fix this problem, and points you to different resources.

  • EWA MATEJSKA: Is the Input Pipeline Analyzer

  • part of the profiling tool?

  • JIHO CHOI: Yes.

  • It is one of the tools that we are providing inside the profiler.

  • So now we know that the input pipeline is the problem.

  • Then we can go to another tool provided by the profiler, the Trace Viewer, which shows you a very low-level trace of your model execution.

  • EWA MATEJSKA: And this is because we added the sampling

  • callbacks in the data?

  • JIHO CHOI: Yeah.

  • This is collected because we added the TensorBoard Keras callback.

  • So this tool points you to the different resources

  • on how to optimize your model.

  • And then following the suggestions,

  • we can optimize our data preprocessing.

  • And there are a few things we can do.

  • What I did here is I used the parallel map instead

  • of the sequential map.

  • And I also cached the data set.

  • And then I also used the prefetching

  • at the end of the data preprocessing.
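
A sketch of an optimized pipeline along those lines, reusing the normalize function and data from the earlier sketch (buffer and batch sizes remain assumptions):

    import tensorflow as tf

    AUTOTUNE = tf.data.experimental.AUTOTUNE

    # Optimized input pipeline: parallel map instead of a sequential
    # map, caching after preprocessing, and prefetching at the end.
    train_ds = (tf.data.Dataset.from_tensor_slices((x_train, y_train))
                .map(normalize, num_parallel_calls=AUTOTUNE)
                .cache()
                .shuffle(10000)
                .batch(128)
                .prefetch(AUTOTUNE))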

  • So I prepared the optimized code here.

  • And let's run the code.

  • Now again, we are only running for five epochs.

  • Then we can go back to our TensorBoard instance.

  • And you need to reload the page to get

  • the data I just collected.

  • So now you can see that the step time is reduced from 2.7 milliseconds to 2.1 milliseconds.

  • And the overview page says now your program is moderately input-bound.

  • It's only 10% input-bound.

  • So that's a huge improvement.

  • EWA MATEJSKA: So it's a significant improvement.

  • JIHO CHOI: Yeah.

  • EWA MATEJSKA: Yeah.

  • So it's nice that you can compare the data

  • in between the runs in this way.

  • JIHO CHOI: Yeah.

  • Everything you collected is going to show up in this dropdown menu.

  • So you can compare your new run with the previous runs.

  • EWA MATEJSKA: Awesome.

  • Thank you for showing us the TF Profiler tool.

  • Where can I learn a little bit more about it?

  • JIHO CHOI: Yeah.

  • So this demo is available as a Colab at the first link.

  • And we're also publishing the guide on the TensorFlow website.

  • And there are some related talks about the profiler at the Dev Summit that you can check out.

  • EWA MATEJSKA: Thank you, very exciting.

  • And thank you for joining us.

  • And the links will be posted separately.

  • [MUSIC PLAYING]
