
  • [MUSIC PLAYING]

  • GAL OSHRI: Hi, everyone.

  • My name is Gal Oshri, and I'm here

  • to talk to you about TensorBoard and what's

  • new in TensorFlow's visualization toolkit.

  • So how many of you have used TensorBoard before?

  • Awesome.

  • So as you know, TensorBoard helps

  • you visualize a lot of the data that comes out of TensorFlow.

  • It has a wide variety of features,

  • but today we're going to talk about a couple

  • of new additions.

  • So I'm actually going to switch over to a demo.

  • So this is Colab, a service from Google Research that

  • makes it really easy to get started with TensorFlow.

  • If you've seen the TensorFlow documentation,

  • you've probably used Colab.

  • A few minutes before the demo, I made sure

  • that we have installed TensorFlow 2.0

  • Alpha and a couple of other setup steps,

  • so we don't have to do them right now.

  • We're going to use the Fashion-MNIST

  • dataset, one that I'm sure you've never seen before.

  • And we're going to train a really simple Keras Sequential

  • model, one that just has a few layers, including

  • dense and dropout layers.

  • We're going to train it with the fit API

  • and give it the TensorBoard callback so that we

  • make sure that we log the right data to visualize

  • in TensorBoard.
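A minimal sketch of the setup described above, assuming TensorFlow 2.x; the layer sizes, dropout rate, epoch count, and log directory are illustrative choices, not values stated in the talk:

```python
import tensorflow as tf

# Load Fashion-MNIST and scale pixel values to [0, 1].
(x_train, y_train), (x_val, y_val) = tf.keras.datasets.fashion_mnist.load_data()
x_train, x_val = x_train / 255.0, x_val / 255.0

# A simple Sequential model with dense and dropout layers.
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10, activation='softmax'),
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

# The TensorBoard callback logs the metrics (and graph) to visualize later.
tensorboard_cb = tf.keras.callbacks.TensorBoard(log_dir='logs/fit')
model.fit(x_train, y_train, epochs=5,
          validation_data=(x_val, y_val),
          callbacks=[tensorboard_cb])
```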

  • Now, if you've used Colab before,

  • you'll know that at this stage, we

  • would need to download the logs to our local machine,

  • make sure TensorBoard is set up, point it at those logs,

  • and then look at them, and then go back to Colab again.

  • That's not very convenient.

  • So what we've done is enabled showing TensorBoard directly

  • within Colab.

  • You'll notice that the way that we start TensorBoard here

  • is exactly the same as in the command line.

  • It's the same command, just has the magic function in front

  • of it.

  • The same thing will also work in Jupyter Notebooks.
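For reference, the two invocations differ only by the magic prefix (the log directory name here is an assumption):

```shell
# From a terminal:
tensorboard --logdir logs

# In a Colab or Jupyter cell, after running `%load_ext tensorboard`:
%tensorboard --logdir logs
```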

  • So let's see what's different.

  • Well, first of all, when using the Keras callback,

  • we now have train and validation runs showing up on the same charts

  • to make it much easier to compare them on accuracy, loss,

  • and other metrics.

  • This makes it easier to detect things like overfitting.

  • In the graphs dashboard, while seeing

  • all of the ops and auxiliary information

  • is useful for many scenarios, sometimes you just

  • want to see the model that you created in Keras.

  • What are the layers in it?

  • So you can now switch to the Keras tag

  • and just view that model.

  • We can expand this to see the actual layers that we added.

  • There are several other APIs for using TensorBoard

  • within Notebooks that let you change the height of the cell

  • as well as list the active instances

  • within your Colab notebook.
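The helpers referred to here live in the `tensorboard.notebook` module; a brief sketch, where the port and height values are illustrative:

```python
from tensorboard import notebook

# List the TensorBoard instances currently running in this notebook kernel.
notebook.list()

# Re-display an instance on a given port with a custom cell height.
notebook.display(port=6006, height=1000)
```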

  • But we're not going to look at that today

  • because I want to switch gears and talk about hyperparameter

  • tuning.

  • Now the model that we created was extremely simple,

  • and we've picked a couple of hyperparameters for it--

  • the dropout rate, the number of units in the dense layer,

  • and the optimizer.

  • Now, if we really cared about that model's performance,

  • we're going to want to try out different values.

  • We're going to want to experiment with several of them

  • and see which one leads to the best model.

  • The way this would look today in TensorBoard

  • is that you might include all of that information

  • about what the values are into the run names,

  • as you can see here in the bottom left.

  • You can then filter using regular expressions

  • or go to the chart and hover over the lines

  • to find which one had the best result and identify

  • which values were good.

  • It's not the best experience, so let's see

  • if we can do something better.

  • Now, what I'm going to show next is

  • something that's going to change,

  • both in terms of the APIs and the UI.

  • But it is available in the TF 2.0 Alpha

  • today, so you can try it out.

  • We're going to do several additional imports

  • and define which values of the hyperparameters we want to try.

  • We'll start out with just a few so that we don't take up

  • too much time during the demo.

  • We're going to log a summary that tells TensorBoard what

  • were the hyperparameters that we care about

  • and what were the metrics.

  • We then wrap the existing training code

  • that we had, just to make sure that we

  • log the accuracy at the end on the validation set

  • and also tell TensorBoard that the experiment

  • has started and finished.
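The steps just described use the hparams plugin API shipped in the TF 2.0 Alpha (and, as noted, subject to change). A rough sketch, where the hyperparameter names, value ranges, and the `train_and_evaluate` function are illustrative assumptions:

```python
import tensorflow as tf
from tensorboard.plugins.hparams import api as hp

# Define the hyperparameters and the candidate values to try.
HP_NUM_UNITS = hp.HParam('num_units', hp.Discrete([16, 32]))
HP_DROPOUT = hp.HParam('dropout', hp.RealInterval(0.1, 0.2))
HP_OPTIMIZER = hp.HParam('optimizer', hp.Discrete(['adam', 'sgd']))
METRIC_ACCURACY = 'accuracy'

# Log a summary telling TensorBoard which hyperparameters and metrics we care about.
with tf.summary.create_file_writer('logs/hparam_tuning').as_default():
    hp.hparams_config(
        hparams=[HP_NUM_UNITS, HP_DROPOUT, HP_OPTIMIZER],
        metrics=[hp.Metric(METRIC_ACCURACY, display_name='Accuracy')],
    )

def run(run_dir, hparams):
    """Wrap one training run: record its hparams, train, log final accuracy."""
    with tf.summary.create_file_writer(run_dir).as_default():
        hp.hparams(hparams)  # records this run's hyperparameter values
        accuracy = train_and_evaluate(hparams)  # hypothetical training function
        tf.summary.scalar(METRIC_ACCURACY, accuracy, step=1)
```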

  • This time, we're going to start TensorBoard

  • before doing our training because, in most cases,

  • your training will take longer than one minute

  • and you want to view the TensorBoard while your model is

  • training to understand its progress.

  • So we've started it.

  • It has no data.

  • But once a couple of epochs of training have finished,

  • we can refresh it and start to see something.

  • Now you'll notice at the top, we now

  • have the HPARAMS dashboard, which shows us,

  • at first, a table where each run is represented by a row,

  • and we have columns for each of the hyperparameters

  • and metrics.

  • As the runs finish, the table will become populated

  • with them.

  • On the left, we have the ability to filter and sort.

  • So we can say that we don't actually

  • care about the number of units or we only

  • want to see experiments where the accuracy is at least 85.

  • So before we proceed further, I want

  • to actually cheat a little bit and log and access

  • some completed experiments where we've

  • run a wider range of combinations of values

  • for the hyperparameters.

  • Now while this is loading, I want

  • to point out that I'm pointing TensorBoard directly

  • at a folder in my Google Drive.

  • So I had all my TensorBoard logs maybe

  • on another machine, uploaded them to my Google Drive,

  • and then I can access them directly

  • within my Colab Notebook.

  • So this takes a moment to load, but hopefully

  • when I refresh we now see it.

  • And I can switch over to the HPARAMS dashboard

  • and now see a more complete set of experiments.

  • I can switch over to the Parallel Coordinates

  • View, which shows me a visualization

  • where we have an axis for each hyperparameter and each metric.

  • Each run is represented by a line that

  • passes through all these axes at the points

  • corresponding to its hyperparameter and metric values.

  • I can click and drag on any axis to select a particular range.

  • So in this case, I've selected the experiments

  • with a relatively high accuracy, and they become highlighted

  • in this visualization.

  • I can immediately see that all these experiments

  • used the Adam Optimizer as opposed to SGD

  • and had a relatively high number of units in the dense layer.

  • This gives me some great ideas about what

  • I can experiment with next.

  • I can also view the scatterplot view,

  • which shows me the correlations between the different

  • hyperparameters and metrics.

  • I can, again, select a region here

  • to view those points across the other charts.

  • So just to summarize, we've looked at TensorBoard

  • in Colab, an easier way to compare

  • train and validation runs, visualizing

  • the Keras conceptual graph, and better hyperparameter

  • tuning with the HPARAMS dashboard.

  • All of this information is available as documentation

  • at tensorflow.org/tensorboard.

  • We also have a demonstration upstairs,

  • and we'd love to hear from you.

  • [MUSIC PLAYING]
