
  • [MUSIC PLAYING]

  • MAYA GUPTA: Hey, good morning.

  • I'm Maya from Google AI.

  • And I'm excited to tell you about TF Lattice.

  • This is the same technology that we use at Google for many, dozens really, of our production models, where we care about the model's behavior and we need to make sure that we can guarantee it's working sensibly everywhere.

  • In particular, we're going to show how you can control your models to have monotonicity.

  • So let me just dive into an example.

  • The example I'm going to use here is a benchmark dataset from UCI, so you can play with this at home.

  • And we have plenty of data here, 24,000 training examples.

  • And there are more features in the dataset; we're just going to look at two, so that we can visualize what's happening.

  • And what I'm showing here is the actual training data.

  • So the two features are: are you single or are you married, and how many months has it been since you paid your bills?

  • We're trying to solve the regression problem of predicting whether you're going to default on your credit.

  • So what you can see is that while there are 24,000 samples, there are not that many people who haven't paid their bills for six months or more.

  • So we're starting to get a bit sparse in that part of the feature space.

  • And what you're seeing here is the training data: the mean and the confidence interval of the actual training data.

  • So what happens when we go and train a model?

  • Let's visualize what's going to happen.

  • Is it going to turn out to be important whether you're single or married?

  • Not really.

  • It doesn't matter if you're single or married; what really matters is how long it's been since you paid your bills.

  • But the big surprise here is that the model believes it's better if you haven't paid your bills for seven months than for six months: it predicts that you'll get a higher credit score.

  • So this is sad.

  • This is bad AI behavior.

  • And this is one of the reasons that some people don't like AI.

  • And here we're just looking at this in a two-dimensional space.

  • If we had the full 24-dimensional feature set, or 30 or 100 features, there are going to be pockets of the space where this sort of thing may be happening, and we may not even realize that we're getting this sort of bad, strange, possibly unethical behavior from our model.

  • So what can we do?

  • Well, we might say this is overfitting; let's just regularize.

  • But any of your standard regularization techniques are going to be problematic.

  • It's going to be hard to really fix this without hurting your model accuracy.

  • And it's going to be hard to even check if you really fixed it.

  • So what the TF Lattice package does is hit this problem exactly: it lets you have monotonicity as a regularizer.

  • When you put together your features, you can say: for this feature, it should only hurt my score if I haven't paid my bills for a longer time.

  • And so you can see here, this fixes that trend at the high numbers of months.

  • And we get the same model flexibility.

  • We actually get slightly better test accuracy, but we've solved the exact problem that we need to solve, and we now have a guarantee on what the model is doing.

  • And if we had 100 features, it would similarly guarantee that in all those pockets of 100-dimensional space, this was working correctly.
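
As a rough sketch of how this kind of constraint can be expressed with the TF Lattice Keras layers (the feature names, keypoints, and layer arguments here are illustrative assumptions, not code shown in the talk):

    import numpy as np
    import tensorflow as tf
    import tensorflow_lattice as tfl

    # Two hypothetical inputs for the credit-default example.
    is_married = tf.keras.Input(shape=(1,), name='is_married')
    months_overdue = tf.keras.Input(shape=(1,), name='months_overdue')

    # 1D piecewise-linear calibrators map each raw feature into the
    # lattice's input range [0, lattice_size - 1].
    cal_married = tfl.layers.PWLCalibration(
        input_keypoints=np.array([0.0, 1.0]),
        output_min=0.0, output_max=1.0,
    )(is_married)

    # A 'decreasing' calibrator feeding an 'increasing' lattice dimension
    # guarantees the score can only go down as months_overdue grows.
    cal_months = tfl.layers.PWLCalibration(
        input_keypoints=np.linspace(0.0, 11.0, num=12),
        output_min=0.0, output_max=4.0,
        monotonicity='decreasing',
    )(months_overdue)

    # A 2x5 lattice: a 10-parameter interpolated lookup table.
    score = tfl.layers.Lattice(
        lattice_sizes=[2, 5],
        monotonicities=['none', 'increasing'],
    )(tf.keras.layers.Concatenate(axis=1)([cal_married, cal_months]))

    model = tf.keras.Model(inputs=[is_married, months_overdue], outputs=score)
    model.compile(loss='mse', optimizer=tf.keras.optimizers.Adam(0.01))
    # model.fit([x_married, x_months], y, ...)  # ordinary Keras training

Training is standard Keras; the monotonicity guarantee comes from the layer constraints, not from the data.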

  • So how does the TF Lattice package do it?

  • Well, under the hood, the function class it's using is lattices, and these are just interpolated lookup tables.

  • This is possibly the oldest way humanity has for representing a function, right?

  • You've seen these in the back of your math textbooks.

  • You can find them in actuarial tables from the 1960s, et cetera.

  • So in a one-dimensional space, these are simply piecewise linear functions.

  • But with the TF Lattice package, you can also represent high-dimensional functions with these multidimensional lattices: multidimensional lookup tables.

  • Here's an example with just two features; we're building a function with 10 lookup table parameter values there.

  • And the lookup table parameter values are trained using empirical risk minimization.

  • It's all the same training that you would see with a DNN; it's just that the parameters now represent what's happening with our function.

  • And so it's much easier to control for monotonicity, because there's a lot of structure to the function space.
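
To make "interpolated lookup table" concrete, here's a tiny self-contained sketch of a 2x5 lattice evaluated by bilinear interpolation; the table values are arbitrary stand-ins for trained parameters (TF Lattice itself implements multilinear and simplex interpolation in any number of dimensions):

    import numpy as np

    # A 2x5 lattice: 10 lookup-table values, one per grid vertex.
    table = np.array([
        [0.0, 0.2, 0.5, 0.7, 0.9],   # e.g. is_married = 0
        [0.1, 0.3, 0.5, 0.8, 1.0],   # e.g. is_married = 1
    ])

    def lattice_eval(table, x0, x1):
        """Bilinearly interpolate the table at (x0, x1).

        x0 ranges over [0, rows - 1], x1 over [0, cols - 1];
        in 1D this would reduce to np.interp, a piecewise-linear function.
        """
        i0 = min(int(np.floor(x0)), table.shape[0] - 2)  # cell corner,
        i1 = min(int(np.floor(x1)), table.shape[1] - 2)  # clamped at edge
        f0, f1 = x0 - i0, x1 - i1                        # offsets in cell
        # Weighted average of the four surrounding vertices.
        return ((1 - f0) * (1 - f1) * table[i0, i1]
                + (1 - f0) * f1 * table[i0, i1 + 1]
                + f0 * (1 - f1) * table[i0 + 1, i1]
                + f0 * f1 * table[i0 + 1, i1 + 1])

    print(lattice_eval(table, 0.5, 2.25))  # a smooth value between vertices

Making such a table monotonic along one axis is just a set of inequalities between neighboring vertex values, which is why these constraints are easy to enforce during training.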

  • And with the TF Lattice package, you can pick and choose how much flexibility you want.

  • So on the one extreme, you can just make a linear model; it's very easy to make a linear model monotonic.

  • You can build generalized additive models, where you're using those 1D lattices.

  • You can do these multidimensional lattice models.

  • If you have a lot of features, you may want an ensemble of lattices.

  • And we've set this up with layers, so you can mix and match, plug them in with other TF layers, and create a sort of cascade of deep lattice networks.
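
To illustrate the simpler end of that spectrum and the mix-and-match idea, here's a hedged sketch of a calibrated, generalized-additive-style model that combines TF Lattice calibrator layers with a plain Keras Dense layer (feature names and keypoints are again illustrative assumptions):

    import numpy as np
    import tensorflow as tf
    import tensorflow_lattice as tfl

    # Calibrate each feature with its own 1D piecewise-linear function,
    # then combine the calibrated features linearly.
    inputs, calibrated = [], []
    for name, keypoints, mono in [
        ('is_married', np.array([0.0, 1.0]), 'none'),
        ('months_overdue', np.linspace(0.0, 11.0, 12), 'decreasing'),
    ]:
        inp = tf.keras.Input(shape=(1,), name=name)
        inputs.append(inp)
        calibrated.append(tfl.layers.PWLCalibration(
            input_keypoints=keypoints,
            monotonicity=mono,
        )(inp))

    # An ordinary Keras Dense layer on top; constraining its kernel to be
    # non-negative preserves each calibrator's monotonicity end to end.
    score = tf.keras.layers.Dense(
        1, kernel_constraint=tf.keras.constraints.NonNeg(),
    )(tf.keras.layers.Concatenate(axis=1)(calibrated))

    gam = tf.keras.Model(inputs=inputs, outputs=score)

Swapping the Dense layer for a tfl.layers.Lattice gives the multidimensional lattice model; TF Lattice also provides a Linear layer with built-in monotonicity options that can play the same role as the constrained Dense layer here.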

  • Everything in the package gives you smoothness.

  • So compared to decision trees, you won't have this sort of piecewise-constant behavior.

  • You get smooth models, and the monotonicity guarantees that you select.

  • And here's an example of a five-layer deep lattice network, where these squares are the 1D calibrators.

  • And with the launch of TF 2.0, TF Lattice 2.0 will also be coming out in a month or two, and we'll support Keras layers as well.

  • OK, so there's a GitHub link that you can get to.

  • And there's a nice set of tutorials that walks through sort of all the complexity of what you might want to do.

  • They also show you how to layer these things on top of DNNs for later layers, et cetera.

  • The tutorials are sort of standalone; you can just work with them and figure out how to use this stuff.

  • If you want to dig into the technical details, we have a series of papers.

  • I'll just point you to this most recent paper, and you can track back through some of the literature from that.

  • All right, thank you very much.

  • [APPLAUSE]

  • [MUSIC PLAYING]
