
[MUSIC PLAYING]

DAVID VON DOLLEN: Hello, my name is David Von Dollen, and I'm here today to talk about layerwise learning for quantum neural networks. I'm an Area Lead and Technical Manager for Volkswagen Group of America, working on a variety of topics, including quantum computing and machine learning, out of our Advanced Technologies Group based here in San Francisco.

So to start, I'd like to give you a little bit of intuition about quantum neural networks. A quantum neural network is a type of circuit where we have a register of qubits into which we load some data, whether classical or quantum. For our project we looked at classical data, specifically the MNIST data set. We then apply a series of unitary gates. These can be random rotation gates, namely rotations about the y, x, or z axis, together with a series of controlled-Z gates. Finally, we apply a readout on one qubit, and with this we calculate a gradient for the parameters of our unitaries.
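As a rough illustration, here is a minimal Cirq sketch of the circuit family just described; the register size, gate choices, and readout are illustrative stand-ins, not the exact ansatz from the talk.

```python
import cirq
import numpy as np
import sympy

qubits = cirq.GridQubit.rect(1, 4)   # the qubit register
params = sympy.symbols('theta0:4')   # one trainable angle per qubit

circuit = cirq.Circuit()
for p, q in zip(params, qubits):
    axis = np.random.choice([cirq.rx, cirq.ry, cirq.rz])
    circuit.append(axis(p)(q))       # rotation about a randomly chosen axis
for a, b in zip(qubits, qubits[1:]):
    circuit.append(cirq.CZ(a, b))    # entangling controlled-Z ladder
readout = cirq.Z(qubits[-1])         # readout: <Z> on a single qubit
print(circuit)
```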

Now, a known problem for quantum neural networks is what's called the barren plateau problem. Essentially, it says that as the depth of a randomly initialized quantum neural network grows, the variance of its gradients decays exponentially as a function of the number of qubits.
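To make that statement concrete, here is a hedged sketch (not from the talk) that estimates gradient variance for randomly initialized circuits of increasing width, using Cirq and TensorFlow Quantum; the ansatz, depth, and sample counts are assumptions for illustration.

```python
import cirq
import numpy as np
import sympy
import tensorflow as tf
import tensorflow_quantum as tfq

def random_circuit(qubits, depth):
    """Alternating layers of random single-qubit rotations and a CZ ladder."""
    symbols = sympy.symbols('theta0:{}'.format(len(qubits) * depth))
    circuit = cirq.Circuit()
    for d in range(depth):
        for i, q in enumerate(qubits):
            axis = np.random.choice([cirq.rx, cirq.ry, cirq.rz])
            circuit.append(axis(symbols[d * len(qubits) + i])(q))
        circuit.append(cirq.CZ(a, b) for a, b in zip(qubits, qubits[1:]))
    return circuit, [str(s) for s in symbols]

expectation = tfq.layers.Expectation()
for n_qubits in [2, 4, 6, 8]:
    qubits = cirq.GridQubit.rect(1, n_qubits)
    grads = []
    for _ in range(50):  # sample random initializations
        circuit, names = random_circuit(qubits, depth=10)
        values = tf.Variable(
            np.random.uniform(0, 2 * np.pi, (1, len(names))), dtype=tf.float32)
        with tf.GradientTape() as tape:
            exp = expectation(circuit, symbol_names=names,
                              symbol_values=values,
                              operators=cirq.Z(qubits[0]))
        grads.append(tape.gradient(exp, values).numpy()[0, 0])
    # Barren plateau signature: this variance shrinks as n_qubits grows.
    print(n_qubits, np.var(grads))
```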

And so, given this problem, we developed this technique, layerwise learning. Our technique addresses the vanishing gradient problem, but we also looked at using this new library, TensorFlow Quantum, to train and experimentally verify our algorithm. The great thing about TensorFlow Quantum is that it handles all of our training overhead, so we can focus on research rather than coding and getting deep into the internals.

So looking at this vanishing gradient problem: shallow quantum neural networks let us exploit larger gradients, and by applying layerwise learning we can avoid the configurations and random initializations that may lead to a barren plateau. We successively grow our quantum neural network layer by layer: training a layer, freezing it, adding another layer, training and freezing it, and then also training and freezing batches of layers.

So when we look at this layerwise learning, we can think of it as two phases of sweeping over the network. In the first phase, we train the first layer's parameters and freeze them, train the second layer's parameters and freeze them, and so on. In the second phase, we sweep through again and train and freeze batches of layers, roughly as in the sketch below.
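A minimal sketch of phase one, assuming TensorFlow Quantum's Expectation layer; the per-layer ansatz, zero initialization, placeholder loss, and step counts are illustrative assumptions, not the authors' exact training setup.

```python
import cirq
import sympy
import tensorflow as tf
import tensorflow_quantum as tfq

qubits = cirq.GridQubit.rect(1, 4)
readout = cirq.Z(qubits[-1])
n_layers = 5

def make_layer(index):
    """One layer: a parametrized ry on each qubit plus a CZ ladder."""
    symbols = sympy.symbols('l{}_0:{}'.format(index, len(qubits)))
    ops = [cirq.ry(s)(q) for s, q in zip(symbols, qubits)]
    ops += [cirq.CZ(a, b) for a, b in zip(qubits, qubits[1:])]
    return cirq.Circuit(ops), [str(s) for s in symbols]

expectation = tfq.layers.Expectation()
circuit = cirq.Circuit()
frozen_names, frozen_values = [], []

for layer in range(n_layers):
    layer_circuit, new_names = make_layer(layer)
    circuit += layer_circuit
    # Zero-initialize the new layer so it starts close to the identity.
    trainable = tf.Variable(tf.zeros((1, len(new_names))))
    optimizer = tf.keras.optimizers.Adam(learning_rate=0.01)
    for step in range(100):  # train only the newest layer's parameters
        with tf.GradientTape() as tape:
            values = tf.concat(
                [tf.constant(v) for v in frozen_values] + [trainable], axis=1)
            exp = expectation(circuit, symbol_names=frozen_names + new_names,
                              symbol_values=values, operators=readout)
            loss = tf.reduce_mean(exp)  # placeholder; a real task uses labels
        grads = tape.gradient(loss, [trainable])
        optimizer.apply_gradients(zip(grads, [trainable]))
    frozen_names += new_names                # freeze this layer's symbols...
    frozen_values.append(trainable.numpy())  # ...and keep growing the circuit
```

Phase two would proceed the same way, except that it unfreezes and retrains contiguous batches of layers rather than a single new one.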

And when we do this, we find a speed-up in training time, and we also see a performance gain in test error. When we did binary classification of the digits 6 and 9 from MNIST, we saw an advantage when using 10 epochs per layer of layerwise learning over what we call complete-depth learning, where we train all of the layers at once.

So to talk a little bit about TensorFlow Quantum: we can generate our quantum neural network layers really easily by using sympy and Cirq to construct our circuit. We can then inject that circuit into TensorFlow Keras as a TFQ parametrized quantum circuit (PQC) layer, and use the TensorFlow Keras loss functions and optimizers to compute gradients and train the parameters of our quantum neural network.
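A short sketch of that workflow, assuming the tfq.layers.PQC Keras layer; the two-layer ansatz and hinge loss follow the common TensorFlow Quantum pattern rather than the exact model from the talk, and data_circuits and labels are hypothetical placeholders.

```python
import cirq
import sympy
import tensorflow as tf
import tensorflow_quantum as tfq

qubits = cirq.GridQubit.rect(1, 4)
params = sympy.symbols('theta0:8')

# Build the model circuit with Cirq, parametrized by sympy symbols.
model_circuit = cirq.Circuit()
for p, q in zip(params[:4], qubits):
    model_circuit.append(cirq.ry(p)(q))
for a, b in zip(qubits, qubits[1:]):
    model_circuit.append(cirq.CZ(a, b))
for p, q in zip(params[4:], qubits):
    model_circuit.append(cirq.ry(p)(q))

# Inject the circuit into Keras as a trainable layer; inputs are data
# circuits serialized to tf.string tensors via tfq.convert_to_tensor.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(), dtype=tf.string),
    tfq.layers.PQC(model_circuit, cirq.Z(qubits[-1])),
])
model.compile(loss=tf.keras.losses.Hinge(),  # hinge expects labels in {-1, +1}
              optimizer=tf.keras.optimizers.Adam(learning_rate=0.01))
# model.fit(tfq.convert_to_tensor(data_circuits), labels, epochs=10)
```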

And if you're interested in more detail, we have an upcoming white paper. This has been a really great collaboration between Volkswagen and Google, and if you have any questions, please feel free to reach out to us. Thank you very much.

[MUSIC PLAYING]

