[MUSIC PLAYING]

KEVIN SATZINGER: Thank you. My name is Kevin Satzinger, and I'm here today to share with you our latest results on quantum supremacy: benchmarking the Sycamore processor.

The promise of quantum computing is that it can solve certain useful problems that are simply beyond the reach of classical computing. Most of those applications lie with the useful error-corrected machine, which is still years of research in the future. But we also hope that in the near term, in this blue region, we will be able to find useful applications. Of course, before a quantum computer can do something useful that is intractable for classical computing, it must first do anything at all that is intractable for a classical computer. So a real question that has been on the minds of this whole industry for the last decade is: can we cross this line? This is something our group has been focusing on for the past several years.

It's something that was given a name by John Preskill in 2012. He called it quantum supremacy: to perform tasks with controlled quantum systems going beyond what can be accomplished with ordinary digital computers. He went on to ask, is controlling large-scale quantum systems merely really, really hard, or is it ridiculously hard? Well, I am pleased to say that it is merely really, really hard. We demonstrated this with the paper we published last fall, where we showed crossing this line for the first time.

Looking at this slide, I see there are a lot of words on there, so maybe a picture would be a better way to reflect this. I want to emphasize that this was a huge team effort, and I'm very thankful to each and every member of this team who contributed to this work. I'm honored to be here today representing them. You can find our paper in Nature; it's open access.
And also at this arXiv link, we have updated versions of our supplementary information.

The centerpiece of this paper was a new processor called Sycamore. It's positioned here right at the boundary between classically simulatable and beyond classically simulatable, and that's what I'm going to be presenting about today: first, the march from graduate school in the upper left toward industrial technology that can take on a supercomputer, and then how that plays into this research direction toward a useful error-corrected machine.

For the rest of this talk, I'll be following this outline. We'll look first at the Sycamore processor; second, at how to calibrate and benchmark Sycamore; and finally, at quantum supremacy itself.

So let's get started with Sycamore. Before we get to Sycamore, I want to flash back a few years to the state of the art in 2015. This is a chip from the Santa Barbara group. It's about a centimeter in size, and it has nine transmon qubits in a linear array. What I'd like you to observe is that the control wiring takes up about half of the area of this chip. When you look at this device, it is not at all obvious how we could scale it up to a two-dimensional array with 50-plus qubits. It's not a matter of copy/paste; we need to really re-engineer the system in order to make it scalable.

One of the key technologies that made it possible to build such a two-dimensional array was moving to a scalable flip-chip design, where we have two chips instead of one. The top chip is solely responsible for a two-dimensional array of qubits, and the other chip takes care of all the readout and all the control wiring. By dividing the responsibilities like this, we get a more scalable design. One of the key technologies that makes this possible is indium bumps, which provide a superconducting interconnect and a mechanical connection between the two chips.
This is a photograph showing a small processor prototype that demonstrated this flip-chip technology. There are four chips in this photograph. At the bottom is a qubit chip with a small 2-by-3 array of superconducting qubits in the center; the rest of the chip is covered in those indium bumps. In the center is a separate chip that's responsible for readout, control, and interfacing with the outside world. What we do is take the qubit chip, flip it over on top of the control chip, align them, and press them together. That completed assembly is at the top of the photograph, where we have two chips stacked together and ready to use.

Now, this is how you can make a quantum processor. But one of the lessons I want to share with you is that there's a lot of hardware infrastructure that goes into making one of these experiments actually work. Let me share a couple of highlights.

One is packaging. This is basically everything that goes between the quantum processor and our dilution refrigerator. In our case, we have the processor embedded in a circuit board with superconducting metal, and the circuit board interfaces with about 150 coaxial connectors around the perimeter. We also encase the processor in electromagnetic shielding to protect it from the outside world. We then take this package and bolt it to the bottom of a dilution refrigerator, which cools it down to about 10 millikelvin and is also responsible for delivering signals through a couple hundred coaxial cables that run from room temperature down to the device.

Another key piece of hardware infrastructure is our electronics: the room-temperature electronics that generate the control signals for our processors. We use custom scalable electronics. Here's an example of one of our boards.
In the center is a field-programmable gate array that's responsible for driving the eight digital-to-analog converters circled around it, which then drive these eight channels. These can output arbitrary waveforms from 0 to 400 megahertz, and we can also up-convert those with mixers to microwave frequencies around 6 gigahertz. This is just one card. We can have many of these cards in a crate, like the one depicted here, and then several of these crates in racks working together in concert to control one of our processors.

Now I'd like to turn to Sycamore itself. One of the key advances of Sycamore is its new tunable coupling architecture. This is a new feature where we're able to turn the qubit interactions on and off at will. The qubits can be independent of each other most of the time, but when we want them to interact and generate entanglement, we can turn on the coupling for a brief period of time. This, it turns out, was immensely helpful in making the full system work, and it was really a key breakthrough in getting this processor to perform. We did this with a scalable two-dimensional architecture, where we introduce an extra coupler transmon between each pair of qubits (each qubit itself is also a transmon). This is depicted in the device schematic here, where we have 54 qubits and 88 couplers, one between each pair of neighboring qubits.

I want to share with you a little bit of data showing how these couplers really work. Let's look at a simple experiment where we have two qubits next to each other at the same frequency. What we're going to do is excite one qubit and then have the two qubits interact for a period of time (the vertical axis), subject to a certain coupler bias (the horizontal axis). Let's look at the center first. In the center here, the coupler is at its maximum frequency, so there are a few megahertz of coupling between the two qubits, and what happens is that the photon leisurely swaps back and forth between the two qubits.
That's what these oscillations are. But as we march to the right, as the coupler's frequency comes down, there is this divergence where there is no more swapping. This is where the coupling is turned off and the two qubits can act independently. This is a very valuable place; it's where we operate ordinarily. But sometimes you want the qubits to interact, and to do that, we'll pulse the coupler a little bit further to the right so that we have very strong coupling between the two qubits for a brief period of time.

I'll end this section with this nice photograph of our Sycamore processor. A nice symmetry I can highlight is that the center chip here with the qubits is about a centimeter in size, which is the same size as the chip we looked at at the beginning of this section.

So now let's move on to calibration and benchmarking. Suppose that I handed you all of these electronics: the fridge and cables, the packaging, and the processor. It is not a trivial matter to turn all of that stuff into a quantum computer. Calibration is the process of learning how to execute quantum logic with our hardware, so that we can go from all of our stuff to a system that can run an arbitrary quantum algorithm, in the same way that you could play music on a finely tuned piano. But this is not a trivial task. There are around 100 parameters for each qubit. We need to choose a frequency for every qubit and coupler, and there are strong interactions between those frequencies and biases. Then we need to tune up all of the gates and readout for each qubit and pair of qubits. Now, if you have just a few qubits, you can park a graduate student in front of a computer for a couple of days, and they'll be able to work it out. But if you have 50-plus qubits, you need a much more scalable solution. To solve this problem, we encode our calibration routines into a graph, which allows us to solve the problem using standard graph traversal algorithms.
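The coupler experiment from the previous section can be illustrated with a simple model: two resonant qubits exchanging a single excitation at a rate set by the effective coupling. This is a minimal sketch, assuming an idealized lossless swap with the coupling expressed in hertz; the specific numbers are illustrative, not Sycamore's measured values.

```python
import numpy as np

def swap_probability(t, g_hz):
    """Probability that an excitation placed on qubit 1 is found on
    qubit 2 after interacting for time t (seconds), for two resonant
    qubits with effective coupling g_hz (hertz) and no loss."""
    return np.sin(np.pi * g_hz * t) ** 2

# Coupler "on": an illustrative 20 MHz coupling fully swaps the photon
# in 25 ns, then swaps it back -- the oscillations seen in the data.
p_on = swap_probability(25e-9, 20e6)   # ~1.0: full swap

# Coupler "off": zero effective coupling, so the qubits act
# independently and the excitation stays where it started.
p_off = swap_probability(25e-9, 0.0)   # 0.0: no swapping
```

Biasing the coupler changes `g_hz` continuously, which is why marching across coupler bias in the experiment sweeps from fast oscillations through the divergence where swapping stops.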
Pictured here is an example calibration sequence for two qubits. This network is really a graph distilling decades of research from groups all around the world learning how to make these processors work. In this graph, each node represents a physics experiment, where we acquire data, together with the analysis we use to figure out what the data says and decide what to do next. This is an example for two qubits, but there are literally thousands of these nodes when we want to calibrate Sycamore. So it's crucial that we use algorithms to work through this graph, to calibrate the device, and then to maintain the calibrations. To give a flavor of how this works: we start on the left learning some device parameters, and as we work to the right, we iterate back and forth between single-qubit gates and readout until eventually we get to two-qubit gates at the very end.

A key step in setting up our device is choosing a frequency for each of the qubits. We're going to follow a two-step program here. First, we're going to measure the qubit lifetime as a function of frequency.
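The graph-based calibration idea described above can be sketched with a standard topological sort: each node runs only after everything it depends on has been calibrated. This is a minimal sketch using Python's standard library; the node names here are hypothetical stand-ins (the real graph has thousands of nodes), not the actual names used in the calibration system.

```python
from graphlib import TopologicalSorter  # standard library, Python 3.9+

# Hypothetical calibration nodes mapped to the nodes they depend on.
# Each node stands for an experiment plus its analysis.
calibration_graph = {
    "qubit_spectroscopy": set(),
    "readout": {"qubit_spectroscopy"},
    "single_qubit_gates": {"qubit_spectroscopy", "readout"},
    "coupler_bias": {"qubit_spectroscopy"},
    "two_qubit_gates": {"single_qubit_gates", "coupler_bias"},
}

# A topological order visits every node after its prerequisites,
# so two-qubit gates come last -- matching the sequence in the talk.
order = list(TopologicalSorter(calibration_graph).static_order())
```

Encoding the routines as a dependency graph also makes maintenance natural: when one calibration drifts, only that node and its downstream dependents need to be re-run.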