The Quantum Advantage with Dr Krysta Svore

Jan 16, 2025 · 30 min · Ep. 980

Episode description

Is it time for Quantum? The Quantum Ready program helps businesses and leaders prep for the new era of reliable quantum computing! Scott talks to Dr. Krysta Svore, the distinguished researcher who leads the Microsoft Quantum group, about all things quantum, and how you and I can access insights and resources via online skilling, in-person workshops, and industry-specific forums!

Transcript

Hi, I'm Scott Hanselman, and this is another episode of Hanselminutes. Today I'm talking with Dr. Krysta Svore. She is an American computer scientist who specializes in quantum computing, and a Technical Fellow and Vice President of Advanced Quantum Development for Microsoft Azure Quantum. I had you on the show a number of years ago trying to get my head around quantum computing. I'm looking here in the show notes and it looks like it was...

Gosh, we're on episode 980-something, and you were episode 600. So it's been, gosh, six years since the last time I've talked to you. Did you dedicate your entire career to quantum? How long have you been in this space? Yeah, definitely. It's been a life's pursuit and definitely a real dedication of my career. I did spend some time, early in my days at Microsoft, looking at machine learning, and helped deploy some of the earliest algorithms behind, for example, what's now Bing.

And, of all things, that's now really contributed to what we're doing in the quantum computing space, where we're thinking about how to bring together and integrate AI and machine learning with quantum computing. But most of my career has been very much focused on quantum computing, and I started looking at this space in the late 90s, early 2000s.

Oh, wow. It sounds amazing and futuristic to say you work in quantum computing in 2024. It must have been crazy to mention that in the 90s. Yeah, absolutely. It was not the advised path at that time, right, to go into quantum computing. But really, it was such a great opportunity to develop some of the foundational thinking and the early frameworks for how we think about a quantum computer. What does it mean to program, control, and error-correct

a quantum computer? How do you keep it reliable? How do you keep it noise-free? How do you send an application to it? We worked on many of those frameworks very early, in the very early 2000s, and now we're bringing them to life. At the time we were very much working on theory, and taking that from theory to practice

is really our goal: make quantum computing real and make it useful. How long was it conceptual, and when was the first time you saw one or entered a room with one? When did it become real for you? Was there a moment or a date when that happened for you? Yeah, I had the privilege and honor of working with Ike Chuang at MIT during part of my PhD work, where he was building an ion-trap quantum machine.

You know, those were really much earlier days, right? Fewer qubits, working to overcome challenges in just keeping those qubits alive and keeping them at high fidelity. But now fast forward to today, where we are working on enabling noise-free, very, very reliable qubits to do computation, to do deeper computation than we can do

on a classical computer. And so now those machines have evolved. We work with a variety of partners, including Atom Computing. We are co-building a machine right now with Atom Computing, which is a different type of qubit technology based on what we call neutral atoms. But we are also partnered with and working closely with Quantinuum, right, to show that their machines can also light up quantum computation

as well. So we're really working towards what we call reliable quantum computing, showing that we can get good answers from quantum computers, but critically, we want to show it for problems that we can't solve with classical computers alone. We talked before that we're not going to see quantum computers on our wrists. This is not that kind of a scenario, right? Correct. I think you would use the analogy that, like,

GPUs serve a purpose and FPGAs serve a purpose, and now we're seeing NPUs. This is a kind of complementary technology as opposed to a replacement technology. Absolutely. So when we think about the applications here, and really the next generation of applications in general, it's really about hybrid. You want to use the best processing capabilities across many types of processing, right? Whether that's GPU, CPU,

or QPU, the quantum processing unit. So we want to use the quantum computer alongside, and integrated with, your cloud compute and your high-performance computing, and integrated with, for example, your AI capabilities. And so the key is to have integration across all of them. You really think about these applications as hybrid in the sense that you run, for example,

some amount of, say, cloud compute, high-performance compute processing. Then you send a portion, almost like a subroutine, to the quantum computer. You get back an answer or a solution of higher accuracy than you can get from the classical computer, and then you incorporate that

into your computation pipeline, right? So you might incorporate that back as data that you use to train an AI model. So it's really using the quantum computer for what it's really good at. One example? It's really good

at quantum mechanics. This is a quantum computer. The name quantum is there for a reason. It is a quantum mechanical machine, right? It's using the principles of quantum mechanics to store information and process that information, which gives us access to things

we can't do on classical computers alone. So we want to use it for what it's really good at. The high-priority applications there tend to be in chemistry and materials science. We want to understand how electrons interact with each other. When they're strongly correlated, how are the electrons moving around? Where are they located? We think about molecular structures. Where are the electrons in those molecular structures?

It turns out that to get really accurate information about those electrons, it's really good to have a quantum computer. So those are the types of problems we think about using the quantum computer for, but in a broader pipeline, where you're also doing a lot of classical compute to understand that molecule, to understand maybe the catalytic chemical reaction that you're looking at. Then you use the quantum computer for a part of it, and then you use the

information you get from the quantum computer, this data, right? Use that data to then train an AI model that can be more accurate and faster than the AI models we have today. And then that AI model might run on a GPU and the process continues. Exactly. Each processing unit to its own specialized need, its own specialized technique. Exactly. Interesting, interesting.
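To make that hybrid pattern concrete, here is a minimal Python sketch of the pipeline being described: classical pre-processing, a quantum subroutine, then feeding the result into AI training. Every function here is a hypothetical placeholder standing in for a real service, not an actual Azure Quantum API.

```python
# Hypothetical hybrid pipeline: classical HPC -> quantum subroutine -> AI training.
# All functions are illustrative placeholders, not a real Azure Quantum API.

def classical_preprocess(molecule: str) -> dict:
    """Classical HPC step: reduce the full problem to a small quantum subproblem."""
    return {"molecule": molecule, "active_orbitals": 4}

def quantum_subroutine(subproblem: dict) -> float:
    """Placeholder for the QPU call: returns a quantity (say, an energy)
    that classical methods could not compute to the same accuracy."""
    return -1.137  # stand-in value

def train_ai_model(training_data: list) -> None:
    """GPU step: fold the quantum-derived data into an AI model's training set."""
    print(f"training on {len(training_data)} quantum-derived samples")

# Orchestration: each processing unit does what it is best at.
subproblem = classical_preprocess("Fe2S2 cluster")          # CPU / HPC
energy = quantum_subroutine(subproblem)                     # QPU
train_ai_model([(subproblem["molecule"], energy)])          # GPU
```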

The term bit, though, the term quantum bit: is it the right term? The reason I ask this is that I've just been listening to the history of Microsoft. There's a really interesting six-hour podcast called Acquired, and I'm about four hours into it. And we go through 8-bit, 16-bit, 32-bit. And that's all referring to the width of the bus and the size of a unit of data. And quantum bits: is that more of a processing thing than a data size thing?

Well, you know, I think you can think about quantum registers as well, and how you would size those registers, analogous to a classical computer. Really, we think of it at that kind of lowest level of the machine. You have this bit of information. And so it is analogous to a classical bit. It has different properties. But this bit really is, you know, think of it

like your classical bit, right? So at the lowest level, we are operating on quantum bits. These quantum bits don't just take the state zero or the state one. They're not just simple binary in the sense we think about classically; they can also take what we call a superposition of states. So you can think of this as some combination of zero and one.

And so that's one of these properties, this superposition, quantum superposition. This quantum bit can take on that state. And so when you bring many together, right, you can get a large quantum superposition. We can then bring, you know, multiple together and we can do what we call quantum entanglement and quantum interference.
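For a concrete picture of those two properties, here is a small numpy sketch, a toy state-vector illustration rather than anything from the episode: a Hadamard gate puts one qubit into an equal superposition of zero and one, and a CNOT then entangles two qubits into a Bell state whose measurement outcomes always agree.

```python
import numpy as np

# A single qubit starts in |0> = (1, 0).
ket0 = np.array([1.0, 0.0])

# Hadamard gate: creates the equal superposition (|0> + |1>)/sqrt(2).
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
superposition = H @ ket0
print(superposition)          # [0.707..., 0.707...] -> 50/50 measurement odds

# Two qubits: |00> is a 4-dimensional state vector.
ket00 = np.kron(ket0, ket0)

# H on the first qubit, then CNOT, yields the entangled Bell state
# (|00> + |11>)/sqrt(2): the two qubits' outcomes always agree.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])
bell = CNOT @ np.kron(H, np.eye(2)) @ ket00
print(bell)                   # [0.707..., 0, 0, 0.707...]
```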

Really, the whole specialization of the quantum computer is that you're taking advantage of these quantum mechanical properties to store the information, do the computation, and perform that processing. The art is to design quantum algorithms, applications that can take advantage of this set of operations. These are

different types of operations than what we can do classically. So it's kind of the art of orchestrating these qubits to do things we can't do classically. And part of that, we think, when we design these applications: we want to look for cases where we don't need a lot of input. We don't want to send a lot of input to our quantum computer, and we also don't want to ask for a lot of output from the quantum computer. So think of

low-input, low-output problems where you need a lot of compute in that space. Those are the ideal things to look at for running on a quantum computer, where ideally you also can't get a solution classically. Right. Now we are in a classical computing era. We're in the 64-bit kind of desktop era. And people have asked me, as someone who's

taught at the community college level: why don't we have 128-bit computers? Why don't we have 256-bit computers? Why doesn't data width get bigger and bigger? In the quantum space, I understand that Microsoft and Atom Computing used 80 physical qubits to run an algorithm over 24 logical qubits. So this width gets bigger and bigger, and then there's discussion that large-scale problems might require millions of qubits. We're getting wider and wider in the quantum space.

Why is that the goal, and why is that necessary? Whereas in the classical computing environment, we've been 64-bit for a while and it seems like we're going to be 64-bit for the near future. Yeah, so when we're talking about the number of qubits here, think of it as the input to the quantum algorithm, kind of the space you're using.

Right. So it's really like: how much space do I use to store the input to my problem? I look at that instance, and then also how much additional scratch space I might need to do the calculation. So when we say you need a million physical qubits to do a problem,

that's looking at what this algorithm costs in terms of the input size required, the storage space required, and then the space required to do compute. You might need extra scratch space. In classical computing, we assign extra variables that enable us to store information. You write your algorithms and look at how you might do addition and other things. You need extra space to do calculations.

And so here we look at, you know, how much space is going to be needed to do the actual computation and how much space is needed to process that input, right, to get the computation started. So when we say you need a million physical qubits, that computation is going to need that much space to store and process the information.
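As a hedged back-of-envelope of where numbers like "a million physical qubits" come from: error correction spends many physical qubits per logical qubit. The figures below are illustrative assumptions, roughly the thousand-logical-to-million-physical ratio mentioned later in this episode, not a published resource estimate.

```python
# Illustrative resource arithmetic (all numbers are assumptions for the sketch).
logical_qubits_input   = 200    # space to store the problem instance
logical_qubits_scratch = 800    # extra workspace for the computation itself
logical_total = logical_qubits_input + logical_qubits_scratch

# Assume on the order of 1,000 physical qubits per error-corrected logical
# qubit, consistent with the scale discussed later in the conversation.
physical_per_logical = 1_000

physical_total = logical_total * physical_per_logical
print(f"{logical_total} logical qubits -> ~{physical_total:,} physical qubits")
# 1000 logical qubits -> ~1,000,000 physical qubits
```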

Gotcha. So that's where that analogy of the data path kind of breaks down. And if I try to draw a one-to-one relationship between classical computing and quantum computing, it clearly doesn't make any sense. So it makes me feel like quantum computing is almost in its serverless kind of era, where it's like, I have a problem and I'm going to

rent some qubits by using Azure Quantum. And this problem is a small-scale problem and I'll need 10 qubits. But for this large-scale problem, I'm going to have to have a real conversation about how I'm going to make that happen when my physical system from Microsoft and Atom Computing has 256 qubits, or maybe in the future we'll have a thousand.

Yeah. So indeed, classically, we also talk about the space and time requirements for an algorithm, for a computation. We say you're going to need this much space, right? How much space do you need for this algorithm? It's the same thing here. So that's what we're calculating: the space and time requirements for the algorithm or the application to run on the quantum computer. And the key is to look towards:

what space, how much space, how many qubits do I need to do a calculation I can't do on a classical computer? That's key. And that can start at just upwards of 50 qubits: there you can show something you can't do on a classical machine. But really, we want to push that towards useful calculations. What's a useful calculation? There's a very nice

kind of academic problem that you can show at just upwards of 50 qubits, but we want it to be useful. And so that's where we need at least 100 qubits to show something that's useful for the academic or scientific space. In materials science, for example, you can start looking at quantum magnet models at around 100 qubits. But there's a catch on these qubits: these qubits have to be very,

very reliable. And so it's not just any hundred qubits. It's not just your hundred physical qubits that exist today in quantum computers. Those qubits aren't good enough to show a problem that outperforms a classical one at that hundred-qubit mark. Those hundred qubits that are out there today, they're just not good enough. Right now, they're at about one error every thousand operations. And when we talk about

a problem that we can't do on a classical computer at 100 qubits, those qubits have to be really good. They have to be much better than one in a thousand operations failing. We need them to be closer to something like one in 10 million operations: only 1 in 10 million operations fail, or 1 in 100 million operations fail. So the key is making these qubits better and better. And to make these qubits better and better, we use what we call quantum error correction.
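Two back-of-envelope calculations, using only figures mentioned in the conversation, make those thresholds concrete: the memory needed just to store a 50-qubit state on a classical machine, and the odds of a long computation finishing error-free at different error rates. This is an illustrative sketch, not a rigorous complexity argument.

```python
# 1) Memory to store the full state vector of n qubits classically:
#    2**n complex amplitudes, 16 bytes each (complex128).
n = 50
state_bytes = (2 ** n) * 16
print(f"{n} qubits -> {state_bytes / 1e15:.0f} petabytes")   # ~18 PB

# 2) Chance a long computation finishes with no error at all,
#    assuming independent errors at rate p per operation.
ops = 1_000_000
for p in (1e-3, 1e-7):
    success = (1 - p) ** ops
    print(f"error rate {p:g}: P(no error in {ops:,} ops) = {success:.3f}")
# p = 1e-3 -> essentially 0;  p = 1e-7 -> about 0.905
```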

At Microsoft, we've been working on this qubit virtualization system. This is how we make the qubits better in the machine, right? We use more qubits to make them better. And then, in turn, this creates these virtual qubits that you use for your application. In your computation itself, you're using these virtual qubits we've created. Sometimes they're called logical qubits.

These virtual qubits are used instead of the physical qubits directly, because the physical qubits themselves are just too noisy. So this way we can bootstrap and improve how good the qubits are for the algorithm. And that enables you to do more complex calculations: you can go through a more complicated algorithm, and that's what we need to show things that we can't do on a classical machine.
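The flavor of that bootstrapping shows up even in the simplest classical analogue, a three-bit repetition code with majority voting. The sketch below is a toy model, far simpler than real quantum error correction (which must also handle phase errors and cannot copy qubits), but it shows how redundancy turns a worse physical error rate into a better virtual one.

```python
import random

def noisy_copy(bit: int, p: float) -> int:
    """Flip the bit with probability p (a toy 'physical' error)."""
    return bit ^ (random.random() < p)

def virtual_bit_error_rate(p: float, trials: int = 100_000) -> float:
    """Encode one logical 0 as three physical bits, apply noise,
    then decode by majority vote; count how often the vote is wrong."""
    errors = 0
    for _ in range(trials):
        physical = [noisy_copy(0, p) for _ in range(3)]
        if sum(physical) >= 2:     # majority says 1: a logical error
            errors += 1
    return errors / trials

p = 0.01
print(f"physical error rate: {p}")
print(f"virtual  error rate: {virtual_bit_error_rate(p):.5f}")
# ~3*p**2 = 0.0003: the encoded bit fails far less often than any physical bit.
```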

That makes a lot of sense. The parallel that pops to my mind immediately is error-correcting codes, ECC, in memory, where you add extra bits to classical data. And here you are doing a similar thing with quantum error correction techniques, so your logical qubits have multiple backing physical qubits. That's exactly right. Interesting. Okay. So that explains why, for example, recently at Microsoft Ignite, you created 24 logical qubits from 80 physical ones and then ran an algorithm on them.

Yes, and we were able to drive that up to 28 logical qubits, right? So 28 qubits that were used in an algorithm, in a computation. And we were able to show that those 28 logical qubits are better than using 28 physical qubits, than directly using the physical qubits in the machine. And that's key: we have to show that we can create better qubits, much better than the physical

ones, to do bigger and better computations that are useful. Is this virtualization? I mean, we've already reached virtualization in the quantum computing space. Yes, we're doing what we call qubit virtualization, because we're creating these virtual qubits through the use of quantum error correction, which is very similar, as you just mentioned, to what we do classically in error correction.

And then we create these virtual qubits and use them at the application level. So you said that it's important to do useful things, but then you also called out that there's this moment when you are able to prove that something is...

better done in a quantum space than it is in a classical computing space. And I hear this term quantum supremacy used a lot. I guess it was mentioned a couple of years ago by Google and IBM, and all of our different friends that are, I'm sure, your colleagues at other companies use this term, quantum supremacy. Are we trying to prove that quantum computers are better? I thought we already proved that they are better at different things. So help me understand that term.

Yeah, that term: we tend to use the term quantum advantage. We prefer the term quantum advantage to the term quantum supremacy. Because what are we trying to do? The whole point of building quantum computers is to do something new and different and better or faster than we can do with existing computational techniques: digital computers, supercomputers. We want to be able to do things those can't do.

We want access to computation that's not possible today. And we can do that by taking advantage of quantum computers and what they're good at. So we need, you know, the whole purpose is to compute with a quantum computer and to do compute that's different or better or faster. So when we talk about quantum advantage or...

If we want to look at quantum supremacy, that's an indication. It's been demonstrated for physical qubits that you can do something with physical qubits you can't do classically. But it's still an academic-style problem, right? It's a problem that doesn't apply to, let's say, everyday scenarios. What we want to do with a quantum computer ultimately is help

identify how to capture carbon from the atmosphere, right? We want to help understand how to make a better industrial catalyst so that we can produce fertilizer in a better, cheaper, more efficient manner, in ways that don't harm our planet. So we quite literally want to go after problems, computations, that will help save and feed our planet. And that requires continuing to build better qubits. Again, we use qubit virtualization to enable better qubits, so that these qubits can then compute these

problems that are hard for classical computers, quite literally intractable for classical computers. And so really, that's where we look closely at chemistry and materials science: understanding catalytic chemical reactions. Namely, I add another molecule, I add a catalyst, I look at that chemical reaction. I want it to be more sustainable. I want it to be more environmentally friendly. I want it to help with, for example, how can I produce clean water

efficiently, clean fuel, clean air? So really, a lot of the problems we're thinking about focus there on sustainability. These benchmarks are largely, like you said, they're largely academic. It's lovely that we can have 100 qubits that are generating a random number distribution, but that is not really interesting. You can

generate random numbers on a classical machine, or you can generate them on a quantum machine and say, yeah, it would have taken 25,000 years on this machine and we did it in a couple hours over here. Yay. You're not interested in that as much as you are in this, what I'm understanding to be the Quantum Ready initiative, which is really a focus of: we made quantum computers to solve real problems. That's the quantum advantage.

Yeah, the true quantum advantage comes when we're showing that we can solve useful, valuable, practical problems that impact all of us in a positive way. That's true quantum advantage. That is the core mission and purpose of building a quantum computer. And to do that, we have to have what we call Level 2,

resilient quantum computing. We talk about Level 2 as transitioning from using physical qubits directly to using these virtual, logical qubits. And then ultimately, we need to scale that up. We need to scale up to having a thousand logical qubits, virtual qubits, that are really, really reliable. And underneath the hood, that will require upwards of a million physical qubits.

You know, ultimately we want to get to that scale. So we have to continue to move towards showing that useful, practical quantum advantage, and then continue to scale up so that we're solving useful problems with these machines. Now, I always caution people who are talking about things that they don't understand, and I caution them against saying "just." But I will say just: like, if we've proven that 20 is good and 40 is good, why don't we just...

make one with 10,000? Is this going to be exponential in its growth? Is it just going to be: got to keep grinding them out, got to make these qubits? Why don't we just make the giant one right now? Well, I think one of the key challenges, why we need these what we call logical qubits, why we need qubit virtualization, is because the physical qubits are noisy. And when I keep saying these things can be noisy: they're noisy.

This is part of the challenge of scaling up. Why can we not just all of a sudden stamp out 100,000 or a million of them? It's because we really do need to be able to control these qubits. We need to control them very carefully, and ensure that as we control them and operate on them, we don't cause too much noise in the system. So it takes

It really does take careful engineering to scale up that control system. This is one of the beauties, right? At Microsoft, we are also building what we would call a first-party quantum machine as well.

We are co-designing and co-building a machine with Atom Computing based on this neutral-atom platform, but we're also building a quantum machine based on what we call topological qubits. Part of the beauty of these types of qubits that we're also working on is that they have this nice way of being controlled. They are fast in their speed; we also have to look at the speed of operation of qubits. So they are fast. They are digitally controllable, so they can be

controlled with digital control, which is very nice. And then they have the right size: we can fit a million of them on, you know, a chip the size of the one on your credit card. And so it's these properties that we see as enabling that scale, that path to scaling up to a million. You need something that has the right controllability, the right size, and the right speed.

When I think about Moore's Law and all the things that Intel and ARM and our classical computing friends are doing, we always start talking about how it gets so small that now the laws of physics become a problem, and there are thermal fluctuations and imperfections.

Are those kinds of things the same? Like, qubits are noisy because of the control electronics, because of the environmental interactions. Yes. Like even an electromagnetic field. Like someone gets a phone call. Like Krysta gets a phone call. No, no, you ruined the whole thing, because of the electromagnetic field.

Yes, we have to look at all of the different ways noise might creep into the system. And I think you've really nailed it, Scott. One of the ways is the thermal environment: heat. Heat can make these qubits feel noise; they can become noisy with the heat. And that heat comes from controlling them. We need to be able to control them, so we have to send lines and signals to them.

So yeah, indeed, you have to look at many different mechanisms for protecting them from different environmental noise, which includes, for example, heat as one possible source. Now, I understand that on January 14th on the Azure Quantum blog, folks can go and read about the Quantum Ready initiative. I know that you've talked about this for years. Like you talked about a quantum ready workforce.

five years ago. Right now, though, I think that AI is kind of sucking up all the air in the room. But are quantum and AI and classical computing going to be three legs of the same stool? Yes, yes, exactly. So these applications, as I mentioned earlier, they're hybrid. AI and quantum computing will work together. They already do. We've in fact already shown and demonstrated an end-to-end workflow that uses logical qubits and reliable quantum computing

with AI. So we take the data from that quantum computer and we feed it to an AI model. And actually, we pre-process that workload in Azure, on HPC, high-performance compute. So it really is truly hybrid, and it has to integrate across all of them. This is why we have the Azure Quantum platform, which enables that seamless integration across the different types of compute there.

Ultimately, in terms of the skilling and the workforce, what do you learn here? You want to be able to create these very hybrid applications. But AI and quantum go hand in hand. They do. And that's a really important thing to say. What you read in the news about large language models: there's going to be quantum

computing feeding the data, because your AI is useless without really good data. That's right. You really nailed it when you explained to me that the result of these calculations will feed those models that will hopefully then solve real, real problems. Yeah, your AI models are only as good as the data they're trained on. And now with a reliable quantum computer, and you need a reliable quantum computer,

you get data you do not have access to classically. So when we talk about a quantum computer showing quantum advantage, being able to solve problems we can't solve classically, the other way to view that is: when it solves such problems, it produces data. That data you can take and feed to an AI model alongside other classical data. And together, this will produce a faster and more accurate

AI model for different types of problems. And so it becomes more predictive. And that's super powerful, right? You can produce data, kind of this golden data, from a quantum computer, that I can't produce with any other mechanism. And because AI models are only as good as the data they're trained on,

this enables that AI model to take a leap ahead. Fantastic. Well, thank you so much for spending time with me today. Folks can go and learn about quantum computing, and they can check out the quantum blog; I'll have links in the show notes. I also want to encourage folks to go to Microsoft Learn to learn about the quantum programming language Q#. You can look at a quantum simulator, build your own Q# simulator, and learn about

whether the problem that you have might be better solved with the quantum advantage. And we'll go off and learn about the Quantum Ready initiative. Thank you so much, Dr. Svore, for spending time with me today. Thank you, Scott. This has been another episode of Hanselminutes, and we'll see you again next week.

This transcript was generated by Metacast using AI and may contain inaccuracies.