
Welcome to episode 46 of The New Quantum Era. I'm your host, Sebastian Hassinger. I'm at the 2025 APS Summit, and I want to thank the APS for providing the space for me to record. Today's episode was actually recorded the week before the summit because of scheduling challenges, but this intro is being recorded in the room that APS has kindly provided.
I'm excited to bring you a conversation with someone familiar to longtime listeners, Steve Girvin, professor of physics at Yale University. Steve's been a key figure in the Yale Quantum Institute and has been at the forefront of quantum computing research for many years. He's been a key player in creating the vibrant superconducting quantum computing community that's emerged from the university. He first joined us back on our fifth episode, in 2022, and I'm really excited about this conversation. We got him to take us through the ins and outs of quantum memory.
So, something that's not discussed as often as qubits for computation is the storage of quantum information, quantum memory. Think about RAM in a classical computer: it's where work is stored at an intermediate stage while the CPU operates on other bits of active memory, swapping data in and out of the working space and the storage space. Something like that is going to be required for fully functional quantum computers as well. Steve does an incredible job, I think, of taking us through what exactly quantum memory is, what the design of that memory is going to entail, how it's going to be accessed, things like error correction, and some of the roots of these ideas, even going back to the beginnings of classical computing.
So it's a really fascinating conversation. This interview with Steve comes to you with support from Quantum Circuits, the New Haven based company that was launched out of Yale and cofounded by one of Steve's colleagues, Rob Schoelkopf, another key figure in superconducting qubits. Quantum Circuits is focused on delivering its full-stack Aqumen system, which takes the approach of correcting errors first and then scaling. Quantum Circuits is leading the way in error detection built into its dual-rail qubits. Let's dive in and hear the conversation with Steve.
Enjoy. Welcome back, Steve.
Well, thank you, Sebastian. It's a pleasure to be back, and I usually have fun chatting with you. It's good to see you again.

Same, let's hope so. Yeah. So I was hoping today, Steve, that we could talk about your work in quantum memories. It's a topic that doesn't come up as often in quantum computing because we're focused on, you know, numbers of qubits and fidelity of qubits and error-corrected qubits and the computation part, the actual workhorses of quantum computing.
But quantum memory: I remember going to QIP a few years ago and realizing that at least half of the talks started with, well, assuming we have quantum memories, we'll be able to do x, y, and z. So it's a really critical part of an eventual productive architecture. So what is quantum memory?
Good question. So let's start with the fact that there are many different technologies for realizing quantum bits, qubits, and generally they don't faithfully hold their information nearly as long as we need them to. They're quantum objects whose states are superpositions of some ground state and some excited state, like in an atom or in a superconducting circuit, and things go wrong. The energy leaks off into the environment, or the frequency at which this thing is oscillating, because of quantum mechanics, fluctuates a little bit, and you lose track of a quantity which is not present in classical bits: the phase of the superposition. So you can have bit flips and a new kind of error called a phase flip.
And so there's a lot of work just trying to get single qubits to remember their state long enough that we can do computation and so forth. So that's one simple answer.

That would be, I mean, essentially coherence time. Right? That's sort of the generic way to describe it. Yeah. Okay.
Exactly. And, you know, there's a fundamental theorem of quantum computer science: there's no such thing as too much coherence.

Yeah. If we make it a hundred times longer, the algorithm people are gonna say, great, I want to run a program a thousand times longer.

Right. Right.

There is no limit.

I guess the corollary to that is the universe does not wanna give us more coherence time either.
Right. Yeah. It seems to be like the laws of thermodynamics, you know: the first law is the best you can do is break even, and the second law is you can't break even. But then there's a more interesting thing, and I don't know whether to call them data structures or memory architectures.
But, you know, let's remember that in, like, a classical processor, a traditional processor, you have cache memory and you have RAM, and in my day we had tape and so forth. But there are different speeds of access for these things.

And time scales. And, I mean, there's...
Time scales.

There's far more random access memory in a classical computer than there is actual working space in the CPU. Right? Sort of temporary storage where...
Exactly. Yeah. Exactly. And it's possible to make cache memory you can access very quickly, but not at the terabyte scales that we use. So you need slower but bigger memory. So there's that kind of architecture and all the crazy things you have to do to keep track of whether my bits are in cache or over here. You know? All that stuff, which I can't imagine how it works, but people have figured out how to make

it work. Well, it's easier in that you can measure a classical bit without destroying it. They have that advantage.
Exactly. Qubits are a little trickier than the bits we've got classically. But then there is this interesting structure called a QRAM, a quantum random access memory, which I'll attempt to describe. It would be easier if I could show pictures, but I'll try to describe this in an audio-only format. And what is this good for?
Well, and it doesn't exist yet, let me explain that. Nobody's actually built one yet. We and others are working on it. But it's a not-well-advertised, often-swept-under-the-rug feature that if you wanna do anything with big data in a quantum computer, you need a way to get all the classical data encoded into some quantum state inside the quantum computer if you're going to do calculations on it.
So people talk about quantum-enhanced machine learning and various kinds of big-data tasks. And those would be enabled by quantum random access memory. But, you know, they don't exist yet, not even crude ones. We don't even have ones that don't work yet. We just don't have any. Right.

We have qubits that don't work, but we don't have quantum RAM that
doesn't work. Exactly. Exactly. So I believe that as we build larger and larger and more and more fault-tolerant quantum computers, we'll be able to do interesting things, but things involving big data will be the last thing that we succeed in doing. Okay. So that's my giant caveat at

the beginning. Yeah. Yeah. Well, I think it's worth pointing out: when you say big data, you mean literally anything larger than the number of qubits that we physically have in a device. Right?
So, in fact, that's right. As I'm going to explain, obviously a classical memory has to have as many memory cells as the data you wanna put in there. And the same is true in quantum random access memory; you actually need maybe about twice as many, because of something I'll explain. But we don't have exponentially large numbers of qubits.
And if we did, they wouldn't work well enough to treat exponentially large problems. But let's talk about how you define the size of the problem here. Say we have a memory, whether it's classical or quantum, and the data lives at an address, and there are n bits in the address. Then you can have two to the n different locations. Okay. So we'll say the size of the memory, and the size of the problems we're addressing, is exponential in the number of address bits.
Okay. So if I had, you know, a 20-bit address, that would be about a million sites. That's not very big by classical standards, but it's enormous by quantum standards; I don't think the world has built a million qubits yet.
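A quick back-of-the-envelope check of the counting Steve is describing here, as a short Python sketch; the particular bit widths are just illustrative:

```python
# Memory capacity grows exponentially with the number of address bits.
for n_address_bits in (10, 20, 30):
    n_locations = 2 ** n_address_bits
    print(f"{n_address_bits} address bits -> {n_locations:,} locations")

# 20 address bits -> 1,048,576 locations: modest by classical RAM standards,
# but a QRAM serving it needs on the order of a million physical components.
```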

Probably not.
Okay. So, just so everybody's starting with the simple story: how does, or how shall we pretend, an ordinary classical RAM works? You give it your address string, and it goes and fetches the data at that address and puts it into a register. And then you...

Where you can actually work on it.
You can work on it. You can see what it is. You know? Okay. So we're gonna start with classical data, you know, some database, a phone book, something.
And we're gonna put the data into a quantum register: you give it an address, and it gives you back a quantum register that has the data in it. So far, it's exactly the same. But there's this weird superposition principle. Now imagine that I send to the memory a quantum state which is a giant superposition of all possible addresses I could give it. Right.
And then what comes back is entangled: with each address is the data associated with that address. Okay. So if your head isn't hurting yet...
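To make the entangled query Steve describes a bit more concrete, here is a minimal state-vector sketch in plain NumPy; the two-bit addresses, one-bit data, and the contents of the `data` table are all made up for illustration:

```python
import numpy as np

# Toy classical database: 2 address bits -> 4 entries, each storing 1 data bit.
data = [1, 0, 0, 1]                 # D(x) for x = 0..3 (made-up values)
n_addr, n_data = 2, 1

# Basis ordering |address>|data>, so the index is x * 2**n_data + d.
dim = 2 ** (n_addr + n_data)
state = np.zeros(dim)

# Send in a uniform superposition of addresses; what comes back is the
# entangled state (1/sqrt(N)) * sum_x |x>|D(x)>.
for x, d in enumerate(data):
    state[x * 2 ** n_data + d] = 1.0
state /= np.linalg.norm(state)

# Measuring this state just collapses it to one random (address, data) pair,
# which is why algorithms have to consume it coherently to get any advantage.
probs = (state ** 2).reshape(2 ** n_addr, 2 ** n_data)
print(probs)
```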

Of course. That's why I do this podcast because I really want my head to hurt all
the time. You know, you missed the memo somewhere. Yeah. So this is an amazing, complicated, entangled state, which in itself is pretty interesting from a scientific point of view. But you've now encoded all this classical data, which the quantum random access memory fetched from some classical register, into this quantum register.
It's all in there all at once, the whole database, even though it's sitting in, like, one register, or two registers, one for the address and one for the data.

And you don't know which address it is, but you know that the data that goes with that address is in the other register. So if you were to measure what's in there, it would randomly collapse to one address and the associated data. But that's kinda useless, because it doesn't offer any quantum advantage. You have to keep this entangled quantum state.
And there are algorithms. People have thought of classification of large data sets with machine learning methods, and so on. Or suppose you wanna do linear algebra, you know, solve some equations on giant matrices whose entries you looked up from the classical data, something like that. You need this resource in order to execute these algorithms that people have figured out. And Scott Aaronson calls this the fine print that...

Right.
Getting this exponentially large amount of data into your quantum system: sometimes people only put that in very fine print.

They hand-wave past it. Yes. Exactly. Okay. And this also, just generically, this would be, like, state preparation? Is that sort of generically what...
Yeah. Let's talk about what you can do with this. So you can take classical data and put it into a quantum state.
Say I want to produce a certain quantum state because somebody told me it represents the ground state of some big molecule, you know, what all the electrons are doing. Or you wanna prepare this state because you need it as the input to running some algorithm. Or the important thing about the state is that it has all the numbers for some linear algebra problem you want to solve. Okay. So state preparation is one thing.
Another thing is construction of oracles: black boxes which compute a function.

Right. And this, in quantum algorithms, goes all the way back to Deutsch-Jozsa. Right? I mean, that's
a black-box function. Yeah. So you could think of it this way: I said, oh, you put in an address, you get back some data. But you could say, oh, the address is, you know, x, and I want to know the value of the function f of x. Right.
And that's stored as data at location x, and I bring it back. So, basically, oracles evaluate a function that takes a binary input and gives you some binary output. And that's, you know, what all classical algorithms do: you give them an input, and they give you some output that's a function of it. So that's a very powerful, general thing that many quantum algorithms use.
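The bookkeeping Steve is sketching is the usual reversible-oracle convention, where the function value is XORed into an output register so the whole operation can be undone. A small classical sketch of that convention on basis states, with a made-up example function:

```python
def f(x: int) -> int:
    # Example function: parity of x (purely illustrative).
    return bin(x).count("1") % 2

def oracle(x: int, y: int) -> tuple[int, int]:
    # On basis states the oracle acts as (x, y) -> (x, y XOR f(x)).
    return x, y ^ f(x)

for x in range(4):
    print(x, "->", oracle(x, 0))              # with y = 0, the output register reads f(x)
    assert oracle(*oracle(x, 1)) == (x, 1)    # reversible: applying it twice is the identity
```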
Okay. Good. So there are a bunch of things like that where this QRAM, you know, is just a RAM that works in superposition, or a function evaluator. It's a way of taking stuff that's encoded classically and letting your quantum computer use it. Okay. So what's the architecture of these things? What do they look like? Well, it looks kind of like an upside-down tree. It's a binary tree. The root is at the top.
And then, you know, imagine driving down the road and you come to a fork in the road. And as my friend Shankar says, because it's quantum mechanics, you take it.

Yeah.
But let's just think classically for a minute. I choose to go left or right instead of both. And then I make another decision, left or right, left or right. And I branch into smaller and smaller branches. And at the very bottom of this upside-down tree, or this funny road network, are the leaves where the classical data is waiting to be picked up and returned.
So you have to have n levels of forks in the tree so that you end up with two to the n leaves you could end up on. And at the first level, you need a router that somehow decides where to send things. There's one router, then two routers, then four routers, then eight routers. Pretty soon, you've got a lot of routers.
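Counting the hardware in the tree Steve describes: one router at the root, doubling at each level, so an n-bit address implies 2^n leaves and 2^n - 1 routers. A quick tally, with nothing assumed beyond the tree structure itself:

```python
def qram_tree_counts(n_address_bits: int) -> tuple[int, int]:
    """Return (routers, leaves) for a full binary routing tree."""
    routers = sum(2 ** level for level in range(n_address_bits))  # 1 + 2 + 4 + ...
    leaves = 2 ** n_address_bits
    return routers, leaves

for n in (3, 10, 20):
    routers, leaves = qram_tree_counts(n)
    print(f"n = {n}: {routers:,} routers, {leaves:,} leaves")
# n = 20 already implies about a million routers: the chessboard kind of growth.
```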

Yeah. I remember the story about the chessboard and the grains of rice. Yeah.
Yeah. That made a big impact on me when I was

It's a good one.
Isaac Asimov's One, Two, Three... Infinity is where I first read that. Okay. So what I need to do is, somehow, given the address, route a quantum bit down through this tree to the place where it can pick up the correct classical data. Let's say it's returning only one bit, just to keep it simple. So we're thinking hard about how to build these routers.
And then there's this complicated thing: how do you tell the routers what the address is so they know which way to route you?

Right.
And, without going into details, there's this cool version of this called bucket brigade, which is actually how classical memory works, where you send in the address bits, and the most significant bit stays at the top, and then you use that to route the next most significant bit, and then you use those two to route the third most significant. And eventually it kind of lights up a path down through the tree that guides your extra qubit to go down and pick up the classical data. So this is doable. It's a very challenging problem, because you have to route quantum data using a quantum decision about whether to go left or right, meaning you may have to go both left and right if it's in superposition.
So it's quantum routing of quantum data. And, you know, we sort of have demonstrated some of the right technology, but we haven't put it together yet into a useful QRAM. Superconducting qubit people are thinking about it. Rydberg atom people are thinking about it. And, you know, we'll be able to do small demonstrations soon, but building a gigabit QRAM is a long way off.
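A classical caricature of the routing Steve describes may help: each address bit configures one level of the tree, carving out a single path from the root to the leaf that holds the data. This sketch only captures the classical logic; in the quantum version the routers can sit in superposition of left and right, which is the hard part. The function and data here are my own illustration:

```python
def route_and_fetch(address_bits: list[int], leaf_data: list[int]) -> int:
    """Classically route down a binary tree: each address bit, most significant
    first, sets the router at its level (0 = left, 1 = right); the carrier then
    follows that single lit-up path to a leaf and brings its bit back."""
    assert len(leaf_data) == 2 ** len(address_bits)
    leaf_index = 0
    for bit in address_bits:          # descend one tree level per address bit
        leaf_index = 2 * leaf_index + bit
    return leaf_data[leaf_index]

data = [0, 1, 1, 0, 1, 0, 0, 1]                # 3 address bits -> 8 leaves (toy values)
print(route_and_fetch([1, 0, 1], data))        # address 101 -> leaf index 5 -> prints 0
```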

So the approach you've been taking at Yale, I think, has primarily been in sort of the upper energy levels of a superconducting qubit. Right? Like, the qutrit space?
So yeah, we have some options. We have these transmon qubits that were developed here at Yale, and like many things we call qubits, they actually have more than two levels. If you use the first three levels, you have a qutrit. And in one version of the architecture, the routers need three states: sort of idle, or route left, or route right.

Right. Okay.
Well, a wait state, I guess. So it's called W, R, and L: wait, right, and left. And you could do that with three levels. You can actually do it with two levels.
It's a little bit simpler in some ways. So that's one quantum object you could use. We've done some work recently with my former postdoc, Danny Weiss. When I say we, I mean Danny; Danny did this quite nice work, where the qubit is what we call a dual-resonator, or in the optics language, dual-rail qubit, where you have two boxes and there's one total microwave photon. It's either in the left box or the right box or a superposition.
Those are the qubit states. And there are some advantages to using those, because it's easy to move microwave photons down through a tree structure like this using what we call beam splitters. And the fidelity with which we can do that is quite good: three nines for each hop. So after ten hops, you're still at about 99%. So he came up with

So sorry, would each hop then relate to one branching of the tree? So you'd have two to the ten of register space at the end of ten hops? Okay. Great. That's not bad.
Yeah. It's not bad. And it's interesting that some earlier work, which was pioneered by Connor Hann when he was a student here with Liang Jiang, looked at, you know, the error situation.
If we have n bits of address, we have exponentially many qubits and routers and all these components, all of which are having errors. And so if, at the bottom of the tree, there were a million leaves and the probability of an error was, you know, one in a thousand, then on average you'd have a thousand errors. Mhmm. So it sounds hopeless.
You could never do this. But he showed that, for tricky reasons, because this is what's called a shallow circuit, the depth is only equal to the number of address bits, so the depth is only 10. And if you think about any one path down through the tree to its intended address, that's just one out of a million. And chances are there's only roughly one chance in a thousand that that particular path has the error.
So it turns out the error situation is much less drastic. It's like one of the few times in my life when, you know, it wasn't the worst-case scenario.
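Putting rough numbers on the shallow-circuit argument Steve attributes to Connor Hann's work: a million components, each failing with probability one in a thousand, means about a thousand faults somewhere in the tree, but any single query only traverses a path whose depth equals the number of address bits. The figures below are back-of-the-envelope arithmetic, not results from the paper; the three-nines-per-hop number is the one quoted earlier in the conversation:

```python
p_error = 1e-3                      # error probability per component (illustrative)
n_address_bits = 20
n_components = 2 ** n_address_bits  # roughly a million leaves and routers

print(f"expected faults somewhere in the tree: ~{p_error * n_components:.0f}")

# But one query only touches a path of depth n (a "shallow" circuit):
p_path_ok = (1 - p_error) ** n_address_bits
print(f"probability the queried path is fault-free: {p_path_ok:.2%}")

# Same arithmetic for the dual-rail hops: three nines per hop, ten hops deep.
print(f"ten-hop routing fidelity at 99.9% per hop: {0.999 ** 10:.2%}")
```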

It's better than we expected. I do think working in the space that you're working in requires a very thick skin about worst-case scenarios.
Right? Yeah. Exactly. So people were in a panic about this, but Connor's work and some other earlier work showed that, you know, it's actually not fatal. Let's put it that way.

And would there be a way, potentially, to do error correction in that quantum memory space? I mean... Yes.
It's expensive, because it was already expensive to get a million qubits. Now you need a million logical qubits, and you need sort of routers which are somehow fault tolerant as well. So, you know, people are starting to think about this, but it's gonna be complicated and expensive.

Well, and going back to what you said about coherence time, you know, you've done some work in three-dimensional cavities showing coherence of up to, like, a second, I think. Right? I mean, really, really long.
We haven't... I mean, the one-second cavity work was done at Fermilab.

Okay.
And then we've done sort of, you know, a half to ten milliseconds at Yale. And then one of our alumni from Rob Schoelkopf's group, Serge Rosenblum, in Israel, has built a really nice thirty-five-millisecond-lifetime cavity that has a qubit in it, and he can control it. He probably holds the world record for the largest Schrodinger cat state, which has 24 photons in it. Amazing.

So, I guess my question is, does expanding the coherence time improve the outlook for quantum memory?
Yes. Even at the single-qubit level, for one dual-resonator, dual-rail qubit, if those cavities have a ten-millisecond lifetime instead of a one-millisecond lifetime, that's a huge advantage. And as I said, if you assemble millions of these together, now instead of having a thousand errors, you might have only a hundred errors. And in any case, that's out of a million possible places the errors can happen; if there are only a hundred of them that are bad, the fidelity is actually still quite good, by this funny argument that we came to understand.

That's interesting. So is it safe to say that the fact that Quantum Circuits Incorporated and Alice and Bob and Nord Quantique, a number of new entrants in the superconducting space, are preparing to launch, or are launching, commercially available devices using these alternative superconducting qubit designs you've mentioned, like cat qubits and dual rail, may be a step towards at least providing the materials and resources for people to do more experimentation in quantum memories?
Yes. I think, you know, there's a sort of threshold theorem: if you get the error rate low enough, then building larger and larger error-correcting codes will make things better and better instead of worse and worse. And so the hope is that by devoting some effort at the very lowest level of the hardware, the individual qubits, getting their lifetimes and their gate fidelities higher and higher so that you're way below this error threshold, the rate at which the memory lifetime grows as you make the code larger and larger is very, very fast. If you're just barely below the threshold, it's barely growing. And we sort of feel the right way around to do things is to get well below threshold and then scale up.
That's sort of our mantra. Yeah.
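A hedged sketch of that "get well below threshold, then scale" intuition, using the standard rule of thumb that the logical error rate of a distance-d code falls off like (p/p_th) raised to a power that grows with d. The formula and the numbers are the generic textbook heuristic, not Quantum Circuits' figures:

```python
def logical_error_estimate(p_physical: float, p_threshold: float, distance: int) -> float:
    """Rule-of-thumb scaling for a distance-d code: ~ (p / p_th) ** ((d + 1) // 2)."""
    return (p_physical / p_threshold) ** ((distance + 1) // 2)

p_th = 1e-2
for p_phys in (9e-3, 5e-3, 1e-3):   # just below, comfortably below, well below threshold
    rates = [logical_error_estimate(p_phys, p_th, d) for d in (3, 5, 7, 9)]
    print(f"p = {p_phys:.0e}: " + ", ".join(f"{r:.1e}" for r in rates))

# Barely below threshold, the logical error rate improves slowly as the code grows;
# well below threshold, it plummets, which is the argument for fixing errors first.
```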

Yeah. I mean, it really is fascinating to me how the superconducting qubit is evolving into these alternative, sort of more challenging, I think, mechanical designs, right? Electronic designs. Yes. Harder to fabricate, but the end result is a higher-quality qubit.
Yeah. That's right. And, you know, some of them, like the 3D cavities, are physically large, sort of centimeter scale.

And you call that large? One centimeter. Wow.
Yeah. But you can make 2D versions of this with micromachining that make the total volume much smaller. And, you know, as I may have said the last time I was on here, Rob Schoelkopf likes to say that in a cubic-meter fridge there are a million cubic centimeters, and it would be a quality problem to run out of qubits at that level.

Indeed. Yeah, indeed. May we get to that problem in our lifetimes.
Exactly. Exactly. Yeah. So, you know, this is an interesting thing: people talk about the full stack, you know, from the hardware to the controllers and the compilers, all the way up to applications. And we need interdisciplinary work at every level of that stack to get this thing working. And just think about the investment that's been made in the classical software stack. I mean...

Absolutely.
When I started programming, the first time I saw a cathode ray tube... do people even know what that is? A screen, let me just call it a screen... and you could edit instead of making a new punch card. Right. I thought, wow. You know?

Well, going even further back, right? I mean, this is why, when I'm in conversations with you and others who are at the limits of designing and trying to build these architectures, I think about von Neumann at the Princeton IAS, right, using cathode ray tubes as RAM, as the first classical memories. And there was a mercury delay line too that they were using to sort of... Yeah.
They had these long tubes of... Right... mercury, which, you know, wouldn't be allowed today. Yeah. And they would launch a sound wave in one end, and it would come out the other end, like, a millisecond later. And then they would have to send it back to the beginning and keep it going around and around, through a little amplifier, to store it for more than a millisecond. And this was a big deal in those days.

Exactly. Exactly. So, I mean, it seems like impossible challenge on top of impossible challenge, but that's probably how von Neumann and the rest of the crew at Princeton felt too.
Absolutely. And some of the early concepts of fault tolerance in networks of switches and computer networks were invented by von Neumann. In that era, he was partly inspired by the fact that the thing would only run for a few minutes before one of the vacuum tubes would blow out. But he was also interested in how the human brain, you know, which is just some random wiring of neurons with feed-forward and feedback, could reliably compute.

And it's a good question.
Yes. And maybe it's...

Can it reliably compute?
It's not, exactly. But...

We'll find out.
Those ideas of classical fault tolerance play a big role, you know, in the quantum drive to achieve fault tolerance: repetition codes, all those things. So, you know, we're still building on, still benefiting from, those early crude experiments.

Yeah. Absolutely. Excellent. Well, once again, Steve, an excellent conversation. Thank you very much. It's been really enjoyable and a fascinating look into yet another impossible challenge in quantum computing. Right.
I mean, we can only hope that future generations will look back and say, boy, that must have been hard back in, you know, 2025, but look how far we've come.

Exactly. Exactly.
We can hope for that. We can. Yeah. Thank you. Pleasure. Thank you very much. Yep.

That's it for another episode of the podcast. Thanks for joining. I want to thank Steve Girvin for a delightful conversation, and I want to thank APS and Quantum Circuits for their support. You can find past episodes on the web at www.newquantumera.com.
You can find us on Bluesky at @newquantumera.com. If you have any feedback, Bluesky is the best place to give it; you can just reply to the episode you're commenting on. We'll see it, and we'll definitely respond. So any feedback, any suggestions for future guests or future topics, all are welcome. The podcast is a production of The New Quantum Era, with music by Omar Costa Hamido, and your host is Sebastian Hassinger. Thank you, and see you next
time.