
Bridgewater's Greg Jensen on AI, Inflation and What Markets Are Getting Wrong

Jul 03, 2023 · 58 min

Episode description

Every industry is trying to figure out just how AI or Large Language Models can be used to do business. But Bridgewater Associates, the world's largest hedge fund, has already been at it for a long time. For years, it has explored AI and adjacent technologies in order to analyze data, test theories, develop novel investment strategies and help its employees make better decisions. But how does it actually use the tech in practice? And what's next going forward? On this episode, we speak with co-CIO Greg Jensen about both the possibilities and limitations of these advances. We also discuss markets and macro, and why he believes that investors are still too optimistic about the Federal Reserve's ability to get inflation back to target. 

See omnystudio.com/listener for privacy information.

Transcript

Speaker 1

Hello, and welcome to another episode of the Odd Lots Podcast. I'm Tracy Alloway.

Speaker 2

And I'm Joe Weisenthal.

Speaker 1

Joe, I think it's fair to say there is a lot of excitement about investing in AI. There is also a lot of excitement about using AI to invest.

Speaker 2

Yes, I mean, I think there's like a new chat ETF I saw an ad for, and I think I saw another project that was like, we're gonna have ChatGPT pick the stocks for us. And you know, I get it. It's kind of exciting, and maybe there's some new way of like these super advanced digital brains that can beat the market, et cetera. But like, I don't totally get it.

Speaker 3

Well.

Speaker 1

I also feel like there's a tendency nowadays for people to talk about artificial intelligence in a sort of abstract manner. You hear people bring up AI almost as a synonym for just software at this point. I think you pointed out recently that the Kroger CEO mentioned AI a number of times on the earnings call. So a supermarket chain, right.

Speaker 2

Yeah. And you know, machine learning, tech, algorithms, quantitative investing, it's all existed for a long time. But it feels like, because of the excitement around a few specific consumer-facing products that have been unveiled over the last six months and the way they've captured people's attention, suddenly there's a lot of interest in how companies are using this tech to do something.

Speaker 1

Yeah, well, I'm glad you mentioned that, because today we really do have the perfect guest. This is someone we've actually spoken to about AI before, last year in fact, someone who is at a firm that has a lot of experience using machine learning of different types, and we're going to get into the differences between all those technologies. I'm very pleased to say we're going to be speaking once again with Greg Jensen, the co-chief investment officer at Bridgewater Associates. So, Greg, thank you so much for coming back on Odd Lots.

Speaker 3

Yeah, it's great to be here. Exciting topic.

Speaker 1

Yeah. So I actually revisited our conversation from last year, I think it was in May of twenty twenty two, and you said two things that stuck out in retrospect. Number one, you said that markets had further to fall, which turned out to be correct. And two, you brought up artificial intelligence as a major point of interest for Bridgewater, and this was all before ChatGPT really became a thing and everyone started talking about AI at every single conference and earnings call and so on. So I guess, just to begin with, maybe you could lay the scene. Going back to Joe's point in the intro, we are used to hearing these terms. Bridgewater does machine learning and systematic strategies and quantitative trading strategies and AI and things like that. What's the difference between all of these things and how do they relate to each other at a firm like Bridgewater?

Speaker 3

Yeah, great question. So I think to answer that, let me take a step back for a second and give you a little bit of my background, because it all kind of comes together in a way that you can connect these different pieces. So you know, even as a kid or whatever, I was certainly interested in kind of translating and predicting things using some mix of my thinking and technology. I can think back to, in the late eighties, using Strat-O-Matic baseball cards, if you know what they are, and programming them into computers to try to calculate the way to create the best baseball lineup and use that in fantasy baseball type situations, and similar things with poker and whatever, and trying to learn how to use technology combined with human intuition to get at different ways to create edges.

And then in college, when I heard about Bridgewater, it was a tiny place at the time, but the basic idea that there was a place that was trying to understand the world, trying to predict what was next, but doing that by taking human intuition and translating that into algorithms, kind of mixed two things that I loved. I love to try to understand the world, and I love the idea of having the discipline to write down what you believe and stress test what you believe and utilize that.

So if you go back, and this is now in the nineties, kind of where artificial intelligence was at the time, most of the focus was still on expert systems, still on the notion that you could take human intuition, you could translate that into algorithms, and if you did enough of that, if you kept representing things in symbolic algorithms, you could build enough human knowledge to get kind of a superpowered human.

And Bridgewater was a rare example of where that worked. Given the focus of trying to predict what was next in markets, given the incredible investment that we made into creating the technology to take human intuition and translate that into algorithms and stress tests, it's an incredibly successful expert system, essentially, that was built over the years. I'd say probably the most profitable expert system out there. And that's really what Bridgewater has been about, which is building this great technology to help us take human intuition out of the brain, get it into technology where it's both readable by, let's say, investment experts, but also runs on a technology basis. And that's where the mix of algorithms and human intuition was really important.

You know, if you go through the history of our competitors, it's littered with people that tried to do something more statistical, meaning that they would take the data, run regressions, and then, after regressions, let's say basic machine learning techniques, to predict the future. And the problem that always had is that there wasn't enough data. The truth is that market data isn't like data in the physical world, in the sense that, one, you only have one run through human history. You don't have very many cycles. Debt cycles could take seventy years to play out, and economic cycles tend to play out around every seven years. There's just not enough data to represent the world. And secondly, the game changes as participants learn. So the existence of algorithms, as an example, changed the nature of markets such that the history that preceded it was less and less relevant to the world you're living in. Those are big problems with, let's say, a more purely statistical approach to markets. So you had to get to a world where statistical techniques or machine learning could substitute for human intuition.

And that's really where the exciting leaps are now. You're getting closer. It's not totally there, but you're much closer than you've ever been, where large language models actually allow a path to something that at least mimics human intuition, if not is human intuition, and you can then combine that with other techniques, and suddenly you have a much more powerful set of tools that can, at least, take a big leap forward on dealing with the problem of very small data sets and the fact that the world changes as people learn. Up until the big breakthroughs in large language models, I think we were much further away from that. So that's a huge change in the limits of the ways that statistical machine learning could handle small amounts of data, or a world where the future varies from the past. On all of those problems, we're closer to having at least ways to take on more and more of what humans have done at Bridgewater, what humans generally do in investment management firms. And that's a huge leap forward that's going on now.
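For readers who haven't seen an expert system, here is a minimal sketch, in Python and with made-up inputs and thresholds, of what it means to encode a human intuition as an explicit, inspectable rule rather than a statistical fit. The rule and its inputs are purely illustrative assumptions, not Bridgewater's actual logic.

```python
from dataclasses import dataclass

@dataclass
class MacroSnapshot:
    growth_surprise: float      # realized growth minus what markets had priced in
    inflation_surprise: float   # realized inflation minus what markets had priced in

def equity_signal(snapshot: MacroSnapshot) -> float:
    """A hand-written rule encoding one piece of (hypothetical) human intuition:
    equities tend to do well when growth surprises to the upside and inflation
    surprises to the downside. Returns a signal clipped to [-1, 1]."""
    raw = snapshot.growth_surprise - snapshot.inflation_surprise
    return max(-1.0, min(1.0, raw))

# Because the rule is written down symbolically, a person can read it, stress
# test it over history, and argue about whether the logic makes sense, which
# is the point of an expert system as opposed to an opaque statistical fit.
print(equity_signal(MacroSnapshot(growth_surprise=0.4, inflation_surprise=-0.2)))
```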

Speaker 2

I have one very quick question. I realized just now that not long after we talked last year, last spring, like a month later, you won your first World Series of Poker bracelet. So congratulations on that. I only mention that because you mentioned poker. Did you play the World Series this year?

Speaker 3

I'm heading out actually after this.

Speaker 2

Oh okay, congrats and good luck.

Speaker 3

Yeah. And it kind of connects to this, because I don't get to play very much poker, but I really studied what machines were learning about poker. So much has been learned in the last five years, ten years, and basically I was trying to translate that into intuitions that I could use. You can't actually replicate the way the computers play poker, it's a very complex thing, but you can pull the concepts out, right. And this actually mirrors part of what we're doing at Bridgewater, which is that as you get to computer-generated theories, if you can pull the concepts out of these complex algorithms, you can make more of a human assessment of whether they make sense and what the problems might be. And that's really a big deal. So there's actually a link between what I'm doing in poker, imperfectly for sure, and many of the concepts that we're trying to apply at Bridgewater. And like you said, we had talked kind of before the LLMs had really hit the public scene.

But yeah, just to give you a little bit of background on me, if you go back to twenty twelve, first off, we brought Dave Ferrucci, who had run the Watson project at IBM that beat Jeopardy, into Bridgewater. And that was a time when I was trying to experiment with, okay, what can we do with more machine learning techniques? And Dave was trying to take what he had done to win at Jeopardy but actually put in more of a reasoning engine, because while what happened on Jeopardy was impressive, it was pure data. It had no idea why it was doing what it was doing, and therefore a lot of the path with Watson was going to be very hard to move forward with, because at its core it was just statistical and it didn't really have any reasoning capability. So Dave came to Bridgewater and later partnered with Bridgewater to roll out a company, Elemental Cognition, that's focused on using large language models, et cetera, but overlaying a reasoning engine that essentially helps with things like the hallucinations that large language models have, and focuses on what human reasoning is, how it works, and how it rules out views that are unlikely to be true.

So that's one thing. And then in twenty sixteen or seventeen, I was introduced to OpenAI, actually as they transitioned from a charity to a company. I was one of the investors in that first round, and I met a lot of the people and looked hard at their vision of using scale, technical scale, to build general intelligence and build reasoning. So I was both working with Dave Ferrucci and got to know many of the people at OpenAI at the time and moving forward with those things. And then I was literally the first check for Anthropic and other large language model people that had been at OpenAI. So I've been passionate about this, trying different paths to how we will build a reasoning engine to overlay on statistical things, and there were a couple of different approaches being applied at the time, and obviously they panned out to different degrees. But many things are coming together now to say, okay, you can actually, at a pace and a speed humans can never match, replicate human reasoning. And that's a huge deal. And if you could really break through that, you could start to apply it in so many ways in our industry, I believe, and obviously way beyond our industry.

Speaker 2

You talked about earlier generations trying to embed human knowledge. And I'm wondering if an analogy is, like, I remember when Deep Blue came out, and they had all the grandmasters sort of work with IBM to come up with this great computer program that was basically as good as, or eventually better than, Garry Kasparov. But then the next generation of chess computers didn't even have the grandmasters playing it. It just learned the game from the ground up and crushed those previous generations. Is that sort of what we're talking about here with the transition from earlier engines to the new sort of LLMs, which is that the reasoning comes out of the computer rather than having to be taught directly by the experts?

Speaker 3

Yeah, I think something like that is happening, right. You got that in chess because once you had enough data and enough compute, you were able to do enough sampling that you got to the point where the pure data process, with good human intuition on how to build that data process, but a data process, was able to beat those rules-based things. Now, chess, unlike markets, is a little bit more static, in the sense that while there are adversaries, and the adversaries will try to learn your weaknesses, it's more static in that the rules of the game are steady and those types of things, so that sampling could work, right.

Although it was interesting, and I love this because it is an analogy to some of the problems that pop up and will pop up: take AlphaGo, right, on the game Go, which came after chess, obviously. Google was able to create this system that was beating the pros, radically beating the pros, killing everybody, and getting better and better and better. Although, you know, I don't know how up to date you are, but then there was this loophole in it, where another person, who was a mediocre Go player but a computer scientist who thought there might be a hole in this super AI, used a little program to find the hole.

And what it illustrated was that the AI had no idea how to play the game, because the mistake the AI was prone to was a mistake a six-year-old playing Go would never make. The way Go works, if you encircle the other guy's pieces, you eliminate them all. And something that would never work in a human game is to make a really big circle. Because it never came up in human games, and because when they perturbed human games and started playing computer against computer they basically started with a seed of human games, they never perturbed it enough to try this out, to try a massive circle. A human would never let the massive circle happen. It's so easy to defend against. But actually the best Go algorithm in the world allowed it to happen, right. And so a mediocre Go player with a little bit of AI found a way to beat this incredible Go engine, because the Go algorithm at that time had this tremendous amount of data, but the things that weren't in that data it wasn't aware of, and it wasn't in any deep sense understanding the principles of the game.

So that's the type of data problem you can have even with a massive amount of data, millions and millions of games played. But to play every possible Go board, well, there are more possible Go boards than there are atoms in the universe. So it was never going to calculate every possibility, and it never got to reasoning, and therefore that was a weakness, right. On the other hand, you could blend that with even a basic reasoner. A language model could come up with an understanding of the rules of Go and be able to talk about it. There's an element of knowing those things that humans already know that's possible with a blend of, let's say, a statistical technique like AlphaGo was using and a reasoner to prevent these types of mistakes.

Speaker 1

I like that story because it makes me think I have a chance against the super smart supercomputer. Okay, that's kind of comforting. But I definitely want to ask you more about weaknesses in AI and large language models. Maybe before we do, though, just sort of setting the groundwork once again: when we see headlines like "Bridgewater restructures, will put more focus on AI," what does that mean exactly? What does it mean for an investment firm like Bridgewater to build up resources in AI? And then secondly, could you walk us through a concrete example of how AI would be deployed in a particular trading strategy? I feel like the more concrete we can get with this, the more helpful it'll be.

Speaker 3

Yeah, great. So as we restructured, one of the things is that as we've made the transition at Bridgewater, you know, from Ray having the key ownership to ownership at a board level, we have done something we hadn't done in the past, which is essentially retain earnings in a very significant way, which allows us to invest in things that aren't going to be profitable right away but are the big long-term bets that we're making, and certainly recognizing that there's a way to reinvent a lot of what we do using AI and machine learning techniques to improve what we're doing to understand the world, and accelerate that. And specifically, what we've done on the AI and machine learning side is we've set up this venture. Essentially there are seventeen of us, with me leading it. You know, I'm still very much involved in core Bridgewater, but the sixteen others are one hundred percent dedicated to kind of reinventing Bridgewater in a way with machine learning. We're going to have a fund specifically run by machine learning techniques, which will take me, Tracy, to your question of what kind of strategies you can do. That's what we're working on right now in that lab, pressing the edges of what AI and machine learning are capable of right now.

Now, there are big problems. You take large language models and they have two types of problems. The basic problem is that they are trained on the structure of language, so they usually return something that looks like a good structure of language. They don't always return accurate answers, so that's a problem. It hallucinates. It makes things up, because it's more focused on the structure of what word or what concept would come next than on whether what comes next is accurate.
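For readers who want the mechanics behind "it's trained to predict the next word," here is a toy sketch, with a tiny hand-built table standing in for a real model, of how pure next-word prediction can produce fluent text that drifts into a wrong claim. The vocabulary and probabilities are invented for illustration only.

```python
import random

# Toy next-word distributions. A real language model learns billions of such
# weights from text rather than having them typed in by hand.
NEXT_WORD = {
    "inflation": {"has": 0.6, "expectations": 0.4},
    "has": {"been": 1.0},
    "been": {"stubbornly": 0.5, "falling": 0.5},
    "stubbornly": {"low": 0.7, "high": 0.3},   # both look plausible to the model
    "falling": {"recently": 1.0},
}

def continue_text(prompt: list[str], steps: int, seed: int) -> list[str]:
    """Repeatedly sample a likely next word. Fluency is guaranteed by
    construction; factual accuracy about the current world is not."""
    random.seed(seed)
    words = list(prompt)
    for _ in range(steps):
        dist = NEXT_WORD.get(words[-1])
        if not dist:
            break
        choices, weights = zip(*dist.items())
        words.append(random.choices(choices, weights=weights)[0])
    return words

# Two runs can both read smoothly, but one may assert something outdated or
# simply wrong, which is the hallucination problem in miniature.
print(" ".join(continue_text(["inflation", "has"], steps=3, seed=1)))
print(" ".join(continue_text(["inflation", "has"], steps=3, seed=4)))
```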

Speaker 1

Can I just say, when I hear AI hallucinations, it becomes so science fiction for me. It's very robots-dreaming-of-electric-sheep. It's just so surreal.

Speaker 3

Yeah, well, I mean, in this case you can imagine what's happening, right, because it's just what it's trained on. The basic concept is, give it any stream of words and it'll predict, based on having read everything that's ever been read, what comes next. And if it's a little bit wrong about what comes next, it can misfire and give you something that sounds like something that could come next but is actually wrong. And it's just what it's trained on, right. It's trained to predict the next word. Slight errors in that create those types of issues.

Now, the algorithm is pretty remarkable. As I said, I've been tracking AI as an investor for a long time and looking at the technology for a long time. There's GPT one, two, three, and many versions in between, and at GPT-3 it started to have some use. GPT-1 and 2 were barely coherent. GPT-3 was somewhat usable for certain tasks. Three point five, which is what ChatGPT is, got to a certain level. On Bridgewater's internal tests, you suddenly got to the point where it was able to answer our investment associate tests at the level of a first-year IA, right around ChatGPT 3.5 and Anthropic's most recent Claude. And then GPT-4 was able to do significantly better. And these are, at least what we thought were, conceptual tests. Significantly better than our average first-year investment associate that went through training. And similarly, it's able to take the LSAT and do well, et cetera. So it can be basically pretty smart. It is pretty smart on a wide variety of things, with errors, but pretty smart on a wide variety of things, whether it's the MCAT or the LSAT or Bridgewater's internal tests or whatever. This is a big deal, that it can achieve all of those kinds of academic things. And yet it's still eightieth percentile on a lot of those things, which is remarkable, to be eightieth percentile on many, many different things. But at the same time, it's eightieth percentile for a reason. There are flaws, meaning it's not one hundredth percentile, and so you need to find a way to work through those flaws.

And that's really where, you know, if somebody's going to use large language models to pick stocks, I think that's hopeless. That is a hopeless path. But if you use large language models to create some theories, because they can theorize about things, and you use other techniques to judge those theories, and you iterate between them, you create a sort of artificial reasoner. Language models are good at generating theories, certainly any theories that already exist in human knowledge, and connecting those things together. They're bad at determining whether they're true. But there are ways to pair them with statistical models and other types of AI and combine those together. And that's really what we're focused on: combining large language models that are bad at precision with statistical models that are good at being precise about the past but terrible about the future. Combining those together, you start to build an ecosystem that I believe can achieve the types of things that Bridgewater analysts, combined with our stress-testing process and compounding understanding process at Bridgewater, can do, but it can do it at so much more scale. Because all of a sudden, if you have an eightieth percentile investment associate, technologically you have millions of them at once. And if you have the ability to control their hallucinations and their errors by having a rigorous statistical backdrop, you could do a tremendous amount at a rapid rate. And that's really what we're doing in our lab, proving out that that process can work.
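To make that division of labor concrete, here is a minimal sketch, not Bridgewater's system, of the loop described above: a language model proposes theories (stubbed out here as a hard-coded list), and a simple statistical check scores each one against historical data before any human looks at it. The function names, the synthetic data, and the correlation threshold are all illustrative assumptions.

```python
import random
import statistics

def propose_theories(n: int) -> list[str]:
    """Stand-in for a language-model call that proposes candidate theories.
    In practice this would be a prompted LLM; here we hard-code examples."""
    ideas = [
        "rising real rates depress equity multiples",
        "widening credit spreads lead equity drawdowns",
        "falling inventories precede goods inflation",
    ]
    return ideas[:n]

def backtest(history: list[tuple[float, float]]) -> float:
    """Crude statistical judge: correlation between a theory's driver and
    subsequent returns over whatever (here synthetic) history is supplied."""
    xs = [h[0] for h in history]
    ys = [h[1] for h in history]
    return statistics.correlation(xs, ys)

# Synthetic (driver value, next-period return) pairs. Real work would use the
# long, carefully constructed datasets discussed above; this is only a shape.
random.seed(7)
history = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(200)]

for theory in propose_theories(3):
    score = backtest(history)
    verdict = "worth a human look" if abs(score) > 0.15 else "weak evidence"
    print(f"{theory}: corr={score:+.2f} ({verdict})")
```

In practice the "propose" step would be an actual model call and the "judge" step a far more careful test; the point is only the shape of the loop, generate broadly, then filter statistically.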

Speaker 1

So is the idea that AI could possibly generate theses or ideas that can then be rigorously, you know, statistically fact-checked by either humans or existing algorithms and data sets? Is that the idea?

Speaker 3

Yes, and the idea goes further, but yes, that's the start. Language models can do that. Statistical AI can then take the theories and determine whether they have at least been true in the past and what the flaws with them are, and refine them, offer suggestions on how to do them differently, which you can then dialogue with. Then the other strength language models have, which humans are weaker at, is to take a complex statistical model and talk about what it's doing, and there are ways to train language models to do that. That then allows sort of a judgment to say, okay, now let's think about what's happening here and reason over what's happening.

So the way we've modeled this out is that language models can come up with potential theories. Now, there's a limit to that. It's not the most creative thing in the world, although it can generate theories at scale for sure. And again, that's language models with good tuning; you've got to tune your language models in a certain way, so it's not straight out of the box. Then you can use statistical things to control that. Then you can use language models again to take what's coming out of that statistical engine and talk about it with a human or with other machine learning agents, and report back on what you're finding and the types of theories that are out there that might run contrary to what you believe, which can lead to more tests and other things. So that's the loop that I'm very excited about.

And as I said, up until now, statistical AI was limited because it was focused on the data of markets, where the good thing about language models is that they have a much better sense of something a statistical model wouldn't really have. A statistical model of markets doesn't get the concept of greed. Language models pretty much understand the concept of greed. They've read everything that's ever been written about greed and fear and whatever. So now it can start to think about statistical results in the context of the human condition that generates those results. Big deal, and really a radical difference.
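One concrete reading of the "talk about what the statistical engine is doing" step: render a fitted model's diagnostics as plain text and hand that back to a language model, or a person, to narrate and challenge. The sketch below is only a guess at the shape of that step; the coefficients, feature names, and prompt wording are invented.

```python
def describe_model(coefficients: dict[str, float], r_squared: float) -> str:
    """Render a fitted statistical model's diagnostics as plain text that a
    language model (or a human) can reason about and push back on."""
    lines = [f"In-sample R^2: {r_squared:.2f}"]
    for name, beta in sorted(coefficients.items(), key=lambda kv: -abs(kv[1])):
        lines.append(f"- {name}: beta {beta:+.2f}")
    lines.append("Question: which of these relationships could break if the "
                 "regime changes, and what evidence would show that?")
    return "\n".join(lines)

# Illustrative coefficients from a hypothetical regression of equity returns
# on macro drivers. In the loop described above, this text would become the
# next prompt fed to a language model for narration and critique.
prompt = describe_model(
    {"real_rate_change": -0.42, "growth_surprise": 0.31, "oil_shock": -0.08},
    r_squared=0.18,
)
print(prompt)
```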

Speaker 2

Let me ask you one very simple question, and it might be one that speaks to an anxiety of listeners. If GPT can already perform at maybe the type of level that a high-quality first-year or second-year associate or analyst at Bridgewater can, does that mean fewer humans being hired at Bridgewater in the future? Or does it mean the same number, or more, humans doing even more? Is it a replacement? What does it mean for the type of person that, ten years ago, would have been a first-year employee at Bridgewater?

Speaker 3

What I think people should expect at Bridgewater, and just generally, is that things are changing quickly, and it really requires people to be capable of playing whatever role is necessary in order to do that. Like, if you turn back the clock at Bridgewater to when I started, or just before that, we had rules on how to trade, but we were using egg timers and humans to do these things. And over time, computers could do more and more of that. We kind of got to this point where, I'd say, humans settled into the role of intuition and idea generation, and we used computers for memory and for constantly running those rules accurately, et cetera. That was a transition. It got to something like fifty-fifty technology and people. And now this is another leap, right. And it's definitely true that it's going to change the roles that investment associates play. Now, exactly how, we'll see. For the foreseeable future, you're going to want people around working on those things. There are edges that these techniques I'm describing certainly won't do well for an extended period of time, and there's how to build the ecosystem of these machine learning agents, et cetera. And so what I've found, certainly with the people in the lab, is you want people who are curious about these new technologies and want to utilize them, and that's going to be a real part of the future of work.

I think it's going to be very hard in any knowledge industry to not utilize these. And we're seeing this huge breakthrough in coding, right, that is so democratizing in the sense that you really need to know what you want to code more than you need to know coding. And that's a big breakthrough. So a bunch of people that weren't as well trained or as capable in C++ or in Python or whatever can suddenly get what they want so much faster. So all of a sudden, the skill sets are changing, and they're changing in ways that I think are a surprise to many, because it's actually a lot of the knowledge work, a lot of the content creation and whatever, that people thought would come later in computer replacement that is happening faster. So the main thing, I'd say, is that right now there's so much in flux that you need flexible generalists who can have an eye towards this, an eye towards the goal, and be able to utilize whatever tools are necessary to get there. That's really where I think you're seeing a fair amount of change quickly.

Speaker 1

So you mentioned earlier that just the existence of machine learning can impact both the current environment and the future. I think you said the future data points aren't going to look like the past data points simply because machine learning exists. Does that sort of reflexivity between machine learning slash AI and markets become more of an issue as AI and machine learning become more and more popular and more entrenched?

Speaker 3

Yeah, I think it's a big deal, right. And I think it's both something that's going to cause accidents and something I'm super excited about. Obviously I'm excited about the power of this, and I think there are ways to utilize it really well. And there will also be a lot of mistakes. Like you're saying, there will be funds that will use GPT to pick stocks without really deeply understanding what's happening and why, or what the weaknesses might be. There are already plenty of cases where pure statistical approaches fail because there's not enough data and you're not building with those fundamental issues in mind. You know, not that it was directly markets, but in the housing market, what Zillow did is a great example. Zillow goes out and uses an AI technique, one that wasn't fit for purpose, for what it's worth, to predict housing prices, and then goes into the market to start buying houses that they think are undervalued, right.

And they had a couple of problems. One is, while they had a ton of housing data, it was over a relatively short period of time. So even though they had what looked like tons of data points, because they have the price of every house everywhere or whatever, there's still a macro cycle that affects everything, and that was underestimated in what they did. And secondly, they underestimated what it would be like in practice versus in theory, where it's actually an adversarial market. Every time they won an auction, there was something about that particular lot that the other people bidding on it knew that they didn't. And so it ended up obviously being a huge problem for Zillow, and they kind of had a big impact on the real estate market and then a big failure. And that's the kind of thing you're going to see over and over again, because the basic problem is that the data you're looking at isn't necessarily the data you'll face in the real world. You're not facing the adversarial problem when you're looking at that data the way they were. A statistical technique that's very good at seasonality and trend following might not be very good at understanding macro cycles, and so on. So Zillow is a case, and I think we'll see it over and over again, where it's not as simple as taking machine learning out of the package and applying it to the problem.

Even when there's a ton of data, right, some of the places where there is a lot more machine learning going on, like very short-term trading, are arguably better for machine learning because there's a lot of data and you can learn faster over that data, and there's some merit to that. And in terms of tangible places, this is now years ago, but where we started applying some of these techniques was in things like monitoring our transaction costs and looking for patterns in shorter-term data, because there's a lot more data. But on the other hand, that data can be misleading. It's like having the data of your heart rate for your whole life. You could feel like, wow, I've got every heartbeat for forty-nine years. That seems like a lot of data, but it's not. It's totally irrelevant when you have a heart attack. So even when there's lots of data, it can be misleading. And those are the types of issues that will lead to these techniques having huge problems, which means it's not as if out-of-the-box AI is going to solve all these problems. And this comes back to: you have to understand the tools, what they're good at, what they're bad at, and put them together in a way that uses what they're good at and protects them from what they're bad at. Now, no process we come up with will do that perfectly. But the more you can do that, I think, the more you can become, let's say, better than humans at it, because humans have many of those fallibilities, or versions of those fallibilities, that these processes will have. And the question will be how far we can take that, and how much human judgment is better than those things, which is stuff we'll be experimenting with as we go along.
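The Zillow point about adversarial markets is essentially the winner's curse: when your value estimate is noisy and you transact mainly when you bid the most, you systematically win the lots you overestimated. Here is a small simulation of that effect, with made-up numbers, purely to illustrate the mechanism described above.

```python
import random

random.seed(0)
NOISE = 0.08          # our appraisal error, as a fraction of true value
N_LOTS = 10_000

overpay_total = 0.0
wins = 0
for _ in range(N_LOTS):
    true_value = random.uniform(200_000, 600_000)
    our_estimate = true_value * (1 + random.gauss(0, NOISE))
    best_rival_bid = true_value * (1 + random.gauss(0, NOISE / 2))
    # We "win" the house whenever our estimate tops the best rival bid.
    if our_estimate > best_rival_bid:
        wins += 1
        overpay_total += our_estimate - true_value

print(f"won {wins} of {N_LOTS} lots")
print(f"average overpayment on wins: ${overpay_total / wins:,.0f}")
# On lots we lose, our underestimates cost nothing; on lots we win, our
# overestimates are baked into the purchase price. A backtest that ignores
# this selection effect will look far better than live results.
```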

Speaker 2

So one thing is, your founder Ray Dalio, years ago, wrote down a set of rules. You've talked about this before. He wrote down a set of rules about how he understood the machine of the markets to work. And one of the issues with AI, and I think you've sort of been getting at this, is AI legibility. You pose a query to a large language model, it creates some output, and you don't really know what it did to get there. That's sort of different than dealing with a human analyst, where you can say, well, what did you think about that? Did you think about this? Can you talk a little bit more about, I don't know if it's a weakness, but how do you get around the fact that it's still difficult to query an AI model and say, how did you arrive at X or Y conclusion?

Speaker 3

Yeah, and I think that's really important, but it's also something that's becoming more and more solvable, because the same is true even with humans. One of the places where I think Bridgewater has a strength, right, is that we never went from a statistical model. We built data based on what we needed for reasoning, and as a result, we have a better, longer, cleaner database than I think anybody has. And we've been thinking through this problem that you're referring to, which is, how do you actually get out what somebody means? You'd be surprised how hard it is to truly get that from a human. Humans don't actually know why their synapses do what they do. When you ask somebody to describe something, you get some partial version of what they're thinking. If you take an intuitive trader and start peeling back all the reasons, that's very hard. We've been doing that for a long time and have an expertise in doing it, and I would say that humans often don't even know what they're doing. But there are ways, like you're saying, to query and force questions, what about this and what about that, that will help pull out human intuition.

And what you find with machine learning algorithms, if you get good at this, and this is going back to twenty sixteen, twenty seventeen, this has been critical to my work, is that there's a way you can query machine learning algorithms. It's different, but the concept is the same as how you query humans to get at why they really believe what they believe. And as I was saying, I think there are actually elements of large language models interpreting what statistical AI is doing that allow that process to accelerate. And I think it's very critical. You really want to know, because that's the way you find the flaws. If you go back to my Go example, if you can query a model and think about what it's done and what it hasn't done, then you can figure out what data is missing, right. And you need to set up adversarial techniques in order to keep querying an algorithm about what it's doing.

And again, I think that's still an area of research, but a process that's moving along quickly, to basically get to the point where the standard is that even though a machine learning technique might be doing something very different than a human, it can still explain itself. It might not perfectly explain itself, just like humans don't perfectly explain themselves, but getting to a very high degree of confidence across a wide range of outcomes that you have a sense of what's going on is possible. And that's part of the design of what we're putting in, which is, how do you query it, how do you give it more information, remove information, et cetera, and see how it changes its mind, to determine roughly what's going on.
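A minimal sketch of the "give it more information, remove information, see how it changes its mind" idea: drop each input in turn and record how much the output moves. The toy model and feature names are assumptions for illustration; real interrogation would use far richer probes, but the shape of the query is the same.

```python
def toy_model(features: dict[str, float]) -> float:
    """Opaque stand-in for any fitted model we want to interrogate."""
    return (0.6 * features.get("rate_change", 0.0)
            - 0.3 * features.get("savings_rate", 0.0)
            + 0.1 * features.get("oil_price", 0.0))

def sensitivity_report(features: dict[str, float]) -> dict[str, float]:
    """Ablation-style query: remove each input in turn and record how much the
    output moves. Large moves flag the inputs the model actually leans on."""
    base = toy_model(features)
    report = {}
    for name in features:
        without = {k: v for k, v in features.items() if k != name}
        report[name] = toy_model(without) - base
    return report

inputs = {"rate_change": 1.5, "savings_rate": -0.5, "oil_price": 0.2}
for name, delta in sensitivity_report(inputs).items():
    print(f"dropping {name!r} moves the prediction by {delta:+.2f}")
```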

Speaker 1

You know, you mentioned the data sets there, and I guess it's a cliche nowadays to say that a model is only as good as the data it's trained on, but it's a cliche because it's true. Do you use your own internal data for the large language models, or where are you actually pulling the data from? And then secondly, what type of data have you found so far to be most useful for these types of projects?

Speaker 3

Well, I think the things that are most interesting to us, first, are learning things that we don't already know. So we're being careful about what kind of Bridgewater knowledge we put in here, because it's not that helpful if we reinvent Bridgewater. Somewhat helpful, but not as helpful as, let's say, discovering everything that we don't already know that other people have thought about, et cetera. So, point one, in the lab right now at least, we're focused on not making this too Bridgewater-centric, on purpose, because that way you learn things that we don't already know. If you just fed it Bridgewater information, which we may well do, that could be a productivity-enhancing thing, but you'll quickly produce something very similar to Bridgewater. What's been amazing so far is we're producing good results by Bridgewater standards, but different, very, very different conclusions and different thoughts than what we have internally. So I think that's choice number one.

Now, on raw data and cleaning data and how you put together data, we are benefiting from Bridgewater's scale on that. That's a big deal, because over the years, again, precisely because we took human intuition and said, what data do we need to replicate that intuition, we have a unique database. Where everybody else is pulling from Datastream, Bloomberg, et cetera, we put together the data we needed to feed our intuitions. Oftentimes that data didn't exist. We had to figure out a way to create it. And also, we're big believers that you need to stress test across a very long period of time, so we have much longer data histories. Those things are certainly valuable in the context of small data, or any quantity of data: the understanding of the data, being able therefore, for a given theory, to find appropriate unoptimized data. Those are big deals that we are using, and that does allow us to move forward more quickly.

And on the large language models, there's still a lot of work to be done, but you certainly can train them, through reinforcement learning, to make sure they're not making mistakes that you know about. So there are ways to do that. Now, we've been trying to avoid that for the reasons I was describing before, avoid doing too much injecting of our own knowledge, and use external sources instead. But that's still part of the tool set that will be available. Yes, you could train it more directly on things you already believe to be true if you want to do that, and that certainly will lead to answers that replicate your thinking more quickly.

Speaker 1

So just on this point, one thing I wanted to get your opinion on is, how good is AI at predicting big turning points or structural breaks in market regimes? Because I don't know about you, Joe, but one of the first things I did with ChatGPT was ask it to write a financial news article about inflation, just to see whether our jobs were in danger. And you could tell that it was trained on not quite current data. It was talking about how inflation has been stubbornly low for many years and the Fed is trying to get it to the two percent target. But how good is AI at predicting those regime changes? Because if you're running a macro fund, I imagine that's one of the important things that you need to do, trying to figure out when something is fundamentally changing in the market.

Speaker 3

Yeah, and I'd say terrible if you use it in the sense that you're using it there. It's a little bit like saying, well, how good are people at that? Well, people are pretty darn bad at that, right. That doesn't mean there isn't a way, or some people, that could do such a thing. So it's hard to just think about AI as a thing, or to think, okay, I'm just going to use ChatGPT for that. You're exactly right. ChatGPT as it comes out of the box is only trained over a certain history, and it doesn't care, unless you know how to make it care. It's just answering your question about inflation based on everything it's ever read about inflation. Time isn't even that important to it unless you make time very important to it in predicting. So you have to know how to use the tools to generate the type of outcome that you're describing.

So do I think AI out of the box will do that? No, absolutely not. It'll be awful at that. Are there ways to take what's embedded in AI, what's embedded in language models, and combine that with statistical tools to come up with a way to do that? Yeah, there's a path there. But it's not going to be as simple as opening up ChatGPT and asking it that question. There's more involved. But it is helpful to have an analyst that's read everything that was ever produced, even if they stopped reading in twenty twenty two, in twenty twenty one I should say. There's a way to use that, but you have to use it correctly and not misuse it in order to try to generate that answer.

Speaker 2

All right. So I can't just ask a large language model when inflation will get back to the Fed's target. But I'm not speaking to a large language model, I'm speaking to the co-CIO of Bridgewater. And I am curious, we do want to talk a little macro. I'm not going to directly ask you when inflation will be back at the Fed's target. But what strikes me about the last year, since the last time we talked, what's really blowing my mind, is that rate hikes have been a lot faster than people expected, inflation is hotter than people expected, the unemployment rate is lower than people expected. What is it that people misunderstood a year ago about the economic machine, such that the Fed has hiked rates much faster than people expected and yet it's been surprisingly ineffective at cooling things down? To this day there seems to be a surprising amount of economic momentum with Fed funds at like five and a half percent.

Speaker 3

Yeah, it's a great question. I have a bunch of thoughts on it. You know, certainly I can't speak for all people, but I can speak for myself. I've been wrong about a bunch of those things. So just to talk about what I, and let's say we at Bridgewater, didn't see. Like you're saying, everything that we had understood in our statistical models, knowing that we could easily be wrong, said the degree of tightening was fast and high relative to history, and that any tightening like this in the past had led to significant downturns. Although the lag is somewhat variable, and it's still possible that's right. But I think a lot of things happened differently than I expected.

Usually when, as was happening last year, stocks are falling and short rates are rising, that formula in history always led to the personal savings rate rising. People see higher interest rates available to them, asset prices falling, housing slowing down, et cetera. Usually people save more money, which means there's less revenue for companies, which means there are layoffs, which means savings rates rise more as the employment market weakens, and a recession is caused through that mechanism. What's happened in this period, I think, and I could be wrong, is that the normal impact of higher interest rates and the wealth effect was offset by the fact that wealth had been changed so radically in the twenty twenty to twenty twenty-one period by fiscal policy. We had fiscal policy as extreme as wartime, and the length to which that disrupted those other relationships was interesting. The degree of it was interesting. Looking back now, I think there are reasons we should have known that, and some people were pointing to it, but it created much less of a reaction in household savings rates than you normally get. You came out of the recession with better balance sheets than ever. People were willing to dissave. So even as rates climbed, debt growth actually collapsed, as it normally would, but what simultaneously increased, outside of debt, was the willingness to spend down the cash that households had built up. And that cash doesn't just disappear when one person spends it. It goes onto others' balance sheets, whether it's corporate balance sheets or other household balance sheets. So what's been happening, it appears, is that money has been spinning around in a way that made the rate hikes have much less impact than I believe they would have had pre-COVID.

On top of that, within the US economy in particular, corporates had extended their duration, so the impact on corporates is taking longer. I think it's happening, but it is taking longer. And there are a few other things. And then, what did happen is the rate rises created a decline in nominal demand, but that's mostly shown up in inflation. Nominal demand has fallen pretty much as much as I expected. It's been more inflation falling than real growth falling, and again, I think there are reasons that's the case. Before, there was this massive demand shock from what the central banks and the Treasury had done to get everybody's balance sheets up, and supply was struggling to keep up with that massive demand shock. Now demand is falling, but supply is still catching up to that old level, so on net, real growth has come out stronger. Now, I can see all that in the rear view mirror. I certainly didn't predict that that would be the way it would play out. But I think that's why you've had this stubborn strength in the economy, and that's created a certain amount of stability. Equities have rallied significantly since then, so some of the negative wealth effects have eased. At the same time, though, a lot of that excess cash that was on balance sheets has been distributed. So there's a mix of pressures here.

Looking forward, we do think inflation is still coming down a bit, although on net we've entered what we think is a more inflationary environment, such that two percent inflation is probably more likely to be a bottom than a cap. And we do think fiscal policy as the way to deal with recessions is probably, politically, the more likely outcome in the next recession, rather than moving back to more QE, and fiscal policy is a lot more inflationary, and more effective in the sense of stimulating growth quickly, as we've seen. So I think you're going to see a world where we are still adjusting to a higher-inflation world that's deglobalizing. Although everything we're talking about on the productivity front, maybe machine learning, changes that, we'll see. But largely, excluding a major productivity miracle, I think deglobalization and the move towards fiscal policy have changed the long-term inflation path in a way that markets haven't fully adjusted to. Because markets right now believe the Fed is totally credible, that inflation is going to return to target basically with very few problems. When we measure the pressures, we don't think so. We think it's going to be much more challenging to get inflation where markets expect it, the impact on earnings is going to be a lot more negative than markets are currently expecting, and it's going to take longer and be harder. So there are big differences between what we're seeing and expecting and what the markets are currently pricing.

Speaker 1

So I think last year you were talking about the possibility of a recession in twenty twenty three. Is that off the table now? You're still positioned, it sounds like, for a level of higher inflation, but it sounds like maybe you're a bit more optimistic on the growth front.

Speaker 3

Yeah, we've been wrong on growth. So I'd say, look, we think it's going to be a struggle. We're in a state of disequilibrium in the sense that, relative to a given level of growth, the level of inflation relative to the Fed's target means they're going to have difficulty achieving growth and inflation at the levels they want, and they're going to have to give on something in the short run. I think that's leading to higher rates. The expectation that massive easing is coming is unlikely to be met. The Fed's going to have to continue to be tighter for longer than the markets expect. So that's bad for, let's say, bonds and long-dated short rates. It's also probably bad for equities. And at the same time, we think growth will be struggling. Nominal growth is going to continue to slow, and as nominal growth slows, while you're in the more sticky parts of inflation, things like wage growth and some of the service areas, you get more of a challenge for that slowdown in nominal growth to just flow through to inflation. So in my view, you end up with growth disappointing a bit and inflation disappointing on the high side a bit, ending up probably bad for bonds and probably a little bit bad for equities, and generally weak growth. And if that weak growth starts to translate into a rising savings rate, you could easily end up in a recession, and one that's going to be difficult to deal with.

But yeah, I'd say I've tamed, and we've tamed at Bridgewater to some degree, our view on growth. While still negative, it's not as extreme as it appeared, and it's a more gradual process that's unfolding. And then on the inflation front, while we've had a quick decline in inflation as nominal GDP fell, we do think we're in the range where you're in the much more stubborn part of inflation. It'll be harder to continue to get those inflation falls going forward.

Speaker 2

So just to be clear, though, you do think there is a gap between what the market sees, or what the Fed thinks, in terms of how much more work the Fed is going to have to do, and what you think the Fed is actually going to have to do if it is serious about getting inflation back to something resembling its target.

Speaker 3

Yeah, I think so. I mean, I'd say the Fed seems a little bit more realistic than the markets do on what it's going to take. We think that's right, that when you look at what the markets are saying, it's super optimistic. It could come true, but essentially, to get an equity rally from here, you have to have lower rates fairly quickly into a world where earnings are pretty good. That's kind of the discounted line. To get above that, you need even more than that. And I think that line is super optimistic relative to what we measure. And again, I'm using those words, but I'm describing a process that's based on studying hundreds of years of economic history and how these linkages work, and building all of that into a systematic process. But just spitting out the output of that: it doesn't appear that the Fed will be able to achieve that, and we're in this disequilibrium where you still have more inflation relative to growth and you don't have an easy way to close that gap. So we'll see. We've been wrong about that, in terms of at least what the market outcomes have been for the last six months or so, after having been incredibly right for an extended period of time. And that's part of it. We get a lot of things wrong, and that's normal. But I think when you break down why we got it wrong, the ways in which we've learned from that, and the ways in which our processes have taken in new information, it still leads to this view that the markets are overly optimistic about how easy that's going to be.

Speaker 1

All right, Well, Greg, we appreciate you coming on and outlining your thought process both around the markets and AI and how you're actually deploying this new technology. So really appreciate it. Thanks for coming back on the show.

Speaker 3

My pleasure, good to talk to you.

Speaker 2

Good luck in Vegas. Yeah, bring home another bracelet.

Speaker 3

We'll try.

Speaker 2

Thanks Greg. That was great.

Speaker 1

So Joe, I feel like I have a slightly better conception of exactly how this kind of technology can be used for investing. The idea that maybe you have the AI models come up with theses or ideas that can then be rigorously fact-checked, because the AIs are hallucinating and things like that. That makes some sense.

Speaker 2

Yes, absolutely. And I think, you know, you asked the question, can AI do our jobs? And I don't think the answer is yes. Can AI replace the stock picker? It doesn't sound like the answer is yes. But can the AI augment the way someone's thinking, come up with theories that can then be rapidly tested, have that sort of back and forth, and do some of the work that junior analysts currently do in terms of testing ideas and stuff like that? You could see how it could be a force multiplier at a large fund.

Speaker 1

Yeah, but to that sort of turning point question, that also seems to be maybe the big weakness here. If you have an algorithm or a model that's been trained on years and years of prior data, so rates going lower and lower and inflation staying below two percent, it seems very difficult to project what might change.

Speaker 2

Which, to Greg's point, humans aren't very good at either. But you would hope, right, that's what we would want, to just be able to ask ChatGPT or whatever. You know, I'm using that as like a stand-in for

Speaker 1

This. Yeah, or maybe you ask the AI, like, what would you need to see in order to start taking the prospect of regime change seriously?

Speaker 2

Yeah, I mean, you talked about this idea of the sort of adversarial way of thinking about it, which I think is really important. And you pointed out the sort of disaster of the home iBuyers, how they got adversely selected, because it's like, well, if Zillow is in the market, we know they're going to overpay, and so everyone suddenly dumps all the homes on Zillow, and it was not anticipating its own role in the market, in response to your question. Which I think is a really interesting dimension to all

Speaker 1

Of this. Yeah, that sort of reflexivity between the models and the markets, I think we're probably going to be hearing a lot more about in the future. On that note, shall we leave it there?

Speaker 2

Let's leave it there, all right?

Speaker 1

This has been another episode of the Odd Lots podcast. I'm Tracy Alloway. You can follow me on Twitter at Tracy Alloway.

Speaker 2

And I'm Joe Weisenthal. You can follow me on Twitter at The Stalwart. Follow our producers on Twitter: Carmen Rodriguez at Carmen Arman and Dashiell Bennett at Dashbot. Follow all of the Bloomberg podcasts under the handle at Podcasts. And for more Odd Lots content, go to Bloomberg dot com slash Odd Lots, where we have transcripts, a blog, and a newsletter. And for even more, if you want to chat with fellow listeners about all these topics, there's even an AI channel in there. Check out our Discord, where twenty four seven people talk about all these things: Discord dot gg slash odd lots.

Speaker 1

And if you enjoy Odd Lots, if you like these conversations, please leave us a review, a positive review please, on your favorite podcast platform. We'd really appreciate it. Thanks for listening.

