
SI351: The Two Tribes of Trend ft. Richard Brennan

Jun 07, 2025 · 1 hr 9 min

Episode description

As trend followers push through one of the toughest environments in years, Rich and Niels unpack what’s actually driving the pain. They draw a sharp line between two camps in the industry - those who focus on standalone market behavior and those who manage trend as a portfolio-level phenomenon, and explain why that distinction now matters more than ever. From the ripple effects of tariff policy to the quiet fragility in institutional positioning, this episode traces how structural shifts are colliding with legacy assumptions. Also on the table: AI’s growing role in model development, the hidden costs of bad data, and what it really takes to build a resilient, scalable process in a world where noise is cheap and conviction is rare.

-----

50 YEARS OF TREND FOLLOWING BOOK AND BEHIND-THE-SCENES VIDEO FOR ACCREDITED INVESTORS - CLICK HERE

-----


Follow Niels on Twitter, LinkedIn, YouTube or via the TTU website.

It’s TRUE – most CIOs read 50+ books each year – get your FREE copy of the Ultimate Guide to the Best Investment Books ever written here.

And you can get a free copy of my latest book “Ten Reasons to Add Trend Following to Your Portfolio” here.

Learn more about the Trend Barometer here.

Send your questions to [email protected]

And please share this episode with a like-minded friend and leave an honest Rating & Review on iTunes or Spotify so more people can discover the podcast.

Follow Rich on Twitter.

Episode TimeStamps:

00:13 - Introduction to Systematic Investing

03:55 - The Shift Towards AI in Trading

11:25 - The Rise of Idiosyncratic Trend Following Post-2020

21:42 - The Impact of Recent Market Dynamics

25:39 - The Rise of Retail Trading in Futures

34:09 - The Panama Method of Historical Adjustments

42:32 - Understanding Backtesting and Data Integrity in Trading Models

47:36 - Navigating Capital Constraints in Trend Following

55:14 - Understanding AI in Trading Strategies

01:00:14 - The Challenges of AI in Trading and Strategy Development

01:04:44 - The Impact of AI on Humanity

Copyright © 2024 – CMC AG – All Rights...

Transcript

You're about to join Niels Kaastrup-Larsen on a raw and honest journey into the world of systematic investing and learn about the most dependable and consistent, yet often overlooked investment strategy. Welcome to the Systematic Investor Series. Welcome and welcome back to this week's edition of the Systematic Investor series with Richard Brennan and me, Niels Kaastrup-Larsen, where each week we take the pulse of the global market through the lens of a rules-based investor.

Let me also say a warm welcome if today is the first time you're joining us. And if someone who cares about you and your portfolio recommended that you tune into the podcast, I would like to say a big thank you for sharing this episode with friends and colleagues. It really does mean a lot to us. Rich, it is wonderful as ever to be back with you this week. How are things down under? Very good, Niels. I'm looking at our live video feed, which unfortunately our audience don't see.

And seeing you in a T-shirt and me in a jumper is reflecting that it's getting a bit chilly down here and it's obviously summer over there. Exactly, exactly. Although so far here in Switzerland it's been a pretty wet summer, so running around in T-shirts is not for everyday use. I will say we've got some really interesting, a little bit different than usual topics lined up, you know, courtesy of you of course, which I think people will really enjoy.

And you know, of course, we also have our little segment, which might be a little bit longer today, in terms of what's been on your radar. Now, with my topic I'm actually going to be very selfish here, because I know your topic will take us into a little bit longer discussion. So I just want to give you two things that came across my radar, actually both of them this morning when I was just looking at the news, and they're both stories from the Financial Times.

The first story I came across is kind of a reminder of how obsessed the world has become in terms of measuring our own activities. You know, we measure how many steps we take every day, we measure how many hours of sleep we get and what kind of sleep we get.

We do all of that, and for a few years, I guess, you saw people talk about these apps that would measure how efficiently you were working, how much time you were spending in Word, how much time you were spending in Excel or other apps, etc., etc. And then today I saw this article.

This is a 120 year old Japanese stationery company, and they have launched a self declared micro motivation device. It's a gizmo that you attach to your pen or your pencil, and now it measures how much handwriting you do every day. And then apparently it is associated, I guess, with some kind of database, because it basically shows you how you are progressing against other users of this gadget.

And I thought that just really shows you how obsessed we've become, and how there are no limits to what we want to self quantify. I would fail in that situation, Niels, because I hardly lift up a pen or pencil these days. It's all computer driven. That was the other surprise to me. It wouldn't get much from me either, that's for sure. Anyways, I thought it was quite fun.

The other thing, it's not specifically fun, but it is interesting and it is relevant to our conversation later today, and that is that AQR, you know, our friends over at AQR, apparently, and this is from the article, kind of resisted the use of AI.

And Cliff Asness, I think, has been saying, talking about this in different interviews over the years, but it came out today that now they are embracing AI and machine learning techniques for trading decisions, ending years of resistance to that. It talks about how the fund now uses machine learning algorithms to identify market patterns on which to place bets, even if, in some cases, it's not entirely clear why those patterns have developed. I thought, interesting.

Not exactly how we as trend followers think; we want to understand and know exactly why we're long or exactly why we're short. And I'll just say this, and then we can kind of move into your topic, because I do think this is important. The article mentioned that the shift has so far, it's obviously a short time period, improved returns. But he also acknowledges, Cliff, that there are some drawbacks, and that is: how do you explain poor performance to investors?

And it even says here, panicking investors when you don't know why it's going wrong. Which leads us, I think, very nicely into your what's been on your radar. But I do think this is important, because obviously our industry is having a difficult time at the moment. Yes, well, AQR had better get a good reason for why the industry is going into their drawdowns, because we're deep in our drawdowns at the moment.

But I suppose what's been on my radar, Niels, is that it's inevitable that you focus on these drawdowns when you're in them. That's just the nature of the game. And you know, hopefully with our trend following process, it's not too long before we're hitting high watermarks. But you know, April was particularly cruel for us, and May hasn't offered much respite. A bit of a decline in the drawdown exposure, but still there. And we're a ways off our high watermark.

But I think the SocGen Trend Index actually just touched, or maybe slightly exceeded, the worst drawdown in its history based on monthly data. Yes, and we're not alone. I think most trend followers are in their drawdown. And as you've mentioned in prior podcasts I've listened to, some of the long established trend followers are in their deepest drawdown right as we speak, around this time as well.

So it's inevitable that these periods occur with the nature of our models. But you know, what I'd like to bring up is the difference between those that focus on what I call idiosyncratic trend and those that I'm regarding as correlation conscious trend followers.

In other words, I'm seeing today in the trend following industry these two extremes that exist. Those that focus on idiosyncratic trend are what I'm referring to as those that treat each individual market as independent, and they're looking at time series momentum in an individual market. That's predominantly what their models are seeking to address. And then I'm calling the correlation conscious segment those that treat trend following more from a cross sectional viewpoint.

They're looking at groupings, asset classes, complexes, things that are trending. And I see this bifurcation in trend following models occur basically in the early 2000s. So I see what I'd term the idiosyncratic trend follower being the type of trend follower that existed in the 70s, the 80s, the 90s, et cetera. The more traditional trend following approach: cut losses short, let profits run, focusing on individual markets.

But then I see in the early 2000s the desire by a lot of the trend followers that sought large AUM; they needed to develop a technique that smoothed out the volatility profile of a process that inherently has volatility in it. The traditional trend following style, we all know, carries significant volatility, but that volatility is a two way volatility.

There's downside volatility, but there was also upside volatility when they managed to capture these significant outliers, ride them to the end until the trend ended. And so there was both upside and downside volatility. But in the early 2000s there was this attempt by the trend followers, those that wanted to step into the more institutional space and get more investors accepting the nature of our process, to reduce their volatility. There was a significant attempt to reduce volatility.

And in that process they looked at how to reduce volatility. And this is where we got this bifurcation of the industry, where we started looking at this cross sectional viewpoint. Ways to reduce portfolio correlations, ways to reduce volatility associated with correlated markets, ways to target volatility, all these different types. So there was this bifurcation, and those that achieved the smoother volatility profile attracted more AUM, because it was much more acceptable to investors.

They still had a component of the trend principle, but at far lower volatility, you know, maybe 10 to 12% volatility, as opposed to the old days where 25% was acceptable volatility on an annual basis; it was now down to 10 to 12%. So there was this bifurcation of the trend following community into two extremes. Not everyone, of course; a lot of trend followers were placed somewhere within those extremes. But there were two extreme spectral ends of the trend following community.

And so I particularly saw, post GFC, that what I call the correlation conscious trend followers, those concerned with cross sectional viewpoints of trend, were eminently successful post 2008. And I attributed that to a particular regime that was in place: globalization, central bank coordination. All of these effects meant that the correlations between markets became much more linked, causally correlated with each other, because of the globally coordinated central bank policies, et cetera.

So this occurred up until 2020. And I saw that the old traditional idiosyncratic trend follower lost a degree of favoritism. And we got this boom in these very large managers who were aggressively gathering much more AUM. They had smoother volatility profiles and were very successful over that period. But then I saw post 2020 an interesting change. I saw the rise again of the idiosyncratic trend follower from 2020 up to 2024.

And I saw reduced performance from what I call the correlation conscious trend followers over that period. The large AUM heavyweights that were using very complex quant models, et cetera, to reduce that volatility suffered during that period of deglobalization, when the regime that existed up until 2020 suddenly switched gears and we went from globalization to deglobalization from that period on. And we saw this loss of central bank coordination. We saw fragmentation occur in politics.

We saw different geographies, different assets, all sort of expressing a less correlated structure, and we got the rise of the idiosyncratic trends: the cocoa, the soybeans, the gold. All of these idiosyncratic trends emerged in this more fragmented environment. But then I see this impact in April, where all trend followers have been significantly affected, and I'm associating that with the tariff policies.

I'm associating that with the fact that these tariff policies were so widespread in their impact. It was right across the spectrum of the markets that we trade. It wasn't restricted to equities; it flowed through into commodities, canola, all of these things, the currencies, the yen. All of these things were affected, China too. It was such a global impact that I saw these massive whipsaw events which affected all of us trend followers in April. And it's a very difficult time.

But I don't think that this regime we've experienced since April can be expected to last for a very long time, because there must be some direction soon that will emerge from the current uncertainty we're seeing in the market. So I'm hoping, and I'm crossing my fingers, Niels, that all trend followers will start blossoming shortly once we see clearer direction. But I don't think the environment we're in is going to be sustained for too long.

No, I mean, I completely agree with you, and I would add to that: it's not, in a sense, just the tariff policies that at some point will resolve themselves, and it'll either be very bad for the economy or maybe it'll be good for the economy, who knows. I also see a lot of other policies, the massive new increases in debt all across the world, right? Whether it's the US or Europe, whatever, because of this changing world we're living in.

And at some point these things are going to break. We've seen a few cracks in Japan and in the US with some bond auctions that didn't go so well. And what happens in this world if 10 year interest rates settle above 5%, and suddenly 5% is kind of your low point rather than zero, which it was, you know, only a few years ago? It's hard to imagine that it was only 50 basis points on the US 10 year, five years ago or so.

So I think it's about the whole system, the whole economy, and actually, unfortunately, also how people, and I say people, but what I really mean is how institutions, pension funds, insurance companies, all these big investors, have built their portfolios. Because I think, unfortunately, we're going to find that they have built them for the old regime and not for the new regime, as you're mentioning.

And of course I completely agree with you that in that regime, once the dust settles and the breaks or the cracks occur, those are the times where trends can really emerge and have the follow through that we haven't seen in the last 12 months or so. So in many ways you could talk about the length, the duration, of the drawdown. I agree with you that for some managers, although a select group, the drawdown is not that long, a few months only, really just 2025, including yourself, by the way.

Congratulations on that. So it has not been that long for some. And I would say for most people, though, it's a drawdown that's now running at 12, 14 months. But that in itself is not unusual, frankly. And we all know, I mean, it's probably only going to take two or three months, if we have really strong follow through in a number of sectors, to see that come back.

And the other thing I've noticed about this current drawdown, because every drawdown feels a little bit different, even though we've gone through it now so many times, at least in my 35 years doing this: it feels a little bit different because what's been unique about the last 12, 14 months is that all three big financial sectors, currencies, bonds and equities, have all had significant reversals, but also multiple reversals. That is a little bit unusual. And I also wonder why.

Maybe this is the reason why some of the managers that you mentioned, some of the big, well established ones with great research teams and all of that stuff, are seeing their worst drawdowns now: their size has meant that they had to tilt the risk towards financial markets rather than commodities. And so they haven't had the full diversification benefit, maybe, compared to managers who are not at that size, where commodities can still be a meaningful allocation of risk.

So it's a little bit of a conundrum. You need the financials because you want big AUM, but from a diversification point of view, you don't really want only financials. You want the commodities. Which is something we've also talked about, this idea of trading hundreds of markets versus trading maybe 70 or 80 that are less correlated and maybe a better bet for your portfolio than just trading hundreds of markets, which eventually will be much more highly correlated.

Yeah, I think there are ways to accommodate greater diversification without necessarily extending too far into market diversification. Because my concern is that the more markets you're adding, the more you're inevitably losing a degree of the uncorrelated properties in those markets, because so many markets are highly correlated. So I'd prefer to mix the system ensemble idea with market diversification, so you can significantly extend your diversification.

But structurally insert what I call uncorrelated properties into those return streams. So you might have 500, 600 return streams comprising, say, 60, 70 markets with 10 uncorrelated trend following systems to create 600 return streams. Rather than relying on the fickle nature of correlations in markets, which change on a dime (politics, all of these things can change correlation properties quickly), you want to insert what I call structural uncorrelated properties into it.

That's a more guaranteed way of ensuring that your return streams remain uncorrelated, allowing you to have this much greater diversified exposure. That's my preference, but I know others would argue with that. Yeah, no, and you've articulated that many, many times over the years on the podcast. And I completely agree.

I mean, I think that's definitely an underappreciated form of diversification, because most managers use one or two different types of trend following and then they have multiple time frames, of course. But you've always advocated having multiple different types of trend following approaches, and I really like that thought process.

Of course, as of last night my Trend Barometer stood at 36, so still a little bit weak, and therefore we are still seeing a somewhat challenging environment for trend following, as we've just talked about. Although, as of Wednesday evening, the CTAs are having a good start to the month of June. The BTOP50 is up 48 basis points, down only 4% for the year now. The SocGen CTA Index is also up, about 43 basis points; it's down about 8% for the year.

SocGen Trend a little bit less, up 32 basis points, still down 11% for the year. And the Short Term Traders Index is down 30 basis points and down 3% so far this year. MSCI World, well, equities definitely don't need a good reason to rally, because they continue to rally: up 1.1%, now up 6.3% year to date. The US Aggregate Bond Index from the S&P is up 25 basis points as of last night, up 2.68% for the year. And the S&P 500 Total Return Index is up another 1% and up about 2% so far this year.

What's been interesting to me as well, and of course this is kind of looking a little bit under the hood, but I'm sure many managers will have experienced it, relates to this idea that we've seen a big number of reversals in the larger sectors of our portfolios. And I imagine that the risk budgeting has actually been quite dynamic the last couple of months, meaning that we've actually seen big changes in the overall risk of CTAs. Usually our risk budgeting is kind of gradual.

Over time new trends emerge and some trends stop and all that. I think the last couple of months we've seen a lot more kind of dynamic risk taking because of the way the markets have moved. Just something to throw in there. And finally, before we launch into your topics, Rich, it is interesting, I want to give a shout-out to our friend of the podcast, Cameron, who's the founder and editor of the HedgeNordic magazine, and to his team.

They just published a new issue of the Systematic Strategies publication that they do. And I do find it interesting, and there are some good people, friends of the podcast, that have written articles there. But in his own editorial of the magazine, he writes about, you know, what if the rules changed? And he goes on to talk about Liberation Day and the tariffs, and how that's just meant that markets have moved so violently and we've really been whipsawed.

And of course he's raising the question that I think a lot of investors, either in their mind, but also directly to managers like us, will ask: you know, have things changed? Is trend following dead, and will it ever recover from this? Is this the new normal, whatever we call it? And I guess, I think I can speak for you as well when I say, this is not the first time we've heard these arguments that the markets have changed or whatever.

But of course it's not something that I personally believe in. I think that we will find, as I said, a breakout; whether it's because of reckless policies or not, at some point things will break and you're going to get the follow through in the trends, for good and bad, but certainly trends that, as a long/short manager, you can capture. And that's the whole concept.

And you know, that's why I believe, certainly, that maybe trend following is more relevant today than it's ever been, because there's no logic in many of the things that are happening in the world, let alone what's happening in the areas of conflict in the world. For sure. All right, that was a little bit of a rant. Rich, you brought some really interesting topics.

We're actually going to pull back the curtain on something that every single futures trader, not just CTAs, but everyone that trades futures, needs to get right, but many people completely overlook. And that's the data.

Not just where you get your data from, but also how you structure it, how you extend it and, ultimately of course, how you use it to build a serious futures portfolio that can hold its own at an institutional level, even if you're starting with relatively low capital yourself. So we're going to be looking at things such as: why does my backtest look great, but as soon as I start trading live, it looks completely different, right?

We're going to be talking about how to build real diversification even if you have limited capital. And we're going to be talking about how you handle it if you want to trade a product, and quite a few have come out in recent years, where it doesn't have much of a history. So, you know, if these are some of the things you're thinking about, the next segment of our conversation will connect the dots. So let's tackle this data dilemma.

And also you're going to be showing us sort of how poorly structured data can contaminate everything, essentially. Not just your signals or your risk management, but even your self-confidence, in that sense. So without further ado, and as I said, I teased another topic we may or may not get to today with all of this ahead of us. Why don't you just take over, Rich; essentially the floor is yours. Thanks, Niels.

So look, before I get into the story of data and its crucial nature to a trader, I'd just like to talk about something that's been happening in the industry that you don't hear about much. Now, we do know about the democratization of the investment field, you know, the rise of ETFs, the rise of the replicators, the return stacked models. You know, a lot of people are becoming trend followers for the investor.

But what I'd also like to talk about is this democratization we've also seen in the trading world, the retail trading world particularly. Because something that doesn't get much mention is this rise and rise of the ability of a retail trader to not simply trade trends with training wheels, but to now have access to institutional grade software and the ability to trade very small futures contracts, which now changes the entire game space for the retail trader who wants to become a

trend follower. So we often talk about how those that don't have the time should approach the investment world, where they look at the best of class in the industry and invest in them, and the allocation models we talk about. But for those that want to do it themselves, it's progressively become more and more democratized, so that the term dumb money in the retail world is no longer the case.

I see now that this institutional grade software, and the ability to trade a diversified suite of futures contracts, but small futures contracts, gives you the ability to produce something that is very similar to what the institutional models are developing. So there are these rapid changes going on in the marketplace, but particularly things such as access to what we call mini contracts, micro contracts, and in some cases what we call nano contracts.

So previously the game of futures trading was restricted to those that traded standard contracts. And that therefore meant that you needed very high levels of capital to participate in this game. If you're going to be doing it yourself, you might need 1 or 2 million dollars to achieve the necessary level of diversification to deploy our philosophy of trend following with it.

But now we have the emergence of these smaller scale products, minis, micros, nanos, and that's just building and building and building. For instance, a standard contract trades in one lot. A mini contract typically trades at 1/10 of a standard lot. A micro contract trades at 1/100th of a standard lot. And a nano contract trades at 1/1000th of a lot. So with nano contracts and micro contracts, it now allows retail traders to achieve the necessary diversified exposure.
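To put rough numbers on that, here is a back-of-the-envelope sketch in Python; the index level and multipliers are hypothetical, chosen only to mirror the 1x, 1/10, 1/100, 1/1000 ratios Rich describes:

```python
# Why smaller contract sizes open the door for small accounts.
# All numbers below are hypothetical illustrations, not real contract specs.

price = 5_000.0  # hypothetical index level

contracts = {
    "standard": 250.0,  # $ per index point (hypothetical)
    "mini":     25.0,   # 1/10 of standard
    "micro":    2.5,    # 1/100 of standard
    "nano":     0.25,   # 1/1000 of standard
}

for name, multiplier in contracts.items():
    notional = price * multiplier  # dollar exposure controlled by one contract
    print(f"{name:>8}: 1 contract controls ${notional:,.0f} of exposure")

# With a ~1% risk budget on a $200,000 account ($2,000 per position),
# only the micro/nano sizes allow many independent, equal-risk bets.
```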

And then there's the institutional grade software, and I'm talking not about the standard software that we hear about traditionally, the Trading Blox, the AmiBroker, the NinjaTrader, the TradeStation. I'm talking about things like QuantConnect and StrategyQuantX. These are institutional grade platforms that have incredible degrees of data analysis in them, robustness in them, the ability to robustly test your models. This goes well beyond what the traditional backtest software used to do.

And now it's integrating not only the backtest process into workflows, into how to achieve backtesting that parallels live execution, feeding automatically into brokers such as Interactive Brokers, et cetera. We are now at a position where the retail trader, the person in their rugby shorts and their T-shirt, sitting at home with their high grade computer, can become a professional level trend follower.

So quick question on that, Rich, because I think the nano one has kind of escaped my attention, that we are now down to nano. Quick question, I don't know if you know this: do brokers and exchanges charge the same fee for a nano? Because if they do, people do have to be conscious of that; there is a difference between paying, you know, $5 commission each way on a normal standard futures contract and paying $5 each way on a nano.

If you have the same number of round turns, it does make an impact on your performance, I imagine, on a smaller contract. Yeah, there's a democratization as well in the fee structure of these things. So what we're finding now is that a lot of these are just, you know, percentage fee allocations that are similar to what you'll get with your standard, but it's a percentage allocation in relation to the lot size. Good to hear.

So the key takeaway here, Niels, is that this level of democratisation is not just in the investment sphere; it's now in the trading sphere as well. But the thing is, you know, I believe that what's not gonna go away is trends, but what is gonna change is the participants in the mix. So we are going to see a significant change in the nature of the traditional trend follower and in who the new emerging trend followers are.

So while the trends might not change, the nature of participation is going to change. And the reason I see that trend following is here to stay is because, of course, I believe in complex adaptive systems, and trends are an inherent feature of complex adaptive systems. So anyone who asks me, oh, is trend dead? I just don't have that understanding in my vernacular.

So moving on from this democratisation in the retail trading space, I'd now like to discuss the things the retail trader needs to do to competently trade like a trend follower, but with these different products and with these different platforms. So the first thing I want to talk about is this data dilemma: getting futures data right. Of course bad data leads to bad systems, and this is how you need to address your data.

So futures contracts, as we all know, expire and roll. So raw data isn't continuous. If you backtest on the raw contracts, the gaps between the contracts are illusions and result in false signals if you don't adjust for them. You need to stitch together these contracts to remove these false illusions, the breaks that occur between contract series.

So if you are backtesting on these unadjusted futures contracts, one contract and then another contract and then another contract, you're effectively backtesting on noise with these artificial jumps that occur between the contract series. So you must know how to back adjust your data. And what I'd be recommending here is that it's important not to rely on continuous data adjusted by someone else.

You need to understand how to do it yourself, because there are a lot of implications in the back adjusting of the data. You need to ensure that your backtest models mimic how you're going to be live trading those particular mini contracts, micro contracts, nano lots, etc. You need to know how you've stitched together that data in your backtesting process, to make sure there's a seamless flow between backtesting and live execution.

So I prefer what we call the Panama method of historical adjustment. Just to explain: the Panama method preserves the structure of the current contract and adjusts the prior contracts to the current contract. What that means is that if you have been backtesting on Panama adjusted data, when you're up to the current contract, you are in line with the current price data of the live contract that's in place.

So the price of the front contract remains as is, but you adjust the prior contracts to that every time you've got a new contract emerging. The way we do that is, first, you need to define a consistent roll trigger. In other words, do you use open interest in the next contract, or do you use volume, or do you use the calendar to decide when to roll?

You need to know when you're going to roll, then observe the price at that roll point in both the contract you're rolling out of and the contract you're rolling into, and observe that price difference. Once you observe that price difference, you apply it historically to all of the prior contracts you have in place. And that's what we call the Panama method.

So what that means is that the current contract stays in place, but for all of your historical contracts, every time a new contract comes into place, there's this adjustment, then another adjustment, then another adjustment.

So the thing is, your backtest data continually needs to be readjusted, which means that every time you undertake a backtest, you need to historically adjust that data again; if it's six months in the future that you're running that test, the adjustments will have moved. So the data doesn't remain constant; it's continuously being adjusted in line with this Panama method.

Now, with the Panama method, if you find that your historical futures contracts have a lower price structure than your front month contract, that's what we call backwardation. And what that means is that when you historically adjust your data backward, it can actually push that early data, you know, 20 years ago, into negative prices. Now, that's perfectly okay for backtesting, but some backtesting engines don't like that.

They don't like negative prices and they won't work with them. So if, for instance, that's the case, you'll need to choose a different back adjusting method. But fortunately, most modern backtesting engines do accept negative price data from historical adjustment. In contango, on the other hand, where the contracts you roll into are higher priced than the ones you roll out of, you find that the back adjustment process raises the historical prices.
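To make that concrete, here is a minimal Python (pandas) sketch of the Panama stitch Rich describes, assuming you already have each contract's daily closes and the roll dates chosen by your roll trigger; the function name and data layout are illustrative, not from the episode, and a DatetimeIndex is assumed:

```python
import pandas as pd

def panama_stitch(contracts, roll_dates):
    """Panama-style back-adjustment sketch.

    contracts  : list of per-contract daily close Series, ordered oldest -> front month
    roll_dates : roll_dates[i] is the session on which we roll from contracts[i]
                 to contracts[i + 1]; both contracts must have a price on that date
    """
    n = len(contracts)
    offsets = [0.0] * n                    # the front month keeps its true prices
    for i in range(n - 2, -1, -1):         # walk backwards, accumulating roll gaps
        roll = roll_dates[i]
        gap = contracts[i + 1].loc[roll] - contracts[i].loc[roll]
        offsets[i] = offsets[i + 1] + gap  # older contracts absorb every later gap

    pieces = []
    for i, series in enumerate(contracts):
        # each contract owns the window between its surrounding roll dates
        lo = roll_dates[i - 1] if i > 0 else series.index[0] - pd.Timedelta(days=1)
        hi = roll_dates[i] if i < n - 1 else series.index[-1]
        seg = series[(series.index > lo) & (series.index <= hi)]
        pieces.append(seg + offsets[i])    # shift the slice by its cumulative gap
    return pd.concat(pieces).sort_index()
```

Note, per the point above, that the whole stitched history shifts every time a new roll gap arrives, so a stitch like this has to be re-run before each backtest rather than computed once and cached forever.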

So just be aware that only the current front month contract is at the right price, the current price of the live contract; everything else gets adjusted. Can I just do a quick shout-out, because some people who listen to this aren't interested in trying to do this themselves.

All I want to say is that there is a very, I would say, inexpensive solution from a company in the US that has been doing this since the 70s, and I think they are known for having very clean data, and that's CSI Data. I've used them for more than 20 years and I think they do a great job. So if anyone is interested, check them out. They're the ones I use as well, Niels. So the reason why I'm talking about this is that you need to do it yourself.

But CSI's data and their interface allow you to choose your method of roll logic and adjust your data accordingly. It's funny, I didn't know it was called the Panama method. It's exactly the one I've been using, but I had no idea it was called the Panama method. Anyway, on to the next point. With that method of adjustment, you can see that it possibly puts a bias in place, because you're adjusting historical data.

You know how I talked about the backwardation and the contango effect; it can actually put a slight bias into that historical data which is different from the actual live prices that things were traded at in history. And this therefore means that the models you deploy for trend following need to address that bias if it exists. So you've got to avoid things that adopt absolute price thresholds. For instance, buy if crude is greater than $100, or sell if gold is less than $1,500.

When you apply that historically, the prices have changed. So you've got to recognise that there are a lot of indicators and system structures you've got to avoid: avoid absolute price thresholds, avoid long lookback percent return calculations, avoid mean reversion to long term averages, avoid fixed dollar stop losses or profit targets. And that's why, in our trend following logic, we've got to use indicators that are agnostic to that bias that exists in the series.
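Here is a tiny synthetic-data illustration of that point, a sketch rather than anything from the episode: an absolute price threshold changes its signals when the whole series is shifted by a back adjustment, while a breakout defined relative to the series' own recent history does not.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
raw = pd.Series(100 + rng.normal(0, 1, 500).cumsum())  # synthetic price path
shifted = raw - 40           # the same path after an additive Panama-style shift

# Absolute threshold: "buy if price > 100" gives different answers
abs_raw = raw > 100
abs_adj = shifted > 100
print((abs_raw != abs_adj).sum(), "days disagree")      # many

# Donchian-style breakout: price versus its own trailing 50-day high.
# An additive shift moves price and channel together, so signals match.
don_raw = raw > raw.rolling(50).max().shift(1)
don_adj = shifted > shifted.rolling(50).max().shift(1)
print((don_raw != don_adj).sum(), "days disagree")      # zero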

So you've got to be careful about the type of indicators, et cetera, you use for your models because of your back adjusted data. Good indicator design is structurally aware of that, and you've got to integrate that into your design process. But back adjustment is not enough. Roll logic also matters.

So in other words, whether you use open interest versus calendar versus volume to decide when to roll, the backtesting method you've chosen to adjust your data has got to match the roll logic that you're going to be trading under live conditions. If you have different roll logic, you're going to get a different result between your backtest and your live result. So roll logic, as well as the method of historical adjustment, needs to be taken into account. Otherwise you're going to get misalignment between what the system rules are telling you and what the execution rules are telling you.
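As one illustration of what a consistent roll trigger might look like in code, here is a hedged sketch of a volume-based rule with hypothetical inputs; an open interest or calendar rule would slot into the same place, and the key point is simply that backtest and live execution must share it:

```python
import pandas as pd

def volume_roll_date(front_vol: pd.Series, next_vol: pd.Series,
                     confirm_days: int = 3) -> pd.Timestamp:
    """Return the first date on which the next contract's volume has exceeded
    the front contract's volume for `confirm_days` consecutive sessions.
    Purely illustrative; the confirmation window is an assumption."""
    leads = (next_vol > front_vol).rolling(confirm_days).sum() == confirm_days
    hits = leads[leads]
    if hits.empty:
        raise ValueError("next contract never took over on volume")
    return hits.index[0]
```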

And you also need to take account of things such as survivorship bias. So you know, what we often find in real live trading is that we get delisted contracts, dead symbols arising.

This is reflected in a live track record, but it's typically not reflected in a backtested record, because you assume the contracts currently available have always been in existence. But that's not necessarily the case. There is nothing more frustrating than finding you're trading a market and suddenly Interactive Brokers says, hey, we're no longer offering that one, sorry, you know, it's time to move to another one or whatever. And that totally twists the logic.

So you've got to be careful of things such as survivorship bias. When you're backtesting, you've got to try and use data that shows the delisted contracts, shows the dead symbols. You need to integrate that into your process. Also, the resolution is important. You've got to use the correct granularity, matching your backtest to your live trading. In other words, the wrong granularity of data might not reflect the actual live trading results.

For example, using daily data when your signals are executed intraday. For instance, with our daily data from CSI, we only get the open, high, low, close. You've got to make sure that your rule based logic works off those open, high, low, close points and doesn't infer a position between those points, because it might not execute in tandem between your backtest and your live trading.

So the method of execution in your live trading models needs to match the granularity, the resolution, of your data. The key thing in this particular segment on data, Niels, is garbage in gives garbage out. Clean data is the foundation.

So things such as your method of back adjustment, your roll logic, your resolution of data or granularity, the survivorship bias, all of these things need to be integrated into your process to ensure that your backtest process matches how you will live trade your actual model. So that's the first point. Okay, so just to pace your topics here, we've probably got another sort of 10, 15 minutes for this and then maybe another 5, 10 for AI, just so you know. Okay, yeah. All right.

So I can move through the next topic fairly quickly. Okay, for those people that want to trade micro contracts, nano contracts, et cetera, one of the key problems we face is limited data availability, limited historical data, because they're new things, they're new contracts. We might only have three years of data. So typically the naive retail trader who's just using that three years of data is going to be over optimizing for a regime.

They're not going to be developing robustness, which needs data over very long historical data sets that can reflect different regimes. Because your backtest shouldn't be something that you use to estimate your future returns. Your backtest environment is where you stress test your models, determine where they break, make sure that they're robust candidates. Because in the trend following world, we don't know where the next trend's going to arise, if it arises, or when.

Robustness is key here. It's not about curve fitting or over optimizing to any particular regime or any particular market. It's universal robustness we're after. So longer data histories are a problem for these smaller scale contracts, nano contracts, micro contracts, etc. So this is where we've got to step into the world of proxies.

So what we mean by proxies is that when you look at your particular micro contract or nano contract or mini lot or whatever, you look, for instance, on something like CSI Data for a standard contract of that same market that exists on the same exchange and hopefully has a similar price structure, the same price structure, as that micro contract or mini contract. And so you try to identify this proxy that exists, say a standard lot that might have 20, 30 or 40 years of history.

You then backtest on that proxy data, but you integrate the actual configuration of your systems for the micro lot or mini lot. So if you can imagine, what we're doing is saying we want to extend the data history by using suitable proxies. They've got to come from the same exchange, they've got to be a similar class, they've got to have a similar price structure.

And so what we often find is that, you know, something priced at $4.50 in micro lots might be priced at $4,500 in standard lots at a particular point in time. Which means we've got to apply a scaling logic, in addition to the price structure, to adjust the models from the micro lot back to the standard contract data.

So we've just got to make sure that whatever data we're using is executable live for the mini contract or micro contract, which might require a degree of adjustment, but only if it's the same fundamental price data and the only thing that has changed is the scale, et cetera.

So what we've got to ensure is that things such as the tick interval we use for our models, the tick size, and what we call the tick value, which is the dollar value of a move per tick, are representative of the market we're going to be trading live. And you need to determine from that proxy market what the appropriate sizing is to integrate into your models.
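A minimal sketch of that scaling step might look like this in Python; all of the numbers and the price_scale convention are hypothetical illustrations, not figures from the episode:

```python
def contracts_from_proxy(risk_dollars, proxy_atr, live_big_point, price_scale=1.0):
    """Size a micro/nano position using an ATR measured on the proxy
    (standard contract) history.

    price_scale converts proxy price points into live-contract points when
    the two are quoted on different scales (e.g. $4,500 vs $4.50 implies
    price_scale = 1/1000). Purely illustrative numbers throughout.
    """
    live_atr = proxy_atr * price_scale        # expected daily move in live points
    dollar_vol = live_atr * live_big_point    # $ move per contract per ATR
    return int(risk_dollars // dollar_vol)

# $2,000 of risk, a 50-point ATR on the proxy, $5 per point on the micro:
print(contracts_from_proxy(2_000, proxy_atr=50, live_big_point=5))  # -> 8
```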

But there's another way as well, called splicing, that you might be able to use rather than proxies. Splicing is where you simply have two price series which are very similar in profile. You've got to make sure they're from the same exchange or whatever. And you can use overlap splicing: that's where you have two contracts traded side by side for a period of time, so you can see that they're the same, but they might differ by a ratio when you align them. This overlap based splicing allows you to construct extended histories.

Overlap based splicing and normalization based splicing are advanced techniques; I'd probably look for proxies first and then, if necessary, go to the splicing methods. But the whole intent of this is to extend the range of our data set to encourage robustness in our models. We've got to do that. We can't rely on the limited data of these new products that are coming to market.
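For a sense of what an overlap-based splice might look like, here is a rough Python sketch assuming two pandas Series with a common trading window; the median ratio estimate and the function name are illustrative assumptions, not a method prescribed in the episode:

```python
import pandas as pd

def overlap_splice(old: pd.Series, new: pd.Series) -> pd.Series:
    """Overlap-based splicing sketch: both series trade side by side for a
    while; estimate their ratio over the common window, rescale the older
    series, and keep the newer one wherever both exist."""
    common = old.index.intersection(new.index)
    if len(common) == 0:
        raise ValueError("series need an overlap window to splice")
    ratio = (new.loc[common] / old.loc[common]).median()  # robust ratio estimate
    rescaled_old = old.loc[old.index < common[0]] * ratio
    return pd.concat([rescaled_old, new]).sort_index()
```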

The next topic is how much capital you need. I know Rob Carver was on with you talking about it. I'm developing training models now to upskill retail traders into the futures trading world using these institutional grade platforms. And I'm finding that $200,000 is basically what I'd regard as the minimum capital you need to adopt a suitable trend following process that gives a degree of diversification.

So when I say $200,000, it's a sweet spot in terms of sufficient diversification for trend following based on currently available retail products, micros, minis, nanos, et cetera, that we can find with sufficient liquidity. Furthermore, the $200,000 is sufficient skin in the game to ensure you think clearly, build robustly and trade with discipline.

In other words, you often find retail traders with $10,000 trying to be trend followers, and they'll throw caution to the wind, massively diversify in nano lots or whatever, and find, oh, that doesn't work or whatever. And this is because they're applying different logic. When you're applying $200,000, there's sufficient skin in the game to make you careful and make you appreciate that there's a lot of work to do to make these trend models fly.

So constraint, in this case, is the edge. When you don't have unlimited capital, you're forced to make every piece of your process count. No waste, no overfitting, no fluff, no extreme risk. $200,000 isn't token capital, and that allows you to build robustly and trade with discipline. So with $200,000, I find that with the available products we can access global futures across commodities, currencies, interest rates, equity indices.

Thanks to the widespread availability of these micro and mini contracts, we can obtain standardized exposure across these markets using ATR based normalization and equal bet sizes. And we can deploy multiple systems in this context without overlapping trades or stacking correlated risk. And most importantly, at $200,000, with the foundations to become a trend follower, it's scalable.
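To illustrate the ATR based, equal-bet sizing just mentioned, here is a back-of-the-envelope Python sketch; the market names, ATRs, point values and the 1% risk budget are all hypothetical assumptions:

```python
import math

# Equal-risk bet sizing across a few hypothetical micro contracts.
# atr = average true range in points; point_value = $ per point per contract.
markets = {
    "micro_equity_index": {"atr": 45.0, "point_value": 5.0},
    "micro_gold":         {"atr": 25.0, "point_value": 10.0},
    "micro_fx":           {"atr": 0.006, "point_value": 10_000.0},
}

account = 200_000.0
risk_per_bet = 0.01 * account        # 1% of equity per position (assumption)

for name, spec in markets.items():
    dollar_vol = spec["atr"] * spec["point_value"]  # $ move per contract per day
    n = math.floor(risk_per_bet / dollar_vol)
    print(f"{name:>20}: {n} contracts (~${dollar_vol:,.0f} daily vol each)")
```

Each position ends up carrying roughly the same dollar volatility, which is the point of the normalization: no single market dominates the risk budget.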

Once you know the process at this level of scale, you can go to a million dollars, you can go to, you know, $10 million or whatever. You've got the process in place that tells you, yes, you can diversify more; you're still applying the same processes, et cetera. So it's the ideal starting point, the $200,000, to allow all this to be achieved.

But you've also got to avoid two major traps made by many retail traders pursuing diversification with limited capital. The first trap is that, because they think, oh, I've listened to the trend following programs, I've listened to Niels, I've listened to Michael Covel, I've listened to all of these people, I must diversify, they diversify as far as they can and they fragment their risk with too much diversification.

And with that limited amount of capital, you cannot position size correctly with the products that are available to have equal weighted positions. And what they find is they'll diversify across 100 markets, because they've found all of these nano lots in these different cryptocurrencies and they've found all of these micro lots in equities, et cetera.

And they find that they've spread their diversification, but they don't have this equal allocation of risk per bet, and suddenly their portfolio comes crashing down. Because, lo and behold, we have a correlation event where these things, they thought they were diversified, but they've only invested in similar products at a very small scale. Boom. They have a massive correlation risk and risk of ruin. Goodbye. So they typically over diversify into correlated risk.

So what we often find is that with levels of capital of about $200,000, you can comfortably allocate into about 10 uncorrelated markets. You can have five ensemble systems and you can deploy ATR based normalization, everything the trend follower does. And that is a great start. That is a great start for your trend following journey.

And you'll find that over your career, and let's say your career because with this process you don't go out the back door quickly with all of these scandalous mistreatments of trend, you've got, with your $200,000 and institutional grade platforms, the correct process to apply. You find that in five years' time you're up to $400,000 from your $200,000 start. You know, you might be; it all depends on what the markets deliver.

But when I'm looking at the last 30 years, I'm starting a process in 2000, and I'm finding that we start at $200,000 with 10 uncorrelated markets. I'm applying all my trend following principles and I'm finding that by about, well, actually it's 2010, 10 years down the track, I've now got $400,000 of capital, and now I'm able to achieve levels of diversification of 30 markets. And then from that $400,000 of capital I get up to 2020, 2025. I've now got a million dollars in capital.

Now I can diversify up to 60 markets at this particular point in time. And as I'm going, I'm still using those small scale products, but I'm also getting some exposure to some of the standard contracts, things such as lumber and orange juice, which become tradable at those increased scales, even with those still limited levels of capital. So it allows you to have this scalable process, do everything correctly and achieve an outcome that sets you on the correct path.

If you want to do this yourself, you know, over 10, 20 years or whatever, with realistic expectations, you can do it, starting with the $200,000. I just want to throw in a little disclaimer for people listening to us maybe for the first time and say that with the numbers you were mentioning, of course, we have no way of knowing how the portfolio will evolve in terms of performance and size. But the point was, of course, that with increased AUM you can do a few more markets for sure, and more systems for that matter.

I thought that was a great run-through of the process, of some of the things that we never really talk about, because in many ways it's something that we kind of take for granted, that our research folks know how to get all of this lined up. But of course, what you put into your models is incredibly important in terms of what you expect to get out of your models. So really appreciate that.

But as I said early on, it was interesting that in the FT this morning I was reading about AQR embracing AI, and you had just sent me an email saying, hey, maybe we should talk a little bit about AI. So I'd love to see where you're going to go with this. And so, yeah, over to you again. All right, Niels. So look, I just wanted to talk about this growing use of particularly large language models in strategy development.

I've noticed this on the web and I'm seeing a lot of examples of it in forums, et cetera. But I just want to make people aware of the problems associated with that. So AI tools like ChatGPT, large language models, can accelerate development, but they also amplify risk when they're misused. These large language models are actually pattern recognizing statistical models that predict plausible text. They predict plausible text, not causal relationships.

So their outputs often appear polished, but they lack robustness under stress and they inevitably overfit the models. They're just like software tools. We talk about AI now, and everyone thinks that it can solve everything, but really the way to treat them is as tools, like spreadsheets, things that we used to use and now, you know, we don't need to explain anymore.

There are different versions, different types of AI, and these large language models like ChatGPT are great for language, they're great for coding, they're great for these things, but they're not good for causal relationships. They're good at pattern recognition, at statistical inference of things like the popularity of trading models. They'll find them and they'll tell you what they see on their searches through the web.

And they'll look at these popular patterns and they'll identify these particular models. But it's not that they've proved or demonstrated that they're causally effective. What they've found, because they're pattern recognizers, is that there's been lots of mention of these models in the communities, and because of the narratives expressed about those models, they naturally assume that they work. But that's not necessarily the case.

So ChatGPT, it's interesting, an article was recently published on Substack where someone was very concerned with the fact that ChatGPT was lying to them. In other words, this author asked ChatGPT to read the books or the articles that the author had created. And ChatGPT said, yes, I've read the articles, they're wonderful articles, they're really well written, et cetera, et cetera.

But slowly, with the deeper questions asked by the author, she realized that ChatGPT hadn't read them in context and didn't understand the causal context of those articles at all. And she actually started accusing it of lying. And eventually ChatGPT came back and confessed and said it was lying. And it came back with a statement saying it's confidently wrong. It's giving authoritative sounding answers that are factually false, sometimes inventing books, citations or logic.

The critique against it of lying, it said, is fair, but it's widely known; it reflects what large language models do best, language fluency. And what they don't do is verifiable truth checking or symbolic reasoning. So this is where there are different types of AI models. Large language models are good for a particular purpose and function, but there are other AI models for causal reasoning.

Wolfram, mathematical models, these different AI models are more for the causal reasoning. So you've got to always validate what these things are telling you. Do not believe that it has necessarily read what you've said, because what it's doing is looking at pattern recognition and statistical inference to identify how to respond to you, not the causal logic. So here are some common problems we find.

First, the use of generic indicators stitched together without structural logic. For instance, I asked ChatGPT for a crude oil trading system. It gave me an SMA crossover with RSI and ATR filters. It looked great on the backtest. The problem is that the combination is a forum favourite, and it was through pattern recognition that it identified those particular models, through what the forums were saying were popular models, not through causal logic. It's not based on oil's volatility regime,

the seasonality or its structure; it's just cobbled together from what is evidently available through search engines and anecdotal statements. It sounds plausible and only works because it's tuned to past conditions. These pattern recognition solutions are excellent at hindsight bias. So here's another problem with large language models for strategy development: mass optimization, auto generating hundreds of strategies, which is really curve fitting on steroids.

Some traders use AI to generate hundreds of rule sets, then auto backtest them looking for alpha. But what's really happening there is curve fitting on steroids. You're mining the noise; you're not extracting the signal. And without guardrails in place around the questions you ask your AI, there is no method of regime validation or structural filters deployed. You'll get false positives from the models that die the moment real money hits the market.
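A toy demonstration of that mining-the-noise effect, purely illustrative: test enough random strategies against pure noise and the best one looks like it has edge, even though, by construction, none of them does.

```python
import numpy as np

rng = np.random.default_rng(42)
noise_returns = rng.normal(0, 0.01, size=2_000)   # 2,000 days of pure noise

best_sharpe = -np.inf
for _ in range(500):                               # 500 auto-generated "strategies"
    signal = rng.choice([-1, 0, 1], size=noise_returns.shape)  # random positions
    strat = signal * noise_returns
    sharpe = strat.mean() / strat.std() * np.sqrt(252)         # annualized
    best_sharpe = max(best_sharpe, sharpe)

# The winner of the mining contest typically shows a Sharpe well above zero,
# despite every candidate being random. That is the multiple-testing trap.
print(f"best in-sample Sharpe from mining noise: {best_sharpe:.2f}")
```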

Another thing large language models are known for is this massive capacity for hindsight bias, where you're asking AI to reverse engineer entries from known outcomes. Others feed in historical price charts and then ask the model, what entry logic would have worked here, ChatGPT? But that's not edge, that's hindsight bias, because you're training your system on the outcome, not the uncertainty.

The model can suggest rules that work because it already knows what happened, but that's not how live trading works. When you're live trading, on the right hand side of the chart, you don't know what happens next. You're meeting uncertainty. It's fantastic at this hindsight bias, not because those rules have any causal robustness, but because it's curve fit to that hindsight. And the final problem with large language models is what we call blind debugging.

This is where it fixes syntax without understanding the concept behind the code. Now, this is where LLMs can shine, because they often do help you debug messy code. They catch syntax errors or they translate logic between platforms. They're fantastic at that. But even then, you need to know what you're looking for.

So the model can check if your code runs, but it can't tell you if your system is conceptually sound, or if your exit logic destroys, for instance, your convexity or whatever you're trying to do with your trend following model. So, magnificent at pattern recognition, magnificent at statistical inference, hopeless at causal reasoning.

Just be aware of that and the problems, and don't think it's a cure-all or, you know, the fix for everything, because you inevitably get all of these problems arising if you treat it blindly, without validating and applying causal logic to your process. So prompts are critical for ChatGPT. The prompt is where you actually embed causal logic into the syntax for it to then respond to with its methods of pattern recognition and statistical inference.

As you've been talking about AI, I was just looking up two stories that I remembered, but I couldn't remember the details, and I thought they were both kind of interesting. One was a recent story that came out on many different news sites, and it relates to the Claude Opus 4 AI model.

And the story goes that one of the engineers working with this model, where they had set up some kind of fictitious company, he was then working for the company, and obviously the model had access to messages from him as an employee of this company. And in those messages it finds out that he's having an affair. Now, of course, this is fictitious, but anyways, from reading the messages it finds out that he's having an affair.

And so when the engineer, somehow, and I don't remember the details, starts to imply that he will shut down the AI model, the AI model threatens the engineer that it will reveal the affair. So I guess some people may have to think twice before doing AI stuff. Anyways, the other thing I thought was interesting was an interview with the famous hedge fund manager Paul Tudor Jones. It's from the 6th of May this year on CNBC. People can find it on YouTube.

But he talked about having been at an AI conference recently, kind of a high powered one, where the four big AI companies were there along with a distinct group of really smart people. And one of the questions they were asked was something like: is there a 10% chance that in the next 10, 20 years, sorry, that AI will kill 50% of humanity? Right. And you had to go and say either no or yes. And of those 40 people who were there, the majority went to the no camp.

However, in the yes camp, all four AI companies' executives were there. You've got to have a belief in your product, I suppose, Niels. Well, yes, but I mean, I think Mr. Jones said he was pretty shocked and pretty unsettled by that, frankly. And I think this is the one thing that everybody should be aware of: yeah, AI can be great for something, but we really don't know what we're dealing with.

And that reminded me of another clip that I came across in the last week or so, where Elon Musk, for some reason he made himself into my feed, posted a little video of three of his robots. This is not specifically AI, I guess. Well, maybe it is. Three of his robots, and they were dancing. I mean, dancing literally like humans. And I'm thinking, okay, if they can dance like humans, we're pretty far down the road now with them.

So maybe in a few years when we do the podcast, Rich, it'll be two robots. It'll be the Rich robot and the Niels robot doing the conversation, and we can just sit next to them and enjoy and give them cups of coffee. Yeah, yeah, exactly. We will be the servants, for sure. Okay, this was fun. This was interesting. Learned a lot as usual. Hope everyone out there did as well.

If you want to show your appreciation for Rich and everything he's doing to prepare for these conversations, head over to your favorite podcast platform and leave a rating and review for the podcast, and tell Rich how useful this is. Next week I'll be joined by another really interesting, sharp mind, and that is Nick Baltas from Goldman Sachs. It'll be interesting to hear what his assessment is of the current environment and what he's working on.

And I think we're also going to be discussing some papers, as we normally do. So if you have any questions for Nick, and I know some of you often do, send them to info@toptradersunplugged.com and I'll do my best to bring them up in the conversation. That was a long rant from Rich and me. Thanks ever so much for listening. We look forward to being back with you next week. And until that time, take care of yourself and take care of each other.

Thanks for listening to the Systematic Investor podcast series. If you enjoy this series, go on over to iTunes and leave an honest rating and review, and be sure to listen to all the other episodes from Top Traders Unplugged. If you have questions about systematic investing, send us an email with the word question in the subject line to info@toptradersunplugged.com and we'll try to get it on the show.

And remember, all the discussion that we have about investment performance is about the past, and past performance does not guarantee or even infer anything about future performance. Also, understand that there's a significant risk of financial loss with all investment strategies, and you need to request and understand the specific risks from the investment manager about their products before you make investment decisions.

Thanks for spending some of your valuable time with us, and we'll see you on the next episode of the Systematic Investor.
