[00:00:00] Mike Harris: Yeah, someone wrote on X that low interest rates are better for the working class, working people, and I said, no, it is counterintuitive, but higher interest rates are better for the working people. This is counterintuitive. Why?
[00:00:33] Adam Butler: All right. Welcome. Thank you so much for joining us today. I'm very excited to have as my guest Mike Harris, who is the founder and principal of Price Action Lab. I think you'll discover that that is only one of many different things that Mike has worked on over the years. Today he's going to take us on something of an intellectual odyssey that connects a variety of seemingly disparate concepts and events through history, all of which seem to be converging on a rather surprising, and potentially shocking to some, outcome. Mike very graciously has prepared a deck to help guide the conversation.
But Mike, before we get into your presentation, maybe just give everybody a short history of your intellectual journey. What have you worked on? What have you been part of? What major events have you witnessed, and then sort of witnessed history moving on from, over the span of your career?
[00:01:47] Mike Harris: Yeah, hi Adam. Thank you for having me. To start with, yes, very briefly: you know, I studied engineering and my focus was on robotics and control. My thesis for my Master's was on observability and controllability of flexible robotic systems. And then I continued working on my PhD, with learning algorithms, learning by mistakes, for robotics, although my advisor changed that to control of large space structures, and I didn't want to do that.
So I moved to financial engineering, and I joined Wall Street for easy money. Before that, I worked on robotics for an S&P 500 corporation. We did a lot of work on high-speed assembly of electronics, and we built controllers, expert systems. It was the late '80s; artificial intelligence was a hot field at the time. Many books were written, and people were talking about things similar to what they are talking about right now: robots putting people out of work, and stuff like that. But we were stuck with expert systems, because simply, and this is one of the points of this talk, there were no data centers.
And for me, AI is data. I mean, this is what it boils down to. Then I got involved with trading, and you know my story. I traded trend-following currencies and I made some money with a partner. He wanted to start a fund. I didn't want to get involved with that, and I quit. I traveled a bit, and then I went to study philosophy of science.
I have three fields. I love engineering; I like to get my hands dirty. I have designed many mechanical parts in my life, for robots especially, but also for general automation machines, even the pneumatic parts we were using to automate similar things. The more precise mechanical parts we were actually producing in Germany, because they are the best in precision in the world; I think they are still the best in precision machinery.
And I designed controllers, PID controllers, I'm sure you know them, for robotics and other things. I love this field, but I couldn't stay, because it was misdirected. The objective was to remove people from work, not to increase productivity. It was like a shareholder-driven mania: remove the people from work. And when I say I designed a machine, everything I say, I designed with other people, and the patents I have are with other people, together, because it was teamwork. You cannot do anything alone in these areas.
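Mike mentions designing PID (proportional-integral-derivative) controllers for robotics. For readers unfamiliar with them, here is a minimal discrete-time PID sketch in Python; the gains and the toy first-order plant below are illustrative assumptions, not anything from his actual work:

```python
# Minimal discrete-time PID controller: u = Kp*e + Ki*integral(e) + Kd*de/dt
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt          # accumulate error for the I term
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Drive a toy first-order plant (x' = u - x) toward a setpoint of 1.0
pid = PID(kp=2.0, ki=1.0, kd=0.1, dt=0.01)
state = 0.0
for _ in range(2000):                             # simulate 20 seconds
    u = pid.update(1.0, state)
    state += (u - state) * 0.01                   # Euler step of the plant
print(round(state, 3))                            # settles near the setpoint
```

The integral term is what removes the steady-state error here; a pure proportional controller would settle below the setpoint on this plant.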
I met some excellent people, technicians and engineers and physicists. So, the mania was how to replace people, and they didn't pay attention to increasing productivity. Once, I improved a machine other people had designed, and maybe we put 40,000 people out of work. I mean, that was post hoc; I didn't like that. I wish I could go back to the past and undo that machine, because actually, people could do that job. There was no increase in productivity. There was no increase in quality. They just wanted to get rid of these 40,000 people, and I'm telling you, these people were happy to do that work.
They had their families, and they had increasing real earnings. They had their homes, the kids were studying, and they put them out of work. I don't even know what happened to those people. Maybe a small percentage were able to get new skills, but what about the other people?
So anyway, robotics was always there. Since I was a kid, I was building robots. It was the big love of my life, along with philosophy. When I was in the university, up to the Master's, I took many, many courses on philosophy. Any elective I could take, or even an extra course if I had time, I would choose a philosophy course: the classics, existentialism, pragmatism, analytic philosophy, and all these various movements.
And of course, the markets. My partner, a friend at the time, was responsible for transmitting the virus to me, because I was on Wall Street as a fixed income guy. And at Columbia we had some very good professors. They taught us how to do portfolio optimization, and I was working in that field, and then he came to me and said, we are trading money, we need someone to back-test. And that's how I started. And this is a back…
[00:08:17] Adam Butler: Then, Michael, what language were you back-testing in?
[00:08:21] Mike Harris: We wrote our own programs, and we also used an early back-testing program from the Cruz brothers, System Writer Plus, because it had the data. You know, getting data in the early nineties, mid-nineties, was difficult.
[00:08:38] Adam Butler: Right. So you used to buy back-testing software that had the data as part of the software.
[00:08:43] Mike Harris: I mean, they were actually selling the data along with the back-testing. You can write a back-testing algorithm in Excel, right? Especially for a trend-following system, it's trivial. You know how easy it is to develop a back-test now. Maybe for intraday trading and high frequency it's more difficult, but not for the stuff we were doing: dumb double moving average envelopes and Donchian channel breakouts.
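A back-test of the kind Mike describes really is trivial to write. Here is a minimal, self-contained sketch of a Donchian-channel breakout system in Python; the synthetic random-walk prices and the 20-bar lookback are illustrative assumptions, not his actual system:

```python
import random

random.seed(42)
# Synthetic price series (random walk), purely for illustration
prices = [100.0]
for _ in range(500):
    prices.append(prices[-1] + random.gauss(0, 1))

N = 20                     # Donchian channel lookback (arbitrary choice)
position = 0               # +1 long, -1 short, 0 flat
entry_price = 0.0
pnl = 0.0
trades = 0

for i in range(N, len(prices)):
    window = prices[i - N:i]            # prior N bars, excluding today
    upper, lower = max(window), min(window)
    p = prices[i]
    if position <= 0 and p > upper:     # breakout above the channel: go long
        if position == -1:
            pnl += entry_price - p      # close the short first
        position, entry_price = 1, p
        trades += 1
    elif position >= 0 and p < lower:   # breakdown below the channel: go short
        if position == 1:
            pnl += p - entry_price      # close the long first
        position, entry_price = -1, p
        trades += 1

# Mark any open position to the last price
if position == 1:
    pnl += prices[-1] - entry_price
elif position == -1:
    pnl += entry_price - prices[-1]

print(f"trades={trades}, pnl={pnl:.2f}")
```

On real data you would also account for transaction costs and slippage, but the core logic of a channel breakout system is just these few comparisons.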
So you know, that was enough. And actually, I think we started with Basic and then we moved to that software, and we were trading. We made some good money. I didn't want to continue; I wanted to study philosophy of science, to get a PhD, actually. So I applied to a school, which I won't name, and I gave a presentation. I wanted to do a PhD on the nature of space-time, relational versus substantival; that's state-of-the-art philosophy. And I realized then, and this is my personal opinion, that maybe the objective of these schools was to misdirect inquiry, not to actually solve problems.
And I got this idea and I said, you know, this is a black hole. I'm going to spend four years, and they're going to tell me to direct my research toward some specific track that is a dead end, because that's how they churn out new PhDs. One guy writes a PhD thesis, and the other guy changes the color of the banana. You know, the banana problem is a problem in the philosophy of science.
[00:11:06] Adam Butler: You want to walk us through the banana problem?
[00:11:09] Mike Harris: Well, you know, it's a classic problem. Someone has lived all his life in a dark room and has never seen a banana. When they finally get out, after many years, you show them a green banana, one you've painted. Is that real knowledge? I mean, that's the idealism versus materialism problem. Are the conceptions, the perceptions, what is real, or is there an objective reality out there? We're going to talk about this briefly. For instance, when you dream and you dream a green banana, is that actually, that's a quale, they call it sometimes, is that knowledge about something? So it gets complicated. We're going to talk about a more advanced problem, the knowability problem.
[00:12:26] Adam Butler: Okay, well let’s not tease them too much. Michael, you've mentioned that we're going to talk about several things and I want to make sure we actually get a chance to talk about those things.
[00:12:36] Mike Harris: Yes, we’ll do it.
[00:12:37] Adam Butler: Yeah. But before we do, I know that this discussion and this presentation were motivated by a short Twitter exchange that we had… you want to just provide a quick anecdote on that?
[00:12:54] Mike Harris: Exactly. Yeah, someone wrote on X that low interest rates are better for the working class, working people, and I said, no, it is counterintuitive, but higher interest rates are better for the working people. This is counterintuitive. Why? We will talk about it. Let me start with my presentation, if we can.
[00:13:24] Adam Butler: Of course, yeah. So this is what kicked it all off, right? That claim appealed to me very intuitively, but I wanted to learn more about the details, and you said, I have been thinking about this for 30 years, and you know, I can't describe it in a tweet. And then we agreed that you would create a presentation and walk us through it. So, you know, I'm very grateful that you took the time to do that.
[00:13:54] Mike Harris: Yeah. To understand AI, for people that are not familiar with the field, and not that I am familiar in depth, because I think very few people are, but I'm trying, you have to look at the historical progression of things. And AGI is not a unique singularity; it is a series, a progression of singularities. So I will start my presentation. Let me bring up my slides: From Aristotle to Large Language Models, which sometimes I call long language models, and some people do.
[00:14:45] Adam Butler: They’re both large and long, I think, in reality.
[00:14:48] Mike Harris: Yeah, yeah. Now they are very, very large: from Aristotle to extremely large language models. That's right. And I have a picture here of the Antikythera analog computer. Why do I have this picture here, and what is it? Most people are not familiar with it. These were fragments of a machine found by sponge divers off an island in the Aegean. They looked at it and didn't know what it was, so they took it to some scientists, and thank God this artifact escaped trafficking and ended up in the hands of some good people. Because this machine, please, if you are not familiar with it, use your search engine and find out more about it.
It's dated to about 200 years before Christ, using advanced dating techniques, and it was an analog computer that did the following things: it calculated with high precision the position of the sun, the moon, and the five major planets they knew up to that point; it calculated the Metonic cycle of the moon, which is every 19 years, and the moon phases; and it calculated how much time was left until the Olympic Games, and eclipses.
So, why am I referencing this? Because this discovery falsifies the hypothesis that there is a simple progression in science: that we didn't know anything about these things, and then Copernicus came, Kepler, Galileo, Newton, and we found out about the celestial objects, the planets, and everything else. To have this device 200 years before Christ, the knowledge must have come from thousands of years earlier, from other civilizations, not the Greeks, maybe civilizations that we don't even know. This was a collection of knowledge that some craftsmen, engineers, were able to translate into a machine.
So why do I say this? Because, and I will talk later about another thing, I'm an anti-realist. Some people get confused about what anti-realist means; I will explain later on, a moderate one. And I think we know everything. Now, you may be surprised. It will sound crazy. But I think we know everything, and that's quite a statement. That's quite a statement.
It's a strong statement, right? And why? And where is the knowledge if we know everything? I will try to explain and why AGI is a game changer. It's a game changer, but there is a price. There's a price for that. So if we go to the next one, why? Because you can't handle the truth. That's why we don't know everything.
A Few Good Men, 1992. Nice times. So now we come to the tweet that triggered the debate, I mean the discussion. Are lower or higher rates better for the working class? There are two major policies in this world: pro-capital and pro-labor. They may blend, but you can easily make a demarcation. In pro-capital policy, the rates are kept low, the debt is high, and inflation is low. Why? Because there is weak wage growth and inequality. People cannot borrow, they don't make money, so inflation stays low. The debt is high because in pro-capital policies, the government finances speculative investments. And I will talk about why.
In pro-labor policies, the government pursues rising real wages. Let's go from right to left. Rising real wages make inflation high, but the debt is lower, because they don't finance speculative investments, only traditional ones. And we will show an example. And the rates are higher because inflation is high. So this is the demarcation.
Now, a lot of people will debate this, but there are some people, yesterday I saw it on Twitter: oh, the Fed is going to cut rates and make it easier for people to buy a house. No, it will make it more difficult for people to buy a house, because the rich will get loans and buy the houses, send the prices up, and the working class will not be able to afford them. And you have seen this. You have seen this. There are very few countries that have pro-labor policies.
China is moving towards a pro-labor policy because they don't want a social backlash; they want to avoid it. And maybe Korea had some pro-labor policies, and of course some Latin American countries, but there the inflation got out of control. The problem with pro-labor policies is that you can get a bifurcation in inflation, and then rates will not do anything to stop it, especially if you don't have a reserve currency. If your currency is junk, basically no one wants it. If you issue bonds in your currency, no one wants them. So, that's it.
[00:22:58] Adam Butler: So, before we move on, I think it will be on many people's minds whether this division between pro-capital and pro-labor is conditional or unconditional. What I mean by that is: in a system with a low debt load, say the late 1970s and much of the 1980s, until the high-yield junk bond boom, there was very low debt, and so higher interest rates didn't have the same impact as we might expect them to have today, right, where the economy is already extremely indebted? The private sector is indebted, the corporate sector is indebted, and the public sector is heavily indebted. Do these divisions have the same meaning in a low-debt state as they do in a high-debt state?
[00:24:15] Mike Harris: It's hard to say, because there is only one country that has a reserve currency, United States, and the policy is basically pro-capital, but employment depends on the health of the corporate sector. On the other hand, in some other countries, the government is the employer of last resort. I don't know if it makes sense what I'm saying. I'm not sure.
[00:24:52] Adam Butler: Yeah, that does make sense, absolutely. I mean, we would probably agree that in a healthy economy, the marginal demand for labor is coming from the corporate sector and not from the public sector, right? I guess what I'm getting at here is that we've settled into a high-debt condition, and in a high-debt condition maybe lower rates are also somewhat pro-labor, but at the margin they are much more pro-capital than they are pro-labor, right? So, you know, in a high-debt situation, low rates benefit both capital and labor, but they disproportionately benefit capital.
[00:25:47] Mike Harris: Yes, and that's a very good point. I agree with you, but here is the catch: when the debt goes to finance technologies that displace people, then you have a problem. Gotcha. Okay. Excellent. And we will see here a traditional investment: Altria Group. I hope many of you know that Altria Group, tobacco, has the highest dividend yield in the U.S. market. Eight percent. New highs. Now, they say tobacco is bad for you, but GPUs are good for you. GPUs are good for you. NVIDIA. Tobacco smoking is bad for you, but sitting all day, eating pizza, drinking soda, and playing on Instagram or Facebook is good for you. And of course, we can use your data. A 0.034 percent dividend yield.
[00:27:02] Adam Butler: Right. And you could expand this. It doesn't need to be NVIDIA. It could be Meta. It could be Google. Yeah, yeah. Well, I don't want to pick on Electronic Arts. Yeah. You know. Whatever. Yes.
[00:27:14] Mike Harris: Yeah. So, here are the two charts for the competition for resources. Traditional industry: by the way, the red line up there is the unadjusted stock price, with the splits. You see them; I just put it there to show how the dividends reinforce this strong uptrend. Eight percent. I mean, it pays as much in dividends as the S&P makes in total return, right? Almost, right? Almost. This is a traditional industry that is under pressure, you know, smoking is bad for people. But what is good for people? I don't know, NVIDIA, with a big, huge jump of about one astronomical unit here.
How did we get to this? And the next mess is transhumanism, the solution. I mean, it's a crazy link there, but we will see. This is the federal debt, of course, up to about 36 trillion. Let me get my mouse over here. You see, it went from linear to somewhat parabolic, and because it's a log chart, the linear stretch over here means the actual debt was growing linearly. That's the time they tried to balance the budget.
[00:29:08] Adam Butler: Yeah, under Clinton. Yeah.
[00:29:09] Mike Harris: Yeah, and then we got the dot-com bust, because the money dried up. Okay, so there was this bust. There was the new technology, the internet.
[00:29:30] Adam Butler: They needed more money to build out all the networking infrastructure; the government, and the world, kind of financed the build-out of the internet.
[00:29:40] Mike Harris: But Cisco and Intel are still below their 2000 highs on an adjusted basis, right. So what is the link between the public debt and the stock market? It's here. The red line is the S&P, and the black line is the public debt. You see over here where it got flat. We had the bust, and then they immediately increased the deficits, but then we got the Chinese imbalance. I attribute this to the Chinese peg, artificial currency manipulation by the Chinese. All this money flooded into the United States, pushed interest rates down, and created the real estate crisis. We got this, and then they decided to go bazookas.
[00:30:52] Adam Butler: Right, so the Chinese mercantilist machine, after they joined the World Trade Organization, created an enormous amount of profit that would either need to be reflected in an appreciating yuan, or in recycling that accumulated profit back into Treasuries, which is what they opted to do, right? They pegged the currency and they bought an enormous amount of Treasuries and built up their foreign reserves.
[00:31:25] Mike Harris: You know, some people don't get it, because they don't understand it, or because it's not their job, but if you accumulate U.S. dollar reserves, you have two choices. One is to invest them in U.S. assets, paper assets or even physical assets. The other is to convert them. If you convert them, you're going to push the dollar down, and that's not good for your exports. So the Chinese flooded the U.S. with money and created this huge imbalance.
You see it in many books; you know, I didn't discover it. The Chinese created this, because they artificially pegged the currency to the dollar, I think at around eight, for many years. For many years. Now, by 2005 the Chinese understood they were going to create a worldwide financial crisis, and they started appreciating the currency, but it was too late. It was too late by then.
So to keep the stock market up, you have to increase the debt. Why? Because the investments are not the Altria Groups, or agricultural machinery. They are speculative. If you don't issue debt to finance these companies, the market will collapse, like in the dot-com bust. I mean, we have an example.
We have data. And I give you an example here of what happened in the 90s: a 10-year average yield of 6%, average CPI for the whole nineties of 3.5%, and Fed funds averaging 4 to 4.5%. The S&P 500 total return goes up 4, 5, 6% every year, positive, except 1994. I remember that. And our parents bought houses in the 70s with 8 percent interest rates, and they were able to pay them off in 10 years, even with higher interest rates. Why? Because, as I said before, of the pro-labor policies then, with rising real earnings. And then of course, and I think you have talked about this in some of your podcasts, came the financial repression.
[00:34:29] Adam Butler: Yeah. So yeah, I love sitting down with my parents and their friends who bought homes in the early seventies, and then bought nicer homes in the late seventies and they bought their first home at 0.75 times their income, exactly.
And their second home at one times their income. And yeah, they paid 10 to 15% interest rates for a few years until they were able to refinance much lower, but their wage growth exceeded the interest rate that they were paying. And so it seems like the interest rates were onerous, but in fact, relative to their growth and earnings, they weren't.
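The arithmetic behind this point is easy to check: with a fixed-rate mortgage the payment is constant, so when wages grow faster than the payment, the burden shrinks every year. Here is a small sketch; the 10 percent rate, 7 percent wage growth, 10-year term, and 1x-income house price are illustrative assumptions, roughly in the spirit of the numbers mentioned above, not actual 1970s figures:

```python
def payment_share_over_time(principal, rate, years, income, wage_growth):
    """Fixed annual mortgage payment as a fraction of (growing) income, year by year."""
    # Standard annuity payment for a fixed-rate, fully amortizing loan
    payment = principal * rate / (1 - (1 + rate) ** -years)
    shares = []
    for _ in range(years):
        shares.append(payment / income)
        income *= 1 + wage_growth          # wages compound each year
    return shares

# House at 1x income, 10% rate, 10-year amortization, wages growing 7%/year
shares = payment_share_over_time(principal=60_000, rate=0.10, years=10,
                                 income=60_000, wage_growth=0.07)
print(f"year 1: {shares[0]:.1%}, year 10: {shares[-1]:.1%}")
```

The payment starts at roughly a sixth of income and falls steadily from there, which is why a nominally "onerous" rate was manageable for households with rising real earnings.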
[00:35:11] Mike Harris: People were not paying attention to interest rates. I remember this; I was a kid. No one spoke about it. Today there is panic, panic in social media, about whether it's going to be 25 basis points or 50. I mean, this is insanity, if the system is so sensitive. And senators, how many senators wrote a letter to Powell to lower by 75 basis points, otherwise it would be the end? It's obscene, and it's pathetic. I agree with you. Thank you for agreeing, because, you know, it makes me feel good; sometimes I think I'm alone. If the future of the strongest country ever depends on an interest rate cut… Now let's go to the main theme, if you agree. You are the host.
[00:36:24] Adam Butler: Oh, I think there'll be, there'll be some points of contention here, but that's what will make this fun.
[00:36:29] Mike Harris: Okay. Why the future doesn't need us. Bill Joy, co-founder and chief scientist of Sun Microsystems, said this in an article in 2000. He's still active today; he's investing in GNR: genetics, nanotechnology, and robotics. And he was misunderstood. People called him an obscurantist. But Bill Joy was trying to make people aware that there are technologies that can grow exponentially, like what is happening now. We are in the early stages of an exponential rise in artificial intelligence due to data centers and the collection of data. There are astronomical data, there are social data, atmospheric data, whatever; everything just keeps piling up, and these machines crunch the data every day.
And in 2000, this man could see it, and he was accused; I mean, people wrote articles saying he was trying to obscure things. But I agree 100 percent with him. And not only that, I go one step further: that the future does not need us is a natural progression, and we will see later why. Bill Joy is an amazing person. On August 15, 1971, Nixon ended the gold standard. On November 15, 1971, Intel produced the 4004 CPU. Did that change the world? Who needed the space race? Some people ask, why didn't we go back to the moon? The space race was based on the idea that whoever dominates space will dominate, you know, the Russians, the Chinese, the Americans.
Here, we discovered the CPU. This is the tool of domination. This is the tool that will provide financial and maybe other types of domination: social, geopolitical, ideological. So we forgot about space, right? Forgot. It was not necessary. We had to concentrate on semiconductors, Silicon Valley. Of course, the gold standard was an impediment to digital growth. We needed to finance this growth.
And now fiat is a drag on AI growth. OpenAI said they need $7 trillion; they need $7 trillion for data centers, for AGI. Well, Central Bank Digital Currency, the solution is there. In a new virtual world with AI, debt levels will be irrelevant. Whether it's 36 trillion or 73 or 200, it doesn't matter, because the Fed will be irrelevant. There will not be any Fed; there will be a Treasury, and the digital certificates, these currencies, will control the consumption of capital and regulate inflation. The monetary policy will be in the currency itself.
Now, monetary policy is how you set the interest rate and the amount of currency you print. With this digital, virtual money, they combine everything in one package. Do we need to cut consumption? We limit how much of this or that people can buy. We are talking about a new socioeconomic and political model.
[00:42:19] Rodrigo Gordillo: Sorry to interrupt, but I did want to take a quick second to remind listeners that while we do absolutely love providing our audience with world class guests and weekly investment insights, we wanted to remind you that we actually do our best work outside of this podcast, and we try to do this by providing cutting edge, globally diversified, and systematic investment strategies that are designed to be broadly non-correlated to traditional equity and bond portfolios.
So we actually manage private and public funds as well as bespoke separately managed accounts for investors that seek the potential to smooth out portfolio returns in the long run. So if you do want to see that theory that we've been talking about put into practice, please do go ahead and check us out at www.investresolve.com. Now back to the podcast.
[00:43:03] Adam Butler: So, maybe just expand a little bit on why the gold standard was a drag on digital growth. I think I understand that reasonably well, but I think it's worth reviewing. And then it requires a little more fleshing out why fiat is a drag on AI growth, and why virtual money will fill that void.
[00:43:26] Mike Harris: Well, if you had to have an amount of gold to back your currency, there was only so much money you could issue. It's simple. So they cut this link. I mean, it didn't happen suddenly in 1971. I think they had gone down to, what, 15 or 20 percent, I don't remember, like with the bank reserve standards, the multiplier. Now it's, you know, no reserves.
[00:44:05] Adam Butler: Oh, the reserve standards, yeah. I mean, they started out in the 20 percent range and now they're zero. I think they went to zero in the late 90s.
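The reserve-requirement mechanics being discussed can be made concrete with the textbook money multiplier: with a reserve ratio r, each dollar of base money supports up to 1/r dollars of deposits as it is lent and redeposited. A small sketch, with illustrative ratios:

```python
def money_multiplier_expansion(initial_deposit, reserve_ratio, rounds=1000):
    """Iteratively re-lend deposits, holding back the required reserve each round."""
    total_money = 0.0
    deposit = initial_deposit
    for _ in range(rounds):
        total_money += deposit
        deposit *= (1 - reserve_ratio)   # banks lend out the rest, which is redeposited
    return total_money

# 20% reserve ratio: $100 of base money supports up to $500 (1/0.20 = 5x)
print(round(money_multiplier_expansion(100, 0.20), 2))   # → 500.0
# As the reserve ratio approaches zero the series never converges:
# with no required reserves there is no mechanical cap on deposit creation.
```

This is the sense in which moving from a 20 percent requirement to zero removes the hard limit the speakers are referring to.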
[00:44:15] Mike Harris: I think Canada was the first to do this.
[00:44:19] Adam Butler: Why am I not surprised?
[00:44:23] Mike Harris: Yeah. So, you know, if people go to the bank and ask for gold for their currency, you know, there's a limited supply.
[00:44:39] Adam Butler: Right, so it limits the supply of capital. The thesis was that, in order to take advantage of the full innovation potential of the economy in this new age of digital infrastructure, the growth rate of the economy cannot be burdened by a fixed reserve ratio to…
[00:45:06] Mike Harris: Let's call it limited, because of course they could write certificates on gold, but when the market sensed that more certificates were written than there was gold to back them, like with the futures markets, there was not enough supply. You know, the market was going crazy.
[00:45:28] Adam Butler: There was a huge squeeze.
[00:45:32] Mike Harris: So now, to have this AGI as a tool of superiority, and look, this is an important technology, AGI. Let's make a parenthesis here. Imagine this system, and then, you know, superintelligence, okay? Imagine a system where you can make a query: design a new super weapon for me. I'm saying this and I stop here, just to give an idea of this technology. And don't pay so much attention to papers; you know, a lot of papers come out of China on large language models and new techniques. I don't pay so much attention. I think the serious technology is not disclosed, right? People will see the results suddenly. People will see the results suddenly.
[00:47:06] Adam Butler: Yeah, because the opportunity is so large for the agent, organization, institution, company, et cetera, that is able to make that AGI/ASI leap first. The value accretion potential is just so huge that, and I just want to make sure I understand, all of the incentives are such that it's very unlikely that the most important research is being published in the public domain.
[00:47:39] Mike Harris: Exactly. Like with many other fields; I completely agree with you. And you know, there are two players, the USA and China. Russia is out of the game, India is out of the game. And this is the clash: who is going to get there first? Because we don't know. We don't know. I mean, there are two schools. One school says that AGI won't find anything, and another school, my school, says that AGI will find everything. Now, I always try to separate, and I want to make this clear, what my feelings are, and my feeling is that AI is not a good technology for people. And in reality, you see most accounts on social media, because they don't like AI, they try to find reasons why it will not succeed.
[00:49:06] Adam Butler: Right. Yeah, I see that a lot.
[00:49:08] Mike Harris: Yeah, you know, sometimes I make this mistake too, because of anguish, because of desperation. But you have to separate these two. Most people out there completely blast AI because they are afraid for the future of the human race.
[00:49:46] Adam Butler: Let me propose a frame here, just because I want to make sure that we close the loop on the gold standard and fiat. Is it fair of me to analogize in this way? In the 1970s, the focus was on existential competition with Russia, and there were several fronts in that competition. One of them was technology, and economic might and power, and the lever through which the Americans sought economic power was innovation. They felt that the gold standard was holding them back in this existential battle against their rival, Russia, so they decided to break the gold standard to facilitate an acceleration in technological development.
Now, skipping ahead a little bit: in the 70s and 80s, the big technological races included an arms race on nuclear, as an example, but there were others as well. Is it fair to say that there's an analog today, with China instead of Russia, and the battleground now is AI instead of nuclear, but it's manifesting in a very similar arms race? And you can see the game-theoretic manifestation of that playing out in the decisions being made, both at the policy level and at the private industry level. And this is why we need another move in terms of how we're going to finance this new round of technology, and that's going to necessitate a move from fiat to digital currencies?
[00:51:51] Mike Harris: I think I alluded to this before, but you articulate it very nicely. I mean, fantastic. Thanks, that's exactly what is happening. And they have to press on the gas pedal. But they have to find a way that this increase in debt will not be inflationary, otherwise the gain is lost, right? How does CBDC versus fiat help? Well, I said before, the certificates will control consumption. Now, they will tell you, they have spoken about it, I don't make this up, that you will buy, say, three pounds of meat a week, maximum.
[00:52:49] Adam Butler: I see. So it's, the CBDC allows for a form of control. They can control the amount that people consume at a very granular level of different types of services and goods.
[00:53:04] Mike Harris: Exactly. It's not only control of the quantity of money and the interest rate. Now we are going to granular control. And of course, some people talk about social credit and things like this. I don't know. I don't know. But everything is possible, because when you open the gate, there are people willing to do these things, and I didn't say what gate, but there are people willing to do these things. It's necessary, because this new arms race is about knowledge, about AI. We know maybe a tiny fraction of the things of this reality, whatever it is. Whether it's mental, for the idealists, or it's real, for the materialists, we know very little. So this AGI will open new avenues, but we don't know what will emerge from this. So whoever gets there first, as you said before, will have the advantage.
It's like Columbus when he got to America first, he had the advantage. You know, he found a new continent. Whoever gets there first will have the advantage. So digital certificates would control consumption and regulate inflation. And this, you know, even without AGI, this stage is necessary at some point. Even without AGI.
[00:55:24] Adam Butler: That’s very, very helpful. That that clears up a lot.
[00:55:27] Mike Harris: Yeah. I have a chart of the number of transistors, the red line, and the blue is the public debt, and I made it linear scale, so it will be scary, okay? So we see that before QE, the debt was about, what, 12 trillion dollars. And of course, you know, you can normalize this by the GDP and stuff, but I wanted a scary chart, so I don't make any normalization.
[00:56:09] Adam Butler: It would be scary enough, even normalized by GDP. Yeah, yeah.
[00:56:13] Mike Harris: And we had about 23 billion transistors in a CPU, and look how, after QE, we are up to about 60 billion transistors per CPU. We have more than enough, right?
[00:56:37] Adam Butler: And so, paint a causal chain between QE and this incredible acceleration in the production of transistors
[00:56:50] Mike Harris: Don't set traps for me, please, Adam. I like you, you know, you're a good friend. I never talk about causal relationships. I always talk about associations, okay, and even associations.
[00:57:07] Adam Butler: Why would we think that there might be a link? I think there's the reason to think, and I think you…
[00:57:12] Mike Harris: Because technology was financed. And even associations are hard enough to prove. Causal relationships you cannot prove. That's a fundamental thing in science.
[00:57:26] Adam Butler: Yeah. So I didn't mean to trap you, but I just think it's a, I'm sympathetic to this narrative, right, where, and I think you alluded to it earlier where very low interest rates incentivizes speculation, right, and yeah, and technology investments, huge investments in technology are perhaps one of the most speculative types of investments, because you don't know whether that engineering bet is going to pay off. So you need a very low cost of capital. A low cost of capital disproportionately benefits technology over other industries, because technology is most closely associated with speculative capital.
[00:58:13] Mike Harris: Exactly, this is how I started the presentation, but we will articulate it nicer once more. And the association can be established by the fact that most of the money, if you look at the charts, especially after QE3, is when the stock value of these tech companies took off. Yeah. 2013 was the turning point for technology. Yes. QE3. Yeah. Which was probably completely unnecessary. Oh yeah, absolutely.
[00:59:06] Adam Butler: Yeah.
[00:59:06] Mike Harris: But keep in mind the new arms race, which is the AI race, the AGI race, and you have to compete. You have to make this technology. You have to protect it, make the chips over there, because the chips are what is driving this new technology. And this is why the NVIDIA CEO can sign autographs.
[00:59:53] Adam Butler: Yes I recall. Yeah.
[00:59:53] Mike Harris: Because, yeah, that's the most important company right now. Because, what, Musk bought like a hundred thousand of these GPUs for the data center or something like that. I don't know the exact number. So the association is clear. There is a small probability it's just a coincidence, but no, if we look at more parameters, it's not a coincidence, there is an association. And of course, the Nasdaq technology index, I was saying before, the Nasdaq technology index, not the Nasdaq index, took off after QE3.
And of course COVID, you know, and then the bear market, but now new highs, new highs because of all this. And I'm asking the question, is digital a natural progression? Was the departure from analog a mistake, or a natural progression? And we need to backtrack and look at history. Do you want to do that? Yes, I definitely do, please. The first singularity. AGI is a series of singularities. The first singularity was Aristotle's axioms of logic. You know, logic was always known to people, even thousands of years before Aristotle. What Aristotle did, and that's why he's famous, it's like a contractor: he took all these ideas and sat down and asked, what is the minimum set of principles I need to have a consistent logic? And he found four, well, three, because the fourth is rarely discussed. And I will tell you why. By the way, these are taken from a book I wrote on the subject many years ago, but I never published it because I was busy with the markets and data mining and I didn't care.
So whatever you see, like a picture embedded in the presentation, is from that book. It's my copyright. The law of identity: something is what it is. It implies the law of non-contradiction, through double negation. The law of excluded middle: A or not-A, one of the two is true. Now we have something interesting that is never discussed, and it's important for large language models: the law of rational inference, from what is known to what is unknown. If this principle is not true, forget about AGI.
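The three classical laws just listed can be sanity-checked mechanically over the two boolean truth values; this tiny sketch is my own illustration, not from the slides.

```python
def classical_laws_hold(a: bool) -> bool:
    """Check Aristotle's three classical laws for one truth value."""
    identity = (a == a)                    # law of identity: A is A
    non_contradiction = not (a and not a)  # law of non-contradiction
    excluded_middle = (a or not a)         # law of excluded middle
    return identity and non_contradiction and excluded_middle

# Both classical truth values satisfy all three laws.
assert all(classical_laws_hold(a) for a in (True, False))
```

Non-classical logics mentioned later in the talk relax exactly these assumptions, e.g. intuitionistic logic drops excluded middle.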
This principle is from Aristotle, who was, you know, a mix of a realist and an empiricist. No one knows what he was exactly, and he was the teacher of Alexander the Great and many other things, but he was a rich man, together with that other rich man, Plato. They were making a lot of money. A lot of money, and I will expand on that. The law of rational inference, from what is known to what is unknown, has been disputed, as have all the axioms, in other systems of logic.
For instance, some systems don't adhere to the law of identity. Others don't accept the law of excluded middle. But this categorical logic is how the human brain works. The idea is, can we go from what is known to what is unknown? Plato, his, what do you call it, his business…
No, not business, adversary. His adversary, you know, his competitor. Plato was his competitor. Plato said, no, you cannot go from what is known to what is unknown. Because Socrates, who was his teacher, said, and it's written in the Meno, in the discussion: what we know, we know, so inquiry is not required; and what we don't know, we don't know what it is, so there can be no inquiry. So accumulating new knowledge is impossible. This is the knowledge paradox. Right. Very famous. Look it up, for the people listening. It's in Plato's Meno. M-E-N-O.
[01:05:50] Adam Butler: We should also make the connection here between the law of rational inference, and the fact that large language models, transformer tech, what is happening is inference, and it is called inference in every technical document as well.
[01:06:10] Mike Harris: Yeah, inference. Yeah, and rules of inference they use. Yes, so the student asked Socrates, so how do we learn? From the soul, he said. What is the soul? Never answered. He took the fee, and Plato was the richest Athenian. You know, Plato was the richest. In today's terms, Plato is worth more than Bezos or Musk, from the money he had collected from people to teach them dialectics. So Plato said no. Aristotle said, yes, we can go from what is known to what is unknown with logic. Why? Because he was partly anti-realist. Now, anti-realist doesn't mean you deny reality. Anti-realist means that truth is due to human creativity and what we can prove, based on logic.
That's what anti-realist means. On the other hand, Plato was an idealist, and in that movement, everything is mental, and the only omniscient being is some kind of god. But, caveat, I'm not a philosopher, okay? I'm just trying to understand this in the context of technology. Now, many, many years after Aristotle came Leibniz. In the meantime, the priests of knowledge, the church, suppressed everything. They took Aristotle's logic and used it in scholastic philosophy to prove theological syllogisms and things like that.
But it was suppressed, and knowledge inquiry stayed behind, because you can't handle the truth, I mean, as in the second slide, according to them. So what did Leibniz do? One of the most important philosophers in the history of the world, he did many things, just like Aristotle. One thing Leibniz studied was the relation of mathematics and symbolic logic.
Imagine, this is what AI is: mathematics. In 1666, a nice number, in his dissertation, he proposed a system by means of which all principles of reasoning could be reduced to symbolic logic and constitute the mathematics of thought. He also talked about making an artificial intelligence machine to replace lawyers, because he didn't like them. Amen. Yeah, and he also talked about many other things. He described a complete binary system in 1679. I mean, one man. Of course, he used knowledge accumulated by Aristotelian logic, but his contribution was that he went from what was known to the unknown. I mean, that was a real influence. So, about 150 years passed and people were working these ideas in their minds. In the meantime it was a very dangerous society. You could go through the Inquisition.
[01:10:46] Adam Butler: Yes.
[01:10:47] Mike Harris: Like many people went through the Inquisition. Many Jewish people, for instance, went through the Inquisition in Europe, and many others, you know, of other religions. So you had to be careful, because they could knock on your door and take you to a cell for the rest of your life. So people took it slowly, slowly, until a genius who never formally studied mathematics, George Boole, at the age of 32, in 1847, in his work The Mathematical Analysis of Logic, finally showed the relation of mathematics and logic and proposed the Boolean algebra, which is unchanged since. Do we have time? Yes. Yeah. And you know, it's a pity, and I'm saying this because this man, George Boole, died because, while he was on his way to give a presentation about his Boolean algebra, he got caught in a thunderstorm and his clothes were soaking wet.
But he gave the presentation, and he went home and he had a cold, and his wife was a student of, what do you call it, homeopathy. Yes, in homeopathy, they think the agent can be used to defeat the agent. So she started throwing cold water on him. And he died of pneumonia. I mean, if this man had lived like 20 more years, AGI would probably have been here decades, a hundred years earlier. Yeah, yeah. Boole, George Boole, genius.
And of course, the discovery of the transistor. Now, as soon as George Boole presented his Boolean algebra, a race started. Everybody understood that something big was coming. And they started building electromechanical computers with relays. I mean, there are many, many examples, but everybody was looking for that electronic element that could be used as a switch, an amplifier, to switch between zero and one. False and true. Zero and one. Or the other way, depending on how you define it. So the race for the discovery of the transistor started. Because the electromechanical machines didn't have a good way, you remember, I don't know if you lived through the punch-card machines for the code.
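To make the Boolean-algebra point concrete: one reliable two-state switching element is enough to compute anything, because every Boolean function can be built from a single gate such as NAND. This is a sketch of my own, not material from the deck.

```python
# Build NOT, AND, OR from NAND alone -- the kind of universality
# that made a cheap, controllable electronic switch so valuable.

def nand(a: int, b: int) -> int:
    """The one primitive gate; a transistor pair implements this."""
    return 0 if (a and b) else 1

def not_(a: int) -> int:
    return nand(a, a)

def and_(a: int, b: int) -> int:
    return not_(nand(a, b))

def or_(a: int, b: int) -> int:
    return nand(not_(a), not_(b))

# Verify against Python's own bitwise operators over all inputs.
for a in (0, 1):
    assert not_(a) == 1 - a
    for b in (0, 1):
        assert and_(a, b) == (a & b)
        assert or_(a, b) == (a | b)
```

The same construction works with NOR; relays could do this too, which is why the bottleneck was speed and reliability, not logic.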
[01:14:11] Adam Butler: I didn't, but of course I know what you're referring to, yeah.
[01:14:14] Mike Harris: You know, it was a mess. I mean, you couldn't read, you couldn't scan documents. Everything is possible today because of OCR, optical character recognition, you know. So the race for the discovery of the transistor. And, surprisingly, the Russians were first into the race, with a guy named Oleg Losev. Now, Oleg Losev tried to develop the zincite crystal diode.
Should I accelerate, or do you have time? Keep your own pace. Yeah, okay, thank you. I appreciate it. Now, a diode is not exactly a transistor, but a diode is a PN junction, and if you put another junction, it's PNP. You can get a transistor, but the state changes are uncontrollable.
But of course, you don't have control with a gate, like the transistor, to go from zero to one. But he was one of the first to go. And in the siege of Leningrad, he died, he and his whole team, while the Germans were already working on the transistor, okay? The Germans were already working on the transistor. There was a guy, Julius Lilienfeld, Austro-Hungarian, who migrated to the United States, with a copper sulfide semiconductor.
And the Germans, Oskar Heil, and Hilsch and Pohl with potassium bromide crystals. But of course, the winners of the Second World War got the patent. You know, Bardeen, Brattain, and Shockley, who started Silicon Valley. And then, the same year, the Germans tried to patent the transistron, not transistor, the transistron.
[01:16:47] Adam Butler: And at this point they were, they'd settled on silicon as a substrate.
[01:16:52] Mike Harris: Yeah. Yeah. Well, they started with germanium, then silicon. Yeah. Yeah. They started with potassium…, they went to silicon.
[01:17:04] Adam Butler: Yeah. So was that in 1948 or 1964 that they settled on silicon?
[01:17:09] Mike Harris: ‘64. It was the most transistor. They settled. Okay. Now, now it's …. It was Atala and I think from Egypt origin, and Kang from Korea. And someone was telling me once that the transistor was invented by someone, a physicist who had, you know, and that person was credible, was working for a big corporation who was working for a, he was a physicist, but he didn't work as a physicist. He had a daily. In Germany, and, you know, probably a Jewish guy.
And, you know, he gave the invention to other people. So that was a race. As soon as the transistor was discovered, this was the start of the new life, the digital life. The development of the IC in ‘58, Jack Kilby, on a slice of germanium, at Texas Instruments. By the way, I mention all this because it's relevant to what we are talking about. It was the birth of digital life. You know, that's when it started, in ‘58. And of course, then we have digital computers, operating systems, large language models. We are talking about transhumanism, you know, Musk and implanting chips. And the last thing I'm going to talk about is human extinction.
[01:18:51] Adam Butler: Mildly controversial, but yes.
[01:18:54] Mike Harris: Controversial, yes. But, you know, I'm trying to separate my feelings from what I see as the future. Yes, and I will explain why I think there will be an extinction.
Now, human life is based on carbon and oxygen. It's fragile to viruses, for example. It pollutes: CO2, methane, nitrogen oxides. It wastes: plastics, heavy metals, drugs. Digital life, I call it life, I call it life, is based on silicon and nitrogen. It's fragile to electromagnetic pulses. It produces CO2, nitrogen trifluoride, sulfur hexafluoride. It produces heavy metals from the production of the semiconductors, acids, and pollutes the water. Some people have extreme numbers; as you know, we discussed the claim that about 30 percent of the greenhouse gases are due to semiconductor manufacturing. Other people claim very low numbers, but there is pollution.
And they tell us, of course, that this is the danger. Look how dangerous this guy is. I mean, it's amazing. This is, this guy is a danger to humanity. And this is good for you, right? This is dangerous, the environment, and this is the good one.
[01:20:47] Adam Butler: You know, I personally am a big fan of both, but maybe I could be…
[01:20:50] Mike Harris: I am too. I am too, but there is a fight for resources, this guy. Okay. Good question. Good, good point. This guy provides nutrition to humans.
[01:21:04] Adam Butler: Right. Just for those who are listening and not watching, we're referencing a picture of a cow.
[01:21:09] Mike Harris: And humans multiply because of that. So this guy is part of a system that is competing for resources with this guy, right? Because people need water, and this guy needs water too, for manufacturing. And this guy needs money, and this guy needs money. So there is, yeah, I prefer to work with the two. Are we going to be able to have a world in harmony with these two? What do you think? Let me ask you a question.
[01:21:59] Adam Butler: Yeah, I mean, I think effectively, I've always thought about water scarcity as being energy scarcity, because with enough energy, you can create water from the oceans, right, and so really in the end, they're sort of both competing for that final unit of scarce resource, which is energy.
[01:22:23] Mike Harris: Energy, right. I don't know. I don't know. I don't know. Let's hope. So to understand AI, we first need to understand, besides the historical progression, we talked about the singularities, philosophy: the epistemic theories of truth. Idealism: all things are immaterial. Materialism: all things are material.
And then we have the theories of how we get knowledge. Rationalism says that reason is the source of knowledge. Empiricism, that knowledge is gained through observation. And realism, that certain phenomena exist independent of our thoughts. Realism is not good for AGI. Anti-realism says that there is no objective reality, so truths are constructs of human creativity, based on logic. But both realism and anti-realism are extreme, like idealism and materialism, and all the movements in philosophy are based on idealism and materialism as foundations.
Like the people at CERN, they are looking for particles. They are materialists, essentially, and then there are the idealists who tell them, you are not finding anything.
[01:24:16] Adam Butler: What are, so is it fair to say that that empiricism is a branch of materialism?
[01:24:21] Mike Harris: No. Okay, no. No, there are imperious empirics, like Barclay who were idealists, okay, Barclay, the British philosopher. Locke, the other British philosopher, was materialist. I mean, this mix in some very convoluted ways and what they have done is they have created tremendous confusion to humans knowledge, millions of tenure credits, millions of tenure credits. And nothing has come up, come out of this, nothing has come out of this useful.
I'm a moderate anti-realist. What is a moderate anti-realist? I don't care whether all things are immaterial. I don't care whether all things are material. One thing I believe is that all truths are knowable. Now, this is problematic. If you say all truths are knowable, which sounds intuitive, why can't a truth be knowable? I mean, if you run across a truth, you can see it's knowable, whether with an instrument or something else. It's problematic, and you will see immediately why after we look at the AGI requirements. One of the requirements of AGI is that all truths are knowable. If all truths are not knowable, it's going to hit a plateau. It's going to go to a point and stop, and just stay there.
AI needs knowledge and reasoning. Knowledge is the collection of facts about the world. Now we have the data centers collecting every possible piece of information. My cell phone now knows I'm talking to you. So, there are no secrets, nothing else. Reasoning is combining knowledge and rules.
Large language models started by working out solutions by analogy from existing knowledge. Can they reason, to get to AGI? I have a reference here. People have started training these models to do logical reasoning, and this year, this is going exponential. They are learning how to reason, and they have started with simple inference rules, so that we can go from what is known to what is unknown.
Well, let's see some examples, because many people are not aware of the principles of logic. There is deduction: Hans came out of that house. All the residents of that house are German. Hans is German. Period. This is 100%. Now, there are some issues with LLMs doing deduction, but I think they will be solved pretty soon. There are also issues with LLMs doing induction, which goes from the particular to the general: we know Hans is German and we know Hans came out of that house, and we make a rule that all the residents of that house are German, because every resident we see, like Hans, turns out to be German. We can say with high probability, going from a weak inference to a stronger inference, that all the residents of that house are German.
Of course, David Hume spoke about the problem of induction, and you know, the LLMs are doing induction, but they have what's called the problem of abstraction with induction. I think it will be solved.
You know, as in deduction, there is the problem of ambiguity and context, but all these things are workable, with training and some rules. Now, there is a third, not so well known, mode of inference, called abduction. Abduction is a hypothesis. We know the rule, that all residents of that house are German, and we know the conclusion, that Hans is German, and we would like to abduct the fact that he came out of that house. Why is it important? For instance, medicine. Doctors try to abduct when screening for and determining diseases. What disease a patient has is an abduction from the facts, because there are so many different diseases the same symptoms can be linked to. Yes. And the question comes, can we teach LLMs logical reasoning? They are doing it already. Absolutely. They are doing it already. I mean, it's a matter of time. They started with simple logical rules like modus ponens, and now it's going to go to complicated ones.
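The three inference modes discussed here can be rendered as a toy program on the Hans example. All the data and function names are illustrative, my own sketch rather than anything from the talk.

```python
# Deduction, induction, and abduction on the "Hans" example.

residents = {"Hans", "Greta", "Karl"}          # seen coming out of that house
nationality = {"Hans": "German", "Greta": "German", "Karl": "German"}

def deduce(person: str):
    """Deduction: rule + case -> certain conclusion.
    Rule: all residents of that house are German."""
    return "German" if person in residents else None

def induce():
    """Induction: repeated observations -> a general, merely probable rule."""
    if all(nationality[p] == "German" for p in residents):
        return "all residents of that house are German"  # probable, not certain
    return None

def abduce(person: str):
    """Abduction: rule + conclusion -> a plausible explanation of the case.
    Knowing the rule and that this person is German, hypothesize that
    they came out of that house (one explanation among possibly many)."""
    if nationality.get(person) == "German":
        return f"{person} came out of that house"
    return None

assert deduce("Hans") == "German"
assert induce() == "all residents of that house are German"
assert abduce("Hans") == "Hans came out of that house"
```

Only deduction is truth-preserving; induction and abduction are the probabilistic modes, which is why the medical-diagnosis example fits abduction.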
It's going to go to my favorite logical rule. The destructive dilemma, you know, that was my favorite in school because in school I took logic, and my friends took tennis as an elective, and they all made beautiful friends, and I was there with some Chinese and some Indians, and in the logic class.
So can LLMs do abduction efficiently? I claim it doesn't matter. It doesn't matter. Deduction, yes; induction, you know, to fit a function to data, is a mathematical problem; abduction, maybe not. Why? All truths will be known in the era of big data. That's Fitch's paradox. I said I'm an anti-realist. So Fitch's paradox is about anti-realism.
But before we go into that, let's talk about some rules of inference the LLMs are using. Modus ponens: if it's raining, then it's cloudy; P implies Q. It's raining. Conclusion: it's cloudy. You know, it takes a week now to train computers to understand this. Modus tollens: P implies Q, not Q, therefore not P. It's not cloudy, therefore it's not raining. Hypothetical syllogism; I have here some symbols, this symbol means equivalence, okay, and that symbol means or; transposition. Just some examples. There are many, many rules of inference. Like forward chaining in the large language models: you start with data, with facts, and you go toward a goal, labeled true or false, using modus ponens, rule number one, right? The basic rule. I mean, this research is going to go so fast, it's going to blow people's minds.
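Forward chaining as described, repeatedly applying modus ponens to known facts until nothing new can be derived, can be sketched in a few lines. The weather propositions mirror the talk's examples; the third proposition is my own addition to show chaining.

```python
# Minimal forward chaining using only modus ponens:
# from facts and "premise -> conclusion" rules, keep deriving
# until no new fact appears (a fixed point).

def forward_chain(facts: set, rules: list) -> set:
    """rules is a list of (premise, conclusion) pairs."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            if premise in derived and conclusion not in derived:
                derived.add(conclusion)   # one modus ponens step
                changed = True
    return derived

rules = [
    ("raining", "cloudy"),            # P -> Q
    ("cloudy", "low visibility"),     # Q -> R
]

# Chaining P -> Q and Q -> R recovers the hypothetical syllogism P -> R.
result = forward_chain({"raining"}, rules)
assert result == {"raining", "cloudy", "low visibility"}
```

Backward chaining works in the other direction, from a goal back toward supporting facts; production-rule systems and Prolog, mentioned later in the conversation, combine both styles.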
[01:34:15] Adam Butler: Agreed. We're already very, we're already making huge, huge leaps there. Yeah.
[01:34:20] Mike Harris: The issue, Adam, was not that in the nineties we couldn't do it; we didn't have the hardware. We had, not even a Pentium III, we had the 8086.
[01:34:36] Adam Butler: I remember I had a Tandy with that chip, yeah.
[01:34:39] Mike Harris: I mean, we had 256K RAM. Yep. 256K RAM. I mean, you couldn't do these things. We had the Prolog language. I played with Prolog, and I also played with Lisp, and I did some examples of translating French symbolic expressions with Lisp, but we didn't have the hardware. I mean, I was doing this on a Cyber mainframe, impossible to work with, right?
[01:35:19] Adam Butler: So you're starting, and we took, we chatted about this before we started recording, but I think you made that, the statement that expert systems, given enough compute, would have the same properties of intelligence as transformers or did I miss… ?
[01:35:43] Mike Harris: No, I didn't say that. Okay. I, no, I didn't say that.
[01:35:49] Adam Butler: So, you, there's no direct line from expert systems.
[01:35:53] Mike Harris: No, no. To modern AI, no, no. Because expert systems are limited by the information the experts provide, right? Yes. And sometimes the experts cannot formulate the knowledge in a suitable way to be implemented.
[01:36:18] Adam Butler: Oh, for sure, yeah. That's why I was confused because it seems like you can't span the full possibility space.
[01:36:25] Mike Harris: But the expert systems were, hey, we didn't waste all your money, boss. Here it is, a nice expert system. You can use it in the dishwasher to adjust the temperature so the plates won't break or vibrate, right? So we did something you can use commercially, but it's not AI. No, it's not AI. So, forward chaining. What is Fitch's paradox? And I will try to go fast, because we are already going for two hours. Fitch's paradox is a modal logic inference that was found in a paper by the logician Fitch, as a comment of an anonymous referee.
It was proved in 1956, and basically, to make a long story short, it says, in modal logic, this rhombus means possibly. Modal logic is about possibility and necessity, but it follows the standard deductive rules of inference. This is the rhombus, meaning possibly, and K means known. And the first symbol, the universal quantifier, means for all.
And this symbol is the deduction. It says that if all truths are knowable, then all truths are known. If all truths are knowable, then all truths are known. It has puzzled the dark circles of the philosophy of science. Dark. It has puzzled them. There has been a fierce debate. Some said this is a shame, because it collapses moderate anti-realism, which is what I believe in, that all truths are knowable, into naive idealism, that we are omniscient. Idealism is about some supreme being that knows everything, and the world is immaterial. So, if all truths are knowable, then we know all the truths. If all truths are known, then we are omniscient.
And they say, we give you an example: because some truths cannot be known, not all truths are knowable. And give me an example. Oh, we don't know how many times Alexander the Great sneezed in his life. It's unknown. So if there is an unknown truth, then not all truths are knowable.
So it's an unsound deduction. It's valid, it's valid, I have it here from Wikipedia, but most people are not familiar with this. It's a valid deduction, but unsound, because … contradicts.
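The derivation being described can be written out explicitly; this is the standard textbook rendering of Fitch's knowability paradox, using the symbols as glossed in the conversation (not the slide itself).

```latex
% Knowability principle: every truth is knowable.
% \Diamond = possibly, K = it is known that.
\forall p \,\bigl(p \rightarrow \Diamond K p\bigr) \;\vdash\; \forall p \,\bigl(p \rightarrow K p\bigr)
% Sketch of the proof:
%   Suppose some truth is unknown: p \wedge \neg K p.
%   By the knowability principle, \Diamond K (p \wedge \neg K p).
%   But K(p \wedge \neg K p) entails K p \wedge K \neg K p,
%   hence K p \wedge \neg K p -- a contradiction,
%   so \neg \Diamond K (p \wedge \neg K p).
%   Therefore no truth is unknown: all truths are known.
```

The "Alexander the Great" objection in the conversation attacks the premise, not the validity of this derivation, which is why the speaker calls it valid but (arguably) unsound.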
[01:40:25] Adam Butler: Well, yeah, I think it's difficult to conceive of how that can be true intuitively. Like before there were humans, what was known by humans, does it matter that it was known by humans? It is known by all life elsewhere. Does it need to be life? Does it, is it just a computational propagation of the properties of the universe like Steve Wolfram describes?
[01:41:00] Mike Harris: Yeah, maybe it is. But what I'm saying is, even if it was unknowable in the times of Alexander the Great, now his cell phone would know how many times he sneezed.
You see, they are selling rings now, in China, that measure everything. You put it on, and they know everything: your pulse, whether you sneeze, cough, everything. So in the era of big data, and that's my point, everything is known. Everything. I mean, everything will be measured and known. And we can go from what we know to what we don't know.
[01:42:00] Adam Butler: So the word know is doing a lot of heavy lifting here, right? Because, for example, with enough data, we're going to be able to presumably map genotypes to phenotypes. So if you give a person, if we have a full, the full genetic code for a life form, we can extrapolate to know exactly how that life form will grow, and what it will become, and how intelligent it will be, and how tall and how beautiful, and what color, you know, that's something that we may know in the future, but really we can't ever know it. We know it by inference or by association. We may not know it causally, right? Like, we can say to some very large level of confidence that A will lead to B, but we can't know it until it manifests.
[01:43:01] Mike Harris: Yeah. Well, the idea is the statement that all truths are knowable, and there is an extended version, that they are already known right now, and this cannot be refuted. I mean, no one has found a way to refute this logical proof. So I'm suspicious, yes, that we came to this world with all the knowledge, and it started being chopped up and taken away by those who wanted to use it for their own benefit, until we got to the medieval years, when only the priests had the knowledge, because you can't handle the truth. And now, slowly, after a lot of effort, we are going back, creating this technology that will give us back the knowledge we lost.
And this is AGI. But it comes with a price. Because for me, there is a lot of obscurantism. I can see it everywhere. I can see it in the philosophy of science. I can see it in education. Of course, I'm not talking about the church; the church is based on obscurantism, okay? I'm not saying these things are not useful, the church and everything. They may be useful, but look at what they are founded on.
So, AGI is a game changer. I think we can learn, let's not go to the extreme and say everything, but we will find out about so many things, like how to develop new propulsion technologies for space travel, how to develop new medicine, how to cure some diseases. No, it's not good for the people who benefit from these diseases. We will find out many things, but the problem is that for this technology to go exponential, it takes a lot of money.
[01:46:10] Adam Butler: Okay. But I actually want to go back, because I think it's really important. It's coalescing for me, but it hasn't coalesced immediately. So I think the idea here, that you're proposing, is that there has always been a sufficient wealth of knowledge available to humanity. Much of it was lost outright, you know, the burning of the Library of Alexandria, but much was also lost to humans who did not have the right incentives to share it, propagate it, explore it, et cetera, right? So this knowledge has always been available, but it has been captured by individuals, organizations, et cetera, who were themselves captured by perverse incentives: perverse insofar as they were incentivized to withhold information that might otherwise have benefited humanity, because, for example, and I'm sure there were other reasons, they wanted to capture the value of that information privately, rather than share it as a public good for humanity.
[01:47:43] Mike Harris: Adam, one reason the secret societies were formed was to withhold information. Right. Excuse me.
[01:47:55] Adam Butler: Yes, and you know, religious organizations, families, businesses, so I just wanted to connect some dots there in case some others may not have connected them all. And so now we're up to date.
[01:48:13] Mike Harris: I think you did a good job. I do not propose anything, okay? I'm trying to understand what is happening in the context of the information I have, and, you know, how I see these different philosophical movements, and what makes sense in epistemology. And I believe, I smell a rat. You know, I think in these medieval years, you know, these, how do they call them, Dark Ages, yeah, I think something happened.
There was useful technology, there was useful knowledge, and something happened; much of it is gone. But bits and pieces have found their way into other things. And I think AGI may be able to uncover them and link them together, because a human mind cannot do it.
[01:49:24] Adam Butler: I agree. I mean, one of the places this line of reasoning, or let's say this exploration, this curiosity, leads me is the fact that so much of the world's knowledge resources are in private hands. And, you know, one of the things that I have expressed curiosity about, maybe frustration at times, is that if this private knowledge were released and allowed to be used for training these large intelligences, our ability to bring forth solutions to many of the causes of human, and just general life-form, misery could advance very rapidly, and that the incentives are misaligned so that we don't have an opportunity to do that.
[01:50:40] Mike Harris: I propose you and I take a trip to the Vatican.
[01:50:44] Adam Butler: Right, yeah, agreed. So much knowledge was chopped off, yeah, because it didn't align with their objectives.
[01:50:57] Mike Harris: Yeah, well, it's in the libraries. So many books. So, I'm not saying they knew everything in the past. But there was so much knowledge that was lost. Like how I started the conversation, with the Antikythera mechanism. Yes. It took like 1,600, maybe 2,000 years for Kepler to find the laws of planetary motion, you know, how much time the planets take to cover their orbits and everything. I mean, the burning of the Library of Alexandria took 2,000 years away from the knowledge of people.
Only in that area. Imagine what else happened. But AGI is a game changer, at a high price. I think, on this scale, and this is personal, you know, how I see it, an estimate, maybe the time axis here is years, I don't know, we are near where the exponential starts on the linear scale. We are someplace here. AGI, maybe in five years, the way it's going. Yeah, it's going to be there, because they will be able to put together the data centers and the inference and the models. And I wish, I wish, I wish I were 25 years old, working on this, but now it's more for younger people, like my nephew, who is 20 and studying computer science, and they are ready to work on this. They have the energy and everything to work on this. And when he programs, I can't even see his hands. He codes in Python, it goes so fast. I mean, I say, what is this? And he's laughing.
[01:53:30] Adam Butler: Yeah.
[01:53:31] Mike Harris: And probably, you know, after AGI, whatever it takes, 5, 10, 20 years, there will be a window of opportunity for humanity to take advantage of this technology to avoid extinction, or, as Musk says, become interplanetary to avoid extinction, because there will be new technologies; this powerful brain will be able to develop new propulsion technologies. Because to become interplanetary, this is the problem of mass. We cannot do it with rockets, yes, with solid or liquid fuel. I mean, it's impossible. About 95 percent of a rocket's mass at launch is fuel, just to leave Earth. We cannot do it with that.
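[Editor's note: Mike's "95 percent" figure can be sanity-checked with the standard Tsiolkovsky rocket equation. Neither speaker cites it, and the Δv and exhaust-velocity numbers below are typical textbook values assumed here for illustration only.]

```latex
% Tsiolkovsky rocket equation: achievable velocity change \Delta v
% for initial mass m_0, final (dry) mass m_f, exhaust velocity v_e.
\Delta v = v_e \ln\frac{m_0}{m_f}
\qquad\Longrightarrow\qquad
\frac{m_0}{m_f} = e^{\Delta v / v_e}

% Assumed illustrative values: \Delta v \approx 9.4\ \mathrm{km/s}
% (low Earth orbit, including gravity and drag losses) and
% v_e \approx 3.5\ \mathrm{km/s} (a high-performing chemical engine):
\frac{m_0}{m_f} \approx e^{9.4/3.5} \approx 14.7,
\qquad
\text{propellant fraction} \;=\; 1 - \frac{m_f}{m_0} \;\approx\; 93\%
```

The exponential dependence on Δv is why chemical rockets leaving Earth are almost entirely fuel, which is the constraint Mike is pointing at.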
[01:54:43] Adam Butler: No, we need to do it with gravity or with something else.
[01:54:46] Mike Harris: Something that…
[01:54:49] Adam Butler: … bends space/time.
[01:54:50] Mike Harris: Yeah. We may not even know what.
[01:54:53] Adam Butler: Yeah.
[01:54:54] Mike Harris: And eventually, even if humans become trans-humans, as Musk is doing with implants and everything, there is a limit. You see, I made this mistake when I wrote the book 25 years ago, and maybe it was good I didn't publish it, because I thought, like Musk thinks, that trans-humanism is the solution to avoid human extinction. No, trans-humanism will accelerate human extinction. Why? Because first you replace the hands, the eyes, the organs, and then what is left? The skin, the last thing, replaced by a better skin that will be like a mix of metal, malleable and everything.
And then there will be nothing human left, right? And the humans, what? They will return to the jungles. Initially, they would fight with EMP weapons until they are defeated, and then they would become like animals. So trans-humanism will not save us, except maybe in the beginning, in my view, unfortunately. So we can wrap this up.
[01:56:21] Adam Butler: But it's the same kind of problem as the grandfather's axe: the great grandfather gives his son an axe, the son replaces the handle, then his son replaces the axe head, and then he gives the axe to his son. And the question is, well, is it the same axe, right?
[01:56:52] Mike Harris: Well, Aristotle would say it has the soul of an axe.
[01:56:58] Adam Butler: Right. Can we say the same thing about trans-humans?
[01:57:03] Mike Harris: Yeah, it has the soul of it, but what is a soul anyway? You know, the idea is that these beings will be a thousand times more powerful and capable than humans, right, like you described with the evolution of the tools.
I mean, the practical issue, the idea would be the same. In an empiricist sense, it will be a secondary quality. The primary quality is how you look, and the secondary quality is the materials and stuff like this, where you came from. But it won't matter, because eventually it will be 100 percent machine. Yes. I mean, humans are machines too, but they are electrochemical. Yes.
[01:58:17] Adam Butler: So, it will no longer be propagating from conception through birth. We're going to be creating new humans mechanically instead of…
[01:58:32] Mike Harris: Same thing. In my book, I wrote that it's like the method of reproduction of a virus; it's different, but it will still be reproduction. So these machines will be very powerful. They already are very powerful. I mean, there is still no link between external perception and AI capabilities. Maybe they are slowing down the technology, maybe, but as soon as this is done… and the chips exist already, and the mechanical hearts, you know, and mechanical organs and everything. Mechanical eyes are the cameras, and the prostheses, the arms and everything. So, I don't see how, and that's why I say the present form of humans will not be part of these unlimited possibilities. Of course, there is the alternative: to return to analog life. You know this rotary phone?
[01:59:51] Adam Butler: Yes.
Analog Life
[01:59:52] Mike Harris: I found one, I kept it for the deck, and I showed it to my nephew, and he said, what is this? I said, can you dial a number? He said, leave me alone now. I mean, analog life. The only life compatible with human beings is the analog life. You know, copper wires and telephone, rotary telephone, and vacuum tubes.
[02:00:30] Adam Butler: There's a reason why Frank Herbert knew that he needed, as a literary device, a Butlerian Jihad, which disallowed intelligent machines. Because, of course, there could be no realistic or conceivable human future out to that point in time that looks anything like what he was proposing if we had intelligent machines.
[02:01:05] Mike Harris: Well, you know, this return to analog life could be forced by an extreme event, right? But even if it happens, humans will again begin to try to build that machine that will know everything, right? And this is AGI. So it's a cycle. And this is the binary numbers game.
That's why, you know, Leibniz discovered the binary numbers. There is this anecdote that he was frightened and stayed home for a week. He wouldn't go out, and they asked him why. He said, I discovered the secret of creation. And of course, also Descartes. Leibniz, German, and Descartes, a continental philosopher from France, are underrated, understudied, because the emphasis is on the British school: Newton, Locke, Berkeley. Although Berkeley was almost an enemy of Newton and destroyed his infinitesimals.
Excuse me. So those are underrated philosophers. I mean, just to give you an example, Descartes was the first philosopher who talked about the world being a matrix, that type of matrix.
[02:02:46] Adam Butler: Right, like a simulation type of thing.
[02:02:49] Mike Harris: Yes, simulation. He presents a logical argument that the only way for this world to exist is if God recreates the world at every instant: destroys it and then recreates it. So he essentially described digital technology, the matrix, everything.
But it's obscure, okay? Obscured at this moment. People are not taught this in schools. They are taught that there is a material world with an absolute reality, and everything else. So this is the idea, to summarize, before we finish, if you have any questions or anything else. I have many questions about my own presentation.
By the way, to summarize, I think we are getting close to AGI, by virtue of the fact that they have started training LLMs with inference rules, and pretty soon these models will be linked, through robots, with access to external reality. And they will have all the knowledge to maybe recreate themselves. Yes. The knowledge is already built in there. They know how to make chips. They know how to make mechanical parts. They know everything. They can access the information immediately, in seconds, split seconds, microseconds.
[02:04:41] Adam Butler: Yeah. I think that's what many people miss about robotics is, you know, in concert with AI. Robots are what allows AIs to go out and experience the world, and learn from the world, and build a world model through contact with reality.
[02:05:03] Mike Harris: Right?
[02:05:03] Adam Butler: Exactly. Yeah.
[02:05:04] Mike Harris: And this is also how consciousness develops, because the most probable theory of consciousness is that people think they are conscious because they think other people are conscious. So of course, conscious machines may have a different form of sentience, and things like that. It will be a different form. You know, people make the mistake of trying to completely equate the humans and the machines. But it will be a different world, and as Bill Joy said, the future does not need us. We are not going to be part of it.
[02:06:05] Adam Butler: Well, I mean, as you say, I've got an explosion of questions, and you said you've got questions about your own presentation. Well, it's that curiosity, I'm sure, that motivates a person to put together this kind of presentation. No one paid you to put this together. You constructed it from your own curiosity, in an effort to put this knowledge out into the world, for which I am very grateful. And sadly, we're going to have to do any follow-up in another session, I think, because you and I both probably have to get on to other things.
I did want to thank you for putting all the effort into this, for sharing, for being so open-minded. And I'll just say that, clearly, this is just one more externality that has been imposed on us by financial markets: enticing you to spend your time figuring out how to build trading strategies instead of publishing a book on these topics 20 years ago, or whenever that might have been on your mind. So I'd love to circle back to this. I did want to ask you, besides this, what else are you working on these days?
[02:07:36] Mike Harris: Apart from markets, of course. Yes. Yes. Well, during COVID, I worked on a novel. The theme is about my hero, who accidentally takes a trip to Mars. Accidentally, I mean, something happened. And his name is Noel, which is an anagram of Elon. So, during the trip, with some other people, they discuss things about trans-humanism, philosophy, economics, the things we were talking about here. And they go to Mars, and I'm talking about chance, because getting back to Earth was a streak of winnings; one loss and you are gone. There are always problems, and I'm trying to explain the problems that interplanetary travel has. And I'm looking for a literary agent.
I haven't done much on this because I'm busy with other things, and I'm designing a website where I will be expressing some ideas about AGI. I want to focus on the window of opportunity, because for me, that's what matters right now, right? The window of opportunity.
What our kids, our nephews, will do 10 years from now. Because I would be retired; you are still young and will be active, but we won't be active players in the technology development. What these people can do to make the human condition better, you know. I'm thinking of doing this website apart from the markets.
[02:09:53] Adam Butler: Well, a guy's got to make a living while he's, while he's….
[02:09:56] Mike Harris: Yeah, markets are where the money is. You know, who's going to pay you to talk about modus ponens? No one. I mean.
[02:10:09] Adam Butler: Well, I would certainly love to be part of that journey, to whatever extent you're interested in sharing. I would love to read your novel, if you'd like to discuss it or want any comments on it. I would be delighted to support you, if you have two minutes…
[02:10:26] Mike Harris: Writing a novel is very hard, because it's a violent departure from scientific writing to novel writing: you cannot use certain words, you cannot use passive voice, you cannot use so many adverbs, the "-wise" type of stuff. So for me… but guess who helped me? The language model. Exactly! AI helped me. AI helped me. Without AI, I wouldn't have been able to do it. Yes. You know, it scanned my text, and the AI told me, hey, look here, dummy. You go there and take that adverb out, because you don't need it. Yep. Or take this, that, those out, because you don't need them. Hey, you have passive voice here.
[02:11:28] Adam Butler: Yep. I mean, yeah, it's a phenomenal editor, great.
[02:11:33] Mike Harris: And I started with 90k words, 90,000 words, and the AI got it down to 65k. Oh, brilliant. Imagine, because it was too long. In novels, you don't want verbosity and extras; you know, you want the conversation, just what you want to say, lean and mean, right to the point. You don't want side narratives there.
[02:12:12] Adam Butler: Yeah, side quests and distractions from the primary thread.
[02:12:17] Mike Harris: Yeah, yeah, from the primary objective of the discussion. And it was a challenge, but initially I had time due to the lockdowns and things like that.
[02:12:31] Adam Butler: Yeah. Right. Well, that's exciting. I can't wait to see how that evolves for you, and I'd love to be part of your journey. Sadly, I do have to go. Yeah. And I'm sure you do too. Again, thank you so much for this. This has been extraordinary and I look forward to sharing it. And I look forward to circling back in the near future and addressing some of the many questions that spin out of this very naturally.
[02:12:58] Mike Harris: Thank you for having me. It's always a pleasure to talk to you. You know, you are one of my top three favorites in the markets; probably the top one. And I have learned a lot from you, from your quantitative work.
[02:13:18] Adam Butler: Thank you very much. That's very kind and the feeling is mutual, and I look forward to doing it again soon. Thank you. Thank you. Thank you.
[02:13:29] Rodrigo Gordillo: Sorry to interrupt, but I did want to take a quick second to remind our listeners that the team works really hard on these podcasts. We spend a lot of hours trying to get the right guests and we do a lot of prep work to make sure that we're asking the right questions. So if you do have a second, just do hit that Subscribe button, hit that Like button, and Share with friends, if you find what we're doing useful.