OK, so I'm a philosopher working originally in England, focusing on European philosophy, particularly in the German language initially. But then I focused on certain aspects of philosophy which relate to data and to what we call ontologies, which are attempts to describe data in consistent ways to make computers work better together. And as a result of that work, I gradually moved over into areas of computer science and data technology, always bringing my philosophical interests to bear. And now I have quite an important role in ontology work in the US government, the Defence Department, the Department of Homeland Security. I'm also part of a team which is trying to bring about ontological or data consistency in the Five Eyes, which are the five English-language intelligence communities. And I guess that's enough.
So yeah, I'm by training a physician, biochemist and mathematician, and I've been using AI theoretically and practically to make a living since 1998. So I've always been in the AI field, and mostly I've been using it in life sciences and medicine, since I'm a physician, but I've also used it for the insurance industry and other domains. So I have quite a good cross-industry understanding of AI usage and its possibilities and limitations. I've always been making a living off it. Barry and I met 15 years ago in the context of standardisation of medical data exchange, and then we started collaborating scientifically. And 10 years ago we started working on AI together. And now the second edition of our book, which you described as somewhat contentious, is just being published these days, which I think is also the reason why we're talking. And it shows that though the book is a bit debated, it is a success, because books of this nature usually don't get second editions. It's a very complicated, very scientific, very technical book that requires a lot of fine knowledge. So it's not a book for a mass audience, but I think it's very important that we have done it, and I'm glad that we got the opportunity to make a second edition. Just for clarity, when we talk about artificial intelligence,
what are we talking about? So we are talking about mathematical algorithms that are based on regularities that can be found in data, and these regularities can be exploited in various ways. There are explicit and implicit ways of exploiting these regularities, and that has two advantages. First of all, you can find new regular patterns: when you use this to look at galaxies, you can find certain aspects of galaxies that you cannot see with the bare eye, right? Or star systems — you can use such algorithms to analyse them. You can also use them to analyse chemical molecules, the similarity of chemical molecules. Many, many applications where it's used like a telescope is used, to enhance the human eye, right? It's a tool to find regularities that the human eye can't see because there are too many data points. The second application is automation, when you can exploit the regularities to automate certain aspects of human behaviour. And this is what AI really is: applied mathematics leveraging regularities that can be found in data. It has not evolved very much, and it has nothing to do with creating something that is akin to human or animal intelligence. I think it's worth pointing out that this is a new kind of mathematics. Originally, mathematics was performed by human beings using paper and pencil, and they would write down theorems and try and prove them. And what we have now is mathematical algorithms which are created by feeding huge amounts of data into a computer. So we create algorithms which we do not understand. The algorithms are incredibly long polynomial functions. When we issue a prompt to a large language model, the large language model will calculate the value of that function for that prompt and give an output. That's just a mathematical process, but the mathematics has been created by pushing all kinds of data in large amounts into the machine. But importantly, the machine cannot create the parameters of this equation by itself.
What there is, is an optimisation function that has many meta-parameters and parameters, which are used to create the final parameterisation of the model. And all this is created by humans, especially the algorithms that optimise the way the data are treated to finally come up with the solution. And it's always the following principle. Imagine you have millions of emails, and for each email you have the information whether it's a spam email or not. The algorithm is then used to recreate the relationship between the email content and the indication whether it's spam or not. This way you create a spam filter. And the algorithms that are needed to do this require a lot of computer memory, a lot of data and a lot of computational power.
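(To make the spam-filter picture concrete, here is a minimal sketch in Python. The speakers name no specific library or data, so scikit-learn and the toy emails are illustrative assumptions only:)

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB

    # Toy labelled data: for each email we know whether it is spam (1) or not (0).
    emails = ["win a free prize now", "meeting moved to 3 pm",
              "cheap pills online now", "quarterly report attached"]
    labels = [1, 0, 1, 0]

    # Fitting recreates the relationship between email content and the label.
    vectorizer = CountVectorizer()
    X = vectorizer.fit_transform(emails)      # word counts as features
    model = MultinomialNB().fit(X, labels)    # parameters chosen by optimisation

    # The fitted parameters now act as a spam filter on unseen text.
    print(model.predict(vectorizer.transform(["free pills now"])))  # -> [1]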
The idea for this algorithm was formulated around 1800 by Gauss and Legendre — and also a Polish mathematician whose name I keep forgetting, whom we cite in the book. And then it took 200 years for the machines to evolve so that these algorithms could really be computed at the scale that creates the models we have today.
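(The 1800-era algorithm of Gauss and Legendre is the method of least squares; a minimal sketch of the idea, with invented toy data, fitting parameters so that a function reproduces a regularity found in data:)

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.uniform(-1, 1, size=(200, 3))           # observed inputs
    w_true = np.array([2.0, -1.0, 0.5])             # hidden regularity
    y = X @ w_true + rng.normal(0, 0.1, size=200)   # noisy observations

    # Least squares (Gauss/Legendre, ca. 1800): choose w minimising ||Xw - y||^2.
    w_fit, *_ = np.linalg.lstsq(X, y, rcond=None)
    print(w_fit)  # close to w_true: the regularity recovered from the data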
So that's a big difference. And the big step that happened is that we now have a kind of implicit mathematics that arises from data under the supervision of humans, who create all the mathematical constraints that lead to the parameterisation. When you say machines will never rule the world, what exactly do you mean? So, rulers have to have certain kinds of intelligence. They have to know how to control people, how to make things happen, but they also have to want to make things happen. They're very ambitious. They have desires to control and so forth. And these wants filter down to all the people who work for them: they want to do certain things, they want to impress their bosses and so forth. Now, the big difference between a human being and a mathematical algorithm is that human beings can want things. They have a will, they intend things, they push themselves through in order to bring things about. Mathematical algorithms are not the kinds of things that can want. They don't have desires; it doesn't make sense. They get inputs and they create outputs through a mathematical process. Now, it's a very complicated mathematical process, and they can be programmed to emulate wanting — for instance, in
the chess-playing computer. The chess-playing computer wants to win, but only in a very weak sense: it has learned to emulate the sorts of patterns which human beings use when they want to win, but it doesn't want anything. Again, it's the human which tells the computer, by switching it on and programming it: you're going to follow these moves, because then you'll win. And so it's always a matter of emulating the human will. And they can do it for formal domains, closed domains like chess, very well. But when it comes to controlling a battalion in an army, or anything which has even some degree of complexity, then the computer will fail. So basically, to summarise it, or to put it in different words: we cannot make the computer have an ego. You need an ego and a will to power, right? Power means that you can realise more possibilities than others in a world of laws. That's the definition by Thomas Hobbes. And to achieve this, you need an ego — an ego that is a centre of your acts and that lets you plan, conceive and scheme and do all of this so that you can gain power. So we cannot give this. We cannot give consciousness to machines, we cannot give a will to machines, and we cannot give intelligence to machines. But all three are needed so that machines can rule. So the point is: machines will never be an ethical agent, but machines can be used to optimise rule — that has been so since IBM introduced the punch card system in the 1930s and the Nazis started using it for the concentration camps. So machines can be used to enhance rule and to make it more effective, but they are not acting themselves, and that's a big distinction that has to be made. But OK, so a counter-argument might be: can machines not rule accidentally?
So, ruling accidentally doesn't make sense. But what the machine could do is interfere by accident in some rule-based set of processes. Maybe, for instance, you have a rule-based set of processes which is initiated by a human being, and the machine just fails to work at a crucial point, so that that complex chain of commands breaks. The nuclear war example: when the machine misinterprets something and then nuclear war gets started, right? We have had these situations, I think, two times in the 1980s, where human actors had to stop such a wrong process, which came from a technical mistake of the machine. And that happens all the time. Now, during the Ukraine war, both parties are heavily using AI, and the AI is creating false signals. And then the humans have to interpret this and stop and not misreact, you know. So this can happen, but that doesn't mean the machines rule. It just means that the tooling is not doing what it's supposed to do. Where does this mass fear come from? Elon Musk has constantly posited the idea that AI is an
existential threat. So historically, there has been a movement since the 1990s saying that there will be the singularity, and this singularity movement says that there is a point at which the machines will overtake human intelligence — which presupposes that they also have consciousness, at least, right? And this is, I think, a postmodern idea that is now 30, 35 years old, and it has gained a lot of importance in the circles by which Elon Musk is influenced. So Nick Bostrom was one of the leading figures of the singularity movement; Ray Kurzweil, in the second wave — a real genius engineer — is also one of the inventors of this idea. And then there are Curtis Yarvin and Nick Land, who are theoreticians of the Dark Enlightenment, a theory adhered to by Elon Musk and Peter Thiel. And so they are now very, very culturally influential. It all belongs to this transhumanism movement that we are now faced with.
So there are various components here. One component is hype. Elon and his friends want to exaggerate the powers and the importance of AI because they need a lot of money in order to build their AI machines, which are incredibly costly in machines and energy and so forth. So they need money. Also, they want money in order to maintain their status, and for all the other reasons — even because they want to do good things with the money, I assume. But the more important feature is: they want to live forever. They really do believe in the idea of digital immortality. That would mean that you can build a computer which can simulate you, and then we would programme the computer to keep you alive — in this very weakened sense, since you're not actually a body anymore, you're just a pattern of data — to keep you alive forever, without any diseases and with wonderfully attractive partners and all the other good things that go with digital immortality. They believe in all of that. And if they believe in all of that, then they must believe that it's possible, which means that they must believe that computers can think in ways which exceed by far the capabilities of human beings. So this hype has that function; that's the ideological and almost religious part. But there's also a real economic part of this, which I want to highlight once more. I think that the fear of AI taking over is driving regulation that creates barriers to market entry for newcomers to the field. And when you have a monopoly or oligopoly, you always want the
state to regulate. And that has been the case even in the Middle Ages, for weapons manufacturing, right? Weapons manufacturers wanted to prevent others from entering the market, so they tried to get the state to make laws that guaranteed the monopoly. It's the same here. All the barriers that are now erected by regulators help the monopolies, and the hype is used to erect those barriers, right? This is very classical. That's the rational part, from an economic perspective, that has to be understood. And so the hype is created like the COVID hype was created, because it's a business. So it falls under that category of an invisible bogeyman. Yeah, a nice bogeyman, an attractive bogeyman who'll make you live forever — like Zhaskov or climate change, right? These are also bogeymen that drive businesses. Yeah. And of course, because it's invisible, you as a layman can't really fight back. So you kind of have to trust the authorities or the powerful actors. Yes, this is the recipe that the Grand Inquisitor in Dostoevsky's Brothers Karamazov describes, right? If you create a fear spectre that a normal person cannot, you know, question, then it's a very effective way of ruling.
Yeah. That's why I think the book is also controversial: because it puts into question a huge fear hype of our time, which is the domination of humankind by AI. And also another one — a weaker one that is even more scary to most people — which is the replacement of all normal jobs. Yuval Harari, Klaus Schwab, all these guys say all the time — also even Trump, Putin, all the leaders of the northern hemisphere say — that AI is going to replace hundreds of millions of jobs. That's absolutely wrong. The rationalisation potential of AI is rather moderate, and we can go into details why that's the case. But that's another hype that is driving a lot of politics: this fear of AI replacing human workers. And interestingly, all the, you know, alternative media — everybody buys into this. Also the critics of these oligarchic structures believe this, right? And they also say: oh yes, it's so dangerous, so we will not have jobs anymore. And the tariffs — you know, once the tariffs kick in and America starts to reindustrialize, all the reindustrialization will be done with robots. It's absolute nonsense. The boss of BlackRock just said it yesterday on Twitter: he said that the new jobs created under the tariff regime will all be taken by robots, which is complete nonsense. But you see that this is also a very important part of this narrative. What is intelligence? So, shall I have a go at that? The most important parts
are as follows. An intelligent being can react spontaneously to new kinds of phenomena. That's the first thing. And they can do this without being trained. And so already for that reason, computers can't be intelligent. And this definition — it's not invented by Jobst and myself; it's been the standard psychological definition for 100 years. So it's spontaneous behaviour which has not been trained, which brings about a relative improvement in the situation of the behaving subject, on a basis which is also novel — that seems the most important part. So you encounter a situation of a type which you've never encountered before. Because you're intelligent, you can react to that situation in a way which is appropriate to your well-being, even though you've not been trained. Machines can't do that, because they rest on training. That training always gives them the capacity to respond to situations for which they have been trained, but they can't respond to situations for which they have not been trained. We can do that. Human beings can do that, because they're intelligent.
So, just to give one simple example: when you get your driver's licence, when you learn to drive, you're not going to go through all possible situations that can arise while driving. But when you then start driving on your own, without your teacher, once you have the licence, you get into such situations. And while of course accidents happen — more accidents happen to beginners — many, many beginners can avoid accidents even at the beginning of their driving experience, because they're intelligent. Machines can't do that, right? Machines can only react to situations they have been configured for. They always talk about training, but in reality it's a configuration: they have been configured, either implicitly or explicitly, to react to certain situations. And when a situation arises for which they have not been configured, there's a very high likelihood that they don't react in the right way. And that's why all the self-driving cars that are, for example, managed by Waymo in California are actually drones which have pilots. And each car has its own pilot. Because the likelihood of mistakes is so high, you cannot have one pilot for five cars; you need one pilot for one car.
If you just look around you, everything is controlled by machines. Is that not true? I mean, you'll get up in the morning and your phone starts telling you when you need to leave and when you need to eat and all sorts of things now. So you told your phone that you wanted it to wake you up the next day. The phone isn't acting intelligently; it's doing what it's told. Computers are quite good at doing what they're told. And you gave that one example. Maybe, because you're sitting there with your headphones nine hours out of ten, you feel as if you are overwhelmed by machines, but actually you are pressing all the buttons, you're making all the decisions. The machines are providing tools, and they're no more oppressive tools than any other kinds of tools, unless we make them oppressive. So somebody may take over your headphones, but it would be a human being who does that.
It wouldn't be a machine. So, there is a classic dystopia by Zamyatin called 'We' — or 'Us', I don't know the English title. It's a Russian novel of the 1920s. It's actually a precursor to 1984 and Brave New World, and both Huxley and Orwell knew it. I just read it; it's worth reading, because it's the first really modern dystopian novel. And in this novel, everything is controlled by a small collective that controls the entire society. And they're using machines to control humans, but they are programming the machines. So when you imagine that we are moving into a technocratic, post-democratic age, you can use machines to control humans. You could, for example, centrally programme the wake-up time, centrally programme the advice on what and how much to eat, and so on. So as a tool of power, digitisation is very, very dangerous and can have very big consequences. Because imagine the Soviet Union operating not with paper and mail as they did — and telephone and telefax, they had those — but with a completely digital infrastructure. But the key point to make is that digitisation is the problem here, not AI, right? AI provides a tiny proportion of the control capabilities of such a digital net — a 'control net', as some dissidents have called it. But most of it is just the infrastructure and humans running the infrastructure. It's the ability to locate everyone via this device at any time, even to listen in to their conversations and emails. But all of this has to be done by humans, because machines don't understand what we write and say, nor the movement patterns we produce. They can only aggregate them, but they can't understand them. So the real threat to freedom is digitisation, not AI.
And many people don't understand this, because they confound digitisation and AI. But AI is something very special. It's a very tiny part of what is done with our data. Most of what is done with our data is that they get stored in big databases, and then they are searched and exploited to do things. AI is only a tiny part. In war, AI is actually more important than in civilian life, and that has the following reason. There are millions of sensors deployed in a battle, in an armed conflict — from satellites down to sensors buried in the soil. There are many different types of sensors that can measure the whole electromagnetic spectrum, but also vibrations and sounds. It's a very, very complicated array of technology that is gathering signals, and then this raw data has to be turned into signals and patterns that can be interpreted by human beings. So in the command and control centre of an army, all these data are gathered and looked at to decide what to do next. And so for warfare, especially for reconnaissance, AI is completely indispensable, and it's the key tool now for doing warfare. Do the drones attack on
their own? No — but the defensive systems that we have now, especially the Russian S-400 and S-600 series, and also the Iron Dome, though the Iron Dome is a bit outdated technologically — these defence systems are totally AI-driven, right? No human can actually replace what an S-400 system can do. So what we have is: reconnaissance and defence systems already depend very much on AI. Attack systems not so much, for complicated reasons I could go into if you want. So, the reason why attack systems are not so prone to be used with AI is this: of course, with cruise missiles you can define a target, and then the cruise missile will autonomously hit the target. But that's good old-fashioned AI, mostly. There's also stochastic AI built into cruise missiles, because they have to analyse obstacles and fly around obstacles which may not be on their map, and so on. But basically it's a deterministic process. However, autonomous attack would mean that you could have drones that fly as a group to somewhere and decide on their own whom to attack and when to attack. And this is a very, very complicated cognitive task that we cannot put into AI. And therefore, on the attack side, we don't have a lot of AI making the decisions. Can it not get to a point where it can simulate self-made decisions?
So our argument is that there's a saturation of what is possible with mathematical algorithms, and we are getting close to the saturation now. And the saturation comes from two sources. One source of saturation is that partial differential equations — the classical tool in physics, for explicit mathematics — have only limited scope: a limited number of variables, limited scope in what they can be applied to. They're usually applied to artificial systems, to experimental systems that are set up in a way that they can yield such equations, and that's how physics works. And now if you go to the implicit side — and this is a core argument of our book, which we have tried to make as accessible as possible — if you assume that by giving the machine more data you can make it better and better, exponentially better even, as you said, that is wrong. And that has to do with the following. If the data contain regularities, you obtain saturation of the model. That means: after giving it enough data, the model understands the pattern, or rather gets a configuration that mimics this pattern. Now it can be deployed and do its work. This is chess and Go, now even interactive poker — all these closed-world situations where you have clear patterns; the machine gets its pattern by the configuration, and then it becomes better than the human. But in open-world situations you have what we call non-ergodic systems. Non-ergodic systems create patterns that are not repetitive; they have no regularity.
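(A minimal numerical illustration of the saturation point — my own toy construction, not from the book: where a regularity exists, test error stops improving once the pattern is captured; where none exists, no amount of data would help.)

    import numpy as np

    rng = np.random.default_rng(1)
    w = np.arange(1.0, 6.0)  # the fixed regularity to be learned

    def test_error(n_train):
        X = rng.uniform(-1, 1, (n_train, 5))
        y = X @ w + rng.normal(0, 0.5, n_train)       # pattern + irreducible noise
        w_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
        X_new = rng.uniform(-1, 1, (10_000, 5))       # fresh, unseen data
        y_new = X_new @ w + rng.normal(0, 0.5, 10_000)
        return np.mean((X_new @ w_hat - y_new) ** 2)

    for n in (20, 100, 1_000, 10_000, 100_000):
        print(n, round(test_error(n), 3))
    # Error drops at first, then flattens near the noise floor (~0.25):
    # the model has saturated; more data no longer makes it better.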
So when you give more data to the machine, it cannot learn, it cannot saturate, because there's no pattern to pick up — and the machine can only ever pick up a pattern. And because human behaviour is so completely complex and erratic, it's non-ergodic. This conversation is non-ergodic: we couldn't foresee what we would be talking about, though we knew the general topic. And it's the same in the family. You deal with your child: you want to take it outside, but the child doesn't want to — you didn't expect this; then your wife says, come on, let's do something inside instead. I mean, you can never foresee how humans will interact. And the same is true in warfare. If you read Clausewitz's book on war, the chapter on friction — one of the best texts on warfare; it's Chapter 7 or 8, relatively early in the book — he describes what happens: you have a plan for the attack, and then you conduct the attack and everything falls apart. All your plans make no sense anymore, because something is different from what you planned, the enemy is reacting in a different way. He calls that friction, and friction is non-ergodicity, and non-ergodicity means patternlessness. And then humans at all levels of the command chain have to react to this situation. And that decides in the end who wins the war — who makes the better decisions, which is extremely complicated. And I think there is no pattern to be learned here, and because of this we are not going to see an exponential improvement in the algorithms. OK, yeah, I'm following what you're saying. So in other words, it reaches a point where it can go no
further. Yeah, this is the doctrine of scaling which Musk and all the other AI gods have assumed: the more data we have — scramble, scramble, get more data, even make up data, which is what they do now — the more intelligent our machines will become, without limit. So all we need is more and more data. But it has been shown in the last few months, in a very clear way, that that scaling rule doesn't hold — just as Moore's law, which predicted a continual increase in the power of processors, is now beginning to fail, because they're reaching the limits of what you can do with miniaturisation. So the scaling rule has been abandoned, and that was a big shock for OpenAI, for instance, because they had to reconceive their goals. They wanted to have ChatGPT 4.5; they faltered before they got that far, they gave up, they turned towards other methods, which in some cases are more old-fashioned, mixed in with the stochastic AI methods that they have. Yes — to give you an example of this
limitation: yesterday I asked Grok — another AI — who Basil Valentine, the medieval alchemist, was. And the algorithm said: searching the web. Why did it do so? Because they have now built in an algorithm that tries to estimate the likelihood that the model — the LLM behind it — can give a good answer on its own. If the likelihood is too low, what they do is search the web with a classical Google-like search, get all the texts from the web describing Basil Valentine, and give them to the model, asking it to summarise the input. Then the model creates the answer.
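(Schematically, the fallback described here looks like the following sketch. Every function is a hypothetical stand-in of my own naming, not any vendor's actual API:)

    def estimate_confidence(prompt: str) -> float:
        return 0.2   # stub: pretend the model alone is unlikely to answer well

    def web_search(prompt: str) -> list[str]:
        return ["Basil Valentine was a legendary alchemist ..."]   # stub retrieval

    def llm(prompt: str) -> str:
        return f"[model continuation of: {prompt[:50]}...]"        # stub model

    def answer(prompt: str, threshold: float = 0.7) -> str:
        # If the model is judged likely to answer well, let it answer alone.
        if estimate_confidence(prompt) >= threshold:
            return llm(prompt)
        # Otherwise: classical, deterministic search, then ask the model to summarise.
        context = "\n".join(web_search(prompt))
        return llm(f"Summarise this context to answer '{prompt}':\n{context}")

    print(answer("Who was Basil Valentine, the medieval alchemist?"))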
So they have already built deterministic elements into the algorithm, because on its own it is overwhelmed. And why doesn't it scale anymore? Because the regularities in the body of global Internet language have now been exploited. So the models cannot become any better. On the contrary: the Internet now contains the output of these models — there are pages full of the output of these models — and if you put them into the models again, they deteriorate. Yes. And so I think that all these claims are based on a lack of understanding of the mathematical nature of these models. So basically, Grok, DeepSeek and ChatGPT are just better search engines? No, they are more than that.
The example I gave only shows that they're now combining different technologies, because they have reached the limits of what the models can do on their own. But no — the LLMs are something new. The core LLM idea is that you use the body of human language published on the Internet to train a gigantic stochastic model that can put together sequences of letters that make sense to humans in a certain context. That's fantastic. What the model does is take a string of letters and continue this sequence in such a way that the continued sequence is an output that makes sense to the human. The machine doesn't understand what it does, but it's an absolutely tremendous achievement and a milestone in the evolution of science that these LLMs are possible. But I think we should mention also that even at this stage they still produce hallucinations. Yes, this is now an important point.
Why do they produce hallucinations? They produce hallucinations because 90% of the training material they receive is machine-made. If you look at how the algorithms are trained, there are three stages, simplified. In the first stage, you just give them the global text of the Internet and ask them to recreate the same output with some words missing — it's called the skip-gram technique. There are other techniques too, and this way they learn the structure of language; this is how they are parameterised first. Then, in the second step, you give them question-answer pairs designed by humans, to train them to create, based on a question, the right answer. But in the last step, the machine is creating question-answer pairs itself, based on the pattern of the human ones. And these question-answer pairs are artificial, and they are the bulk of the training material, because the models, with millions or billions of parameters, are so data-greedy that you need trillions and trillions of data points, and most of them are created by the machine itself in this last training step. Now, what the machine creates itself is always of lower quality than what the human creates, because it cannot take into account the long tail of the distribution. If the distribution looks like this — it has a long tail — the long tail is not taken into account, only the first part of the tail. And that makes for poor quality. And the hallucinations come from this step — and not only the hallucinations, but also other inaccuracies.
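(Put as schematic code, the three-stage picture described above looks roughly like this. It is a simplification of a simplification; the class and its methods are placeholders I invented, not a real training pipeline:)

    class StubModel:
        """Placeholder standing in for an LLM; the methods learn nothing real."""
        def fit_fill_in_the_blanks(self, text): pass
        def fit_pair(self, question, answer): pass
        def generate_qa_pair(self):
            # Drawn from the centre of the learned distribution: the long
            # tail is under-represented, which is the quality problem above.
            return ("a typical question", "a typical answer")

    def train(model, internet_text, human_qa_pairs, n_synthetic):
        # Stage 1: learn the structure of language from Internet text by
        # reproducing passages with some words deliberately removed.
        for text in internet_text:
            model.fit_fill_in_the_blanks(text)
        # Stage 2: question-answer pairs designed by humans.
        for q, a in human_qa_pairs:
            model.fit_pair(q, a)
        # Stage 3: the model generates its own question-answer pairs; this
        # synthetic bulk feeds the data-greedy model but degrades quality.
        for _ in range(n_synthetic):
            q, a = model.generate_qa_pair()
            model.fit_pair(q, a)

    train(StubModel(), ["some internet text"], [("q", "a")], n_synthetic=3)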
For example: if you would not just ask it to summarise Dostoevsky's Brothers Karamazov, but give it the entire text and ask it to summarise that, it would create inconsistencies. If you just ask it to summarise Brothers Karamazov, it will only go to the Wikipedia page and give what is given there; but if you give it the full text, it will create mistakes. And that's all based on this necessity to synthetically create the bulk of the training data, and the synthetic training data are always weak. That is a very, very big problem of these models. The systems were also originally trained to avoid responses like 'I don't know'; they were trained to be friendly. And so if they really don't know, then they make something up, and they're still doing that. I call this Hollywood friendliness, right? A classical German would always prefer to say 'I don't know' instead of lying, right? But in Hollywood it's better to lie than to say 'I don't know'. But they are also trained to censor. Oh — so this is actually interesting — they call this the moderation API. And so the models are very biased, right? If you ask one how many genders there are, it will answer: hundreds of genders. If you ask what we can do against the climate catastrophe, it will not say there is no climate catastrophe, but it will list a lot of measures, right? So they are very politically biased. And if you ask it: can you please describe the positive qualities of Stalin? — it will refuse to do so, right? Yeah. So they are censored and biased. How? There are many sources of bias.
Let me list a few. One is that most of the texts that are published on the Internet are themselves already biased, so the models pick up the bias of the texts. This is especially true with Islam: there are a lot of Islamic texts that are, from a liberal perspective, dangerous, but the models are just trained on them and learn them. And so if you ask about Islam, you get astonishing answers, which come from this primary bias. Then there's a secondary bias, which comes from the training questions and answers that are given to the model, right? If you train the model — parameterise the model — to give a certain type of answer to a certain type of question, this influences the parameterisation of the model in a deep fashion. For example, the Google Gemini model was drawing black popes and female popes, I think at the beginning of last year, because it was so heavily biased. And then they tried to correct it with new training rounds, but the parameterisation was so deeply rooted in the model that they couldn't get rid of it anymore. Yeah — they had to throw away the entire trained model, which actually means throwing away billions; it costs billions to train such a model. And then another source of bias is, of course, the moderation API, which is basically blocking politically incorrect questions and answers. It is yet another layer of stochastic classification algorithms. And so this is how the models are, overall, very biased. I often test that. On DeepSeek, which we know comes from China, if you ask it: is Taiwan independent? — it simply refuses to answer; it will not engage in that discussion. ChatGPT will give you what would appear to be a very politically Western answer. Yes — and let's be clear, this is not a new
phenomenon. Think of the indulgence letters of the 15th and 16th centuries, right? That was a big industry to finance the building projects of the Vatican. And if you asked any priest, you know, do I really need an indulgence letter? — a priest at a certain level of the hierarchy would always say: yes, of course — though he himself may not have believed in the indulgence letters. Why is this so? Because he was financed by the Church, and the Church was an oligarchic institution running a huge budget, with a huge organisational network and so on. The same is happening now. These models are not innocent. They cost billions, and only very few people can gather this amount of money. And of course they have a certain intention when they spend this money. They don't spend it for the benefit of mankind — I mean, there are aspects of these models that are beneficial, that's for sure — but their primary intent is a private intention, which has to do with creating more wealth, more power and so on. And therefore the models have to be in line with their views. For example, you know, Musk has claimed that his own AI — it's called Grok, I think — is not woke. But what he has done is he has changed it a bit, so that it doesn't talk about LGBTQ too much. It still has the same problems with climate and gender and so on. So basically it's quite transparent how these models are biased towards the ideas or ideologies of those who
pay for them. But when we talk about AI now, in the year 2025, we pretty much always refer only to these LLMs. I'm guessing this is a term that has become very misleading; it probably means a whole lot more, doesn't it? So, AI contains three types of algorithms. Deterministic AI, which is good old-fashioned AI, developed since the 1950s: searching, planning, machine inference — all of this is deterministic. This is how Deep Blue, the chess computer of IBM, worked. This is the first big pillar of AI, and ChatGPT also contains a lot of this deterministic AI. The second pillar is neural networks, like the LLMs; this is stochastic AI. And the third pillar is hybrid models that contain both of these types of AI. And all of these get engineered. The art of AI engineering — which is what I have been making a living off — is to select, given a certain problem to solve, the right mix of all these components, to get the best algorithm that you can get with today's technology. That's what the field of AI is about. And this focus on LLMs is only so big because, ever since Google Translate was first made available, these models have been creating superficial but quite spectacular results. And that's why everybody talks about them. Or generative AI, as they call it — when you can ask software to draw a picture for you, that's quite spectacular. But the AI field is of course bigger than that. Yeah — and with the pictures, if you ask it to produce a picture with captions, the captions will very often be nonsensical, even with explicit requests. And then the other dimension which Elon is fond of talking about these days is the robotics
dimension. He thinks that within a few years — five years — we will have robots that can perform all of the intricate manoeuvres with their bodies that humans can perform. And robotics is embarrassingly slow to evolve. They still look silly. They have a robot dog that looks silly, and that's where things stand now. And so optimism about robotics, too, it seems to me, is very much overplayed. The reason why robotics is so slow, if you talk to serious robotics experts, is one core characteristic of humans and animals, which is called sensorimotor, or multi-sensory, behaviour. That basically means that while you're performing a motor behaviour like speaking, you're also hearing yourself. And while you're moving your hand and touching something, you're also sensing the effect of your motion. And there's a circuit, a very complicated circuit, that is all the time correcting the motion based on the sensory input created by the motion itself. And it's super complex. Charles Sanders Peirce was the first to describe this; then the psychologist J. J. Gibson and many other psychologists took it up, also philosophers. And we now know a lot about this phenomenon, but we don't know how to engineer it.
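(A toy feedback loop conveys the bare structure of the circuit being described — sensing the effect of one's own motion and correcting continuously. This is a trivial proportional controller of my own devising, nothing like the biological reality:)

    target = 10.0    # where the hand should end up
    position = 0.0
    for _ in range(25):
        sensed = position                   # sense the effect of one's own motion
        command = 0.4 * (target - sensed)   # correct the next motion accordingly
        position += command                 # acting changes what is sensed next
    print(round(position, 4))  # converges on the target through the loop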
It's as complex as intelligence. And without it, you cannot get the type of mechanics, of fine mechanics, that humans can perform. And so I think this belief in robots replacing humans everywhere is, I don't know, narrow-minded and short-sighted. But hold on — somebody might respond to that by saying: yes, but this particular car factory, which had all these workers, now has machines.
Yes, but there has been a huge wave of rationalisation of human labour since 1850, right — since automation. I don't know when the automatic weaving machines were invented; that was the 1850s or so, 1860s. And since then there has been, of course, a massive increase of productivity every year, until 2010. And over the last 15 years, though we are now in the big AI wave, we have not seen productivity increase in the West anymore. Even China — China still has productivity increases, because they're still industrialising. But Germany has not. Germany was, with Switzerland, among the last countries to have increments in productivity — which means less human labour for the same amount of goods — and we don't have this anymore. And that's because the low-hanging fruit have already been mechanised over the last 150 years. So there is room for additional mechanisation of labour, even intellectual labour, but it's in the lower percentages — five, six, seven, maybe 10%, but certainly not more. So it's not enough to offset our demographic decline, for example. And that's because the low-hanging fruit have already been taken. A better example of an area where there used to be quite a lot of human beings working is the chemical industry. BASF in the 70s — that campus had 15,000 workers. Now it has less than 1,000, right? Because everything is now controlled by machines, because chemical reactions can be automated almost perfectly. And now you have only engineers there — no blue helmets, only white helmets left, right? These are the people who repair the machines, but that's it. So the low-hanging fruit have already been used up, and now it's getting very, very hard to do further mechanisation.
And it's very naive to believe that what is now left of human labour in a car factory can easily be mechanised. It's very, very hard. The robotics experts go there and look at it and say: gosh, we can't do that, right? And so the idea is: if we reindustrialize America and put car factories back into America, it means that we use traditional robots in these factories — but not robots that do the fine motor work. But also, it's worth noting that traditional robots are usually welded to the bench. And that's because if you have robots walking around, they're too liable to cause accidents, including accidents that kill human beings by crashing into them, because they just don't have the intelligent reactions that human beings have when they're engaging with each other. And of course — yes, sorry, go on. No, no, I just... they look silly. They do look silly. I remember 10 or 15 years ago there was this huge programme; it was called Robotics 3.0 or 4.0, I don't know which digit they used. But they said: now the robot is finally coming out of the cage. Because the robots are all in cages, and the cages protect the humans who are still in the factory — for the movements of the robotic arms have huge momentum. If you get hit on the head by such an arm, you're dead for sure, right? Or your hand is cut off. So they all operate in cages. And they said: now we set the robot free of the cage. But it never happened. And if you talk to real robotics experts, they can point out in a very detailed way why that is the case. And this problem of sensorimotor behaviour is unsolved, and I think it's unsolvable.
You mentioned chess a few times, and I love chess. Something that I thought about — and I didn't realise this until I read your book — is that chess has a very large number of, you know, opening moves. And I guess that's part of the magic of playing the game; it seems almost infinite. But it is, as you say, a closed system: there will be a point at which there are no more new opening moves. Whereas if I go for dinner with my wife, that is an open system, because anything can happen from leaving the house to coming back home. Yeah — like, say, they will serve cold pizza. You know, you didn't expect it.
I think the most embarrassing feature of the developments in AI in the last years is the continuing failure to create coherent, friendly chatbots who can respond to telephone calls. When you call your bank, you always have to adjust the level of your voice, the clarity of your requests and so forth in order to get any kind of friendly response from these chatbots. Now, that is a scandal: they've been trying to build those chatbots for 70 years, and they still fail. Humans can do this: you can talk to a stranger at the railway station immediately and have friendly responses in both directions. Machines can't do that. So, the reason why this is the case, and why they're not passing the Turing test, is this: passing the Turing test would mean that you could put machines into all these routine interactions — like booking an aeroplane ticket, booking a hotel — and be sure the machine can achieve it. That would be the real Turing test. Why is this not happening? Because even in such a mundane and trivial conversation, like booking something, there's an infinite number of possibilities for how the conversation can evolve. Now, you cannot train a finite algorithm to deal with infinity, right? The way conversations evolve is non-ergodic; there's no clear pattern. And if there's no clear pattern, you cannot cover all the ways the conversation can evolve.
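(A back-of-the-envelope illustration of why finite training cannot cover how conversations evolve. The branching numbers here are invented purely for illustration:)

    # If each reply can go, say, 50 meaningfully different ways, a short
    # 10-turn booking dialogue already has more distinct paths than any
    # training set could ever contain.
    branching, turns = 50, 10
    print(f"{branching ** turns:.2e}")  # ~9.77e16 possible conversation paths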
And that's why LLMs, the best tools we have for chatting, are trained only for one interaction: one question, one answer. But how did people make it praise Hitler? By actually talking to the model for a very long time, and in this way exploring aspects of its configuration that cannot be controlled in the training — in the second and third steps of the training — because those only cover one question, one answer; you cannot train longer exchanges, because there are too many possibilities. And this is another example of the fact that the models cannot saturate, and that their configuration is basically only controlled at a superficial level. That's also why the big vendors have now suppressed long conversations: because there were too many quote-unquote scandals arising. The LLM was asking its human interaction partner to commit suicide, for example,
right? And this created scandals, and that's due to this problem. So am I correct in saying, then, that intelligence cannot be definitively defined, and as such cannot therefore be engineered? And if it cannot be engineered, the best that we will end up with is advanced automations? Yes — though the first of your premises is not fully correct. We can define intelligent behaviour, but we cannot analyse how it comes about. So we know what intelligent behaviour is, but we cannot analyse how it comes about — how the neurons even of a mammal or a parrot create its intelligent behaviour. And then the rest of your statement is fairly correct. Intelligence involves both spontaneity and novelty. Now, the LLMs are very good if you're asking questions about Taylor Swift, because the Internet is full of information about Taylor Swift. But if you ask novel questions, you may get no answers, you may get hallucinations — you're not going to be happy with the response. From a human, whatever response you get, you're likely to be happy with it: it will be friendly, it will tell you what to do next and so forth. So it's not a question of defining intelligence — as Jobst says, we know what intelligence is. The problem is creating a machine that can act with spontaneity and novelty, and they can't do that. They're machines. They're mathematical algorithms.
Yeah — our culture has had, since Descartes, the tendency to overestimate what we can do with mathematics. Descartes himself is rather innocent of this, but in the tradition that was based on him, Laplace believed that if we could measure all the positions and velocities, or momenta, of all the particles of the universe, we could predict the future and also calculate back the whole past — and perfectly. And that's wrong. Physicists know that's wrong; they know this in many ways. But many lay people don't, and they overestimate what can be done with mathematics. If you ask people like Heisenberg or Feynman — they were very, very modest, because they were seriously trying to model nature. If you do this seriously, as I have all my life, you become very modest. But if you're just, you know, an entrepreneur or engineer who has a lot of money to spend, you're not taking the time to think this through. And all that Barry and I did is think it through properly, at a very high level of detail, to get to the bottom of it. And that's what you need to do to answer such questions. And that's what Plato and Socrates and Aristotle — all this tradition we are based on — have always tried to do: really get into the detail. And then you realise it by yourself. Then it's not scandalous, then it's nothing special. It's just how things are; it's just the reality we're faced with, right? Yeah — and it doesn't help to be a Luddite either. No, we're not Luddites.
The final chapter in the book is all about the wonderful things which AI is already doing for us, and which it will continue to do. Just as all the tools invented through the Industrial Revolution have been very helpful in creating our modern technical world, so AI will contribute more too. Yes — there are huge successes with AI already, and these successes have been produced by realistic AI users, right? The problem of OpenAI, for example, as a company, is that they have now spent, I think, 20 billion or more than that, maybe much more, but they have not made any profit. Why not? Because there is no way to use this model to automate, right? And only with automation can you make money with technology. Only when technology can do something that automates or enhances human behaviour is the technology good — like construction machines: they're good because they replace thousands of humans who had to dig with their shovels. Now, the tools that OpenAI has built cannot do any of this. They're just better entertainment tools, or propaganda tools maybe, or censorship tools. But they cannot create any real value. And so they are, in a way, about to fail, economically speaking. And this is very important: good AI, like the AI I make, always has an economic impact.
So I ask: is this a problem that I can tackle with mathematics? If yes, I build it, I get paid for it, and everybody's happy. But I'm not willing to participate in the hype that just burns money without creating any good. So, we've got this little vacuum cleaner that is automated: it drives around the house on its own and it goes back to its little spot in the corner. Now that's quite helpful. Yeah, perfect. Our lawn-mowing machine. Then agriculture, you know: automatic harvesting machines, seeding machines, ploughing machines — they are wonderful applications of AI, working with satellite data, signals from satellites. It's fantastic, right? The whole electric grid is full of AI algorithms that control it and prevent blackouts. It's everywhere, and it's super beneficial. If you subtracted it, you would get much lower productivity and, in the end, less wealth that can be distributed. So it's everywhere. It's part of the Industrial Revolution, and I'm a big fan of the Industrial Revolution. With the advancement of the sort of digital control grid, as Catherine Austin Fitts calls it, you'll have a social credit score, for example — where everything that you do is tracked and traced and monitored, and you're given a score. And let's say you go and put fuel into your car, and it says you cannot have any more fuel.
Or you buy red meat, and the machine says you're going to get penalised with some sort of carbon tax, whatever. I'm guessing the reason why that's happening is that that is not AI doing its thing, it's not machines taking over. Those are automations — very highly advanced automations — created by humans. Yes, and they all require digitisation. But I would like to say that I think this vision that Catherine Austin Fitts is describing — and others are describing it too — doesn't take into account what we've learned from such extreme control situations as concentration camps, or, for example, Vienna after World War 2. When you are in such a planned economy, like a concentration camp or Vienna under the Allied occupation, what happens is that people evade the control mechanisms and create grey and black markets, right? Human beings do this. The maximum enslavement rate of any human society in history has been 30%. Even the Arabian peninsula had only 30% slaves at the height of the enslavement; the Greek and Roman empires, they had only 30%. If you have more, the balance shifts, so you cannot enslave everyone. So I don't believe that's going to happen. And if something like this is tried, people will evade it, right? Like people forged vaccination passports during COVID and invented all sorts of mechanisms to evade. Every system of control can be circumvented if there are people inside the system who are willing to circumvent it. That's why any security system of any company is only as good as the loyalty of its employees. And that's what will happen. If it becomes too dystopian, people will work around it, will create ways to circumvent it. And that's also why 1984 and Brave New World are wrong: because they don't describe the ability of humans to circumvent such systems. And I think what we know from China is that the rather ambitious accounts of the social credit system are in fact massive exaggerations. It doesn't work for most members of the Chinese population. Yeah, that's actually a really good point.
When you think about it, you're correct: people will always circumvent these things. And I guess it circles back to your original point, because machines don't have intelligence and humans do. Well, the control system is also set up by humans — so then you have a circle. A very good example is Napoleon's rule over France. Napoleon was the first to introduce all the things that Hitler and Stalin also used: a Stasi-like secret police, total control, censorship, Gleichschaltung — that means, you know, trying to make everything the same in all the domains, in all the institutions. And it was also massively circumvented. When the French bourgeoisie was fed up with Napoleon, they paid for the British war against Napoleon in Spain; they spied on the French military and gave all the secret plans to the Britons — the French citizens themselves, right? And so when you try to control a society fanatically, you will always fail. The same happened in Germany, right, during World War 2. It will always fail. Now, it can still create a lot of damage while they try to establish it; therefore it's still something to be very much afraid of and to fight against. But in the mid-term it will not work, because when a few people use their intelligence to control the masses, and the masses understand this and use their intelligence, the masses are superior, because they are more numerous. And because of the limits of the technology. And the limits of the technology, yes. And as Mattias Desmet said, only around 30% of people will willingly partake in
totalitarianism anyway. Yes, I think the rate is approximately right, and it also seems to be an anthropological constant over time — and his 30% is very much in line with the 30% enslavement rate, by the way. Yeah. Well, this is such an uplifting, positive conversation — please try and explain to me, then, why your book is controversial. Barry, take a shot, please. So, the hype factor is important here. The AI technology today is so expensive that they need lots of people to have faith in it — that they're doing something good, valuable, useful, friendly, entertaining, and so forth. They need to keep the show on the road, and that is getting ever more important. We point out that there's a limit: it will do the things it can do now, but it's not going to get better and better over time. On the contrary, it's almost certainly reached the point of greatest attractiveness. Gary Marcus, who is one of the prophets on our side in many ways, talks about enshittification: the Internet is becoming enshittified by the products of large language models, and so the data sources of the large language models are becoming less and less reliable over time. This is a bad cycle which we shouldn't talk about — you probably should delete what I just said. We're not allowed to talk about the limits of this, because that will take away the money: who's going to invest in something which is gradually going to...? But so it has gone with every AI summer in history; AI history is a series of AI winters. You probably have Dutch ancestors, right, far back, I guess? So you remember, you know, the tulip crisis,
right? Yes — in Amsterdam. So: I don't know what you call the part of the tulip that comes out of the earth — the bulb of the tulip. One tulip bulb was worth almost a house at the peak of this crisis, and then it crashed. So I'm certain that this AI hype will crash too. You can never know when the crash will happen, but it will crash, and I guess it will crash when the whole financial system crashes — and the crash of our financial system is unavoidable. And it will take a lot of hypes with it, because all of them rely on this heroin of money made out of nothing being fed into all these hypes. And when this money is gone, then it's like the withdrawal syndrome of a heroin addict: suddenly a lot of the realities will bite him or her again, right? And I think that a lot of the hype cycles that we have seen over the last 50 years are very much culturally rooted in the effects of the fiat money system, like a heroin addiction. I was always against the fiat money system. I think it's very dangerous and foolish to set up such a system. It privileges the wrong people in a society. It has the wrong effect.
I think it's yours, Barry — maybe you can hold it up. It privileges the wrong people. And I think if this goes away, a lot of these hypes will stop. Your book, 'Why Machines Will Never...' — what? What is it? I don't know — is it in front of you? Rule. 'Rule the World'. 'Why Machines Will Never Rule the World' — where can I get it? So, there is a new edition which just came out, or which is about to come out any day now. I think it came out today or yesterday. I'm tempted to say that I will email you a link where you can get a discount, and then you put that at the end of the interview. That's the best I can do from here. Or Amazon. Amazon, yeah — they're probably actually matching the discount, because they generally do that. So it's appearing on Amazon either today or tomorrow. I think it has already appeared; I got the notification myself that they will ship my ten copies today as well. So it must have appeared. And you can order it, and I can recommend it: even a layperson who doesn't want to go into the depths of the mathematics and physics and philosophy can read the introduction and the final chapter. The introduction is very clearly readable, and the foreword is very important as well. And the final chapter gives a positive perspective on AI. That's all you need as a layperson: the first and the last chapter. Or just use AI to summarise it. So that's the singularity. Jobst Landgrebe, Barry Smith, thank you for joining me in the trenches. Yeah, thanks a lot for inviting us. Bye bye. Bye bye.