Welcome to another episode of the Secular Foxhole podcast. Today we're happy to have Rob Tracinski back with us to discuss two important issues, or more precisely, a course he has created and his essay on robotics. Hi, Robert. Every week I try to find something new by you on Substack and Discourse. Of course, your Substack you mail out automatically, which is great. Your articles are always thought provoking and concise, and I really appreciate that.
But before we get to the announcement of the new course you developed about causation: those of us who take Ayn Rand's ideas seriously know that she created a secular morality free of any mystical or irrational trappings. What do you think you add to that?
That's a good question. Well, the goal of this course, by the way, thanks for having me on.
Yeah.
The goal of this course, and The Prophet of Causation is what this new course is called, I can explain that a bit more, is not so much to add something new to her philosophy. I think we will probably add a few details here and there and flesh out some things, and maybe look at some applications or issues that haven't really been discussed much before.
The goal really is to take the same content, the basic idea of her philosophy and the basic ideas, and to look at them from a different perspective. And it's specifically from the perspective of the role of cause and effect in her philosophy.
This is one of the insights I had in writing a book about Atlas Shrugged, and in thinking more and more about her philosophy: the idea that the law of cause and effect, of causation, runs as a theme through her philosophy, through all the different issues. But when it comes to secular morality, the ethics is the real central area where that applies.
The idea for this course comes from an article she wrote that I think is one of the most essential things she ever wrote, called "Causality Versus Duty." It explains the role of causation as a foundation for a secular ethics. The idea is that morality, that ethics, comes from looking at the cause and effect relationships between your goals, your desire to live, and the actions that are causally required to achieve those goals.
And so her view is that all of morality can be summed up as an attempt to understand those cause and effect relationships between the actions you need to take, the virtues, the things you need to do, and the ultimate goal you're pursuing of trying to live and prosper and be happy. So morality isn't a set of commandments handed down to you. That's the duty version. It's not a set of commandments handed down to you from some supernatural authority.
It's a set of real world observations about cause and effect relationships.
Right? So how crucial is choice in her morality?
Well, the thing is that the choice has to be there as a way of deciding what you're going to choose as the goals that you pursue, right? So you take into account the cause and effect relationships. But then ultimately, one of the causes there has got to be your choice, your choice to act, to choose among the alternative actions you could take. What goals do I want to achieve? What goals do I want to pursue?
What is the life course, what are the values, that will make me happy? And then saying: okay, what is then required by cause and effect, by the law of cause and effect, to achieve that? She cites in this article an old Spanish proverb that says: God said, take what you want and pay for it. And that's what she said.
Take what you want, that is, make your choice of the values that you think are important to you, that will make you happy, that you want to pursue, and then pay for it by understanding all the cause and effect relationships, all the things that are actually required to achieve that. Now, that implies that in making a rational choice of what goals you want to pursue, you have to take into account what the consequences are actually going to be.
What do I actually have to do to achieve this? You can't simply say, I want to be rich and famous, and not have any idea of what that means in terms of what's required for it, and of what it would actually constitute in your life. You have to take into account all the consequences, all the things that are involved in that. So it's not just arbitrarily making a choice. You're making a rational, informed choice.
But the idea is that ultimately it comes down to you make a choice of what you want to achieve, and then the law of cause and effect tells you these are the things that are required and that come along with that.
A moment ago, you mentioned the word duty. So obviously I think the difference between Objectivist ethics, if you will, and conventional morality is choice versus duty. Does that make sense?
Yes, to some extent. I think the traditional view of morality, especially the religious view, is duty. And duty is this idea that you have to do it because, well, it's like the old line from the Westerns: a man's got to do what a man's got to do. You have to do it just because you must. And there's no real explanation for the "you must," or at least no this-worldly explanation. It usually comes down to: well, God said you should do it.
There are secularized versions of it, but they're really just secularized versions of a religious morality: God issued commandments saying, this is what you must do, and therefore you must do it. And there's no real why to it, no this-worldly rational explanation for it. Now, the more recent secularized version of that is a more subjectivist view, or a socially subjective view.
So it's the idea that, well, morality comes from a social consensus that society as a whole decides that certain things are important, and therefore you as the individual must go along with society as a whole. So it's sort of substituting society in the place of God as the source for where these moral precepts come from that you have to follow, but making it collective rather than mystical, making it a social thing, but really substituting society for God in that traditional ethics.
Yeah, just recently I've stumbled across another version of that, if you will. Instead of saying God is telling you what to do, they use the word "universe": the universe is directing you, or the universe is showing you.
Yeah, well, we live in a secular age. There was, I think, a piece in The Onion a while back about how scientists discovered that the universe exists to tell women in their 20s what to do with their lives. Because there's this whole figure of speech of "the universe is telling me to do X, Y, and Z," which I think is a halfway house.
It's one of these leftover, lingering effects: people still have this religious mindset or mentality about how you decide what you should do in life, but they don't really have religion anymore, so they invent these sorts of substitutes for it. Well, the universe is saying it, or it's karma, or whatever. But take an example of one of these things. One of the things I'm going to be looking at in this course is, for example, Ayn Rand's view of the nature and source of rights, of individual rights.
And there's this whole philosophical tradition of debating what's called the ontology of rights. Now, ontology just means: what is the basis in reality of this thing, actually? What real thing does this actually refer to? So people talk about the ontology of rights. They say, well, what thing in reality are we referring to when you say you have a right to do something?
And the general view on the ontology of rights, the usual answer that's given, is some form of: well, society as a whole makes decisions about what your proper freedom of action should be. And therefore the ontology of rights refers to a social consensus on how much freedom the individual should have. Ayn Rand's view, which I'm going to expand upon in the course in a great deal of detail, is that the source of rights is causality. It's causation.
It's the idea that there is a cause and effect relationship: in order to be able to survive, in order to be able to grow your food and put a roof over your head, in order to create all the things that we need to survive as human beings, we have to have freedom. And it's that causal relationship between freedom and survival that is the ontology of rights. What rights refer to is that causal relationship.
But a whole bunch of stuff has to be established to really fully understand that. And that's what the course is about.
Go ahead, Martin. I'll throw something in.
I'm joking now. So now we have a commercial break here, our value-for-value moment. If you, the listener, have gotten this far and see the value in it, then you should buy Rob's course. So could you describe it a bit more? How long is it, what's the value, what's the cost, and what do you want to go through? And then, Blair, you will continue.
Sure.
Okay. So the course is called The Prophet of Causation. That comes from her article, where she talked about how the rational man is a disciple of causation. I thought, well, if we're the disciples, then she's the prophet. And prophet, in its original Hebrew, really just means messenger, right? She is the messenger, the person bringing us this message. The Prophet of Causation Substack is where you can go to check it out.
To create the course, I chose something that's familiar to me and to most of my readers, which is a Substack newsletter that's really there to be the clearinghouse for the course and the way of paying for it. So it's $250 for a one-year subscription, which, every time you sign up, I automatically convert to a lifetime subscription. But most of this is going to happen in the next three to six months. And it's going to be a course on Zoom.
So a subscription gives you access to the course on Zoom. It's ten classes, one every two weeks. I wanted to have a not-too-intensive schedule. And you can participate live by Zoom and be part of the Q&A and the discussion.
Or I'm going to put the recordings up as audio podcasts and as videos; I'll load those up by way of the Substack newsletter, and then put additional materials in there: excerpts from philosophers, suggested readings, and other little observations. And I'll be answering questions, comments, any discussion we want to have about the ideas in there.
And the whole idea is to take this idea of causation, starting in the first couple of lectures with the very question of what is causation, what is the law of cause and effect? Because there's enormous philosophical confusion even today about what the idea even means. Then we go and see how it affects her view of how the mind works, her view of human nature, which I think is something that has not really been spelled out in Objectivist philosophy before, because I think there's a very common view of what human nature is that is very different from her view. But I haven't really seen it spelled out in detail what that difference is.
And that's one of the things I'm going to be going through. Then we go through ethics, politics, her case for property rights, which I think is also something that has not really been spelled out in the past, and finally even the aesthetics.
Okay, great. And your proposed start date? The first class?
The first class is going to be February 28, which is a Tuesday. We're doing Tuesdays, eight to nine p.m. It might go a little longer if people have questions and discussion, but I want to try to have the meat of it within an hour so it's not too big a drain on people's time. And then it's going to be every two weeks after that.
All right, I want to jump back. I'm not a professional philosopher, and you talked about ontology a moment ago. So what is deontology? Is that the opposite of... well, it's not the opposite.
It comes from a slightly different Greek word. The onto- part has more to do with the structure of Greek grammar. How to put it... I'm a semi-professional philosopher, okay? I have professional training in philosophy. I work as a columnist or writer, mostly commenting on politics, but that involves using a lot of philosophy.
And this might also engage more pure philosophy than I usually do: taking some new observations I have about the philosophy of Objectivism and trying to develop them more fully. In addition to my training as a philosopher, I have training in classics. So the onto- part comes more from Greek grammar than from the meanings of the words. But the deon- part comes from the Greek word meaning to bind, right? So the idea is it's something you're bound to do.
This is something you're required to do. Ontology comes from the word "to be," so it just means the being of something, its existence: what is it in concrete physical reality that you're referring to when you say something? Deontology comes from the idea of being bound or required or forced to do something. And that's the basis for that duty-centered ethics.
How would you compare that to the is-ought gap?
Yeah. And the is-ought gap is going to be a central idea we're going to be looking at. So this is the idea. The terminology for this comes from the 18th-century philosopher David Hume, who said: well, look, when I look at moral philosophers, I see them make a bunch of "is" statements, statements about the way things are in reality. And then suddenly they switch to making a bunch of "ought" statements about the way things ought to be and what you ought to do.
And there's a gap there. There's a jump they make. They start from "is" and they go to "ought," and they never show the connection between the two. And he basically said, there is no connection, right? So the "ought" is really just you expressing your personal subjective preferences, and not something based on reality.
And so in talking about the role of causation in Ayn Rand's philosophy, and especially this particular article on causality versus duty, I show how the concept of causation is her answer to the is-ought gap. And by the time you really understand that idea as she's putting it forward, you realize that the whole question just goes away. It disappears.
It's one of these things where you see these drawings where, if you look at it from just the right perspective, everything resolves and it makes sense, right? And that's what I think about this idea of causation, the law of cause and effect, in Ayn Rand's philosophy. When you understand what that perspective is and see things from it, all these philosophical puzzles and conundrums just disappear and get resolved.
And suddenly you can see the vase and the two faces.
The symbol?
For the symbol, I use Rubin's vase, the drawing that's either a vase or two faces. I use that as the logo.
Yeah, that's great. Listen, I just thought of this and I want to throw it out there, to stay on the philosophical track for a moment. I think Dr. Binswanger described how Ayn Rand solved the problem of universals in, I don't know, ten minutes or five minutes, just sitting there thinking about it. What are universals? Apparently they've plagued philosophy forever. Do you want to jump into that?
Or do you want... yeah, he would do a lot more with that, because he knew Ayn Rand well. That's true.
Yes, that's true.
He would have much more knowledge of exactly how she did it. But one of the things I want to talk about, and I'm going to mention this, is something that informs, that's the word, something that influences my approach to philosophy. One of the things that always bugged me about philosophers is the way they use the word "problem."
They have the problem of universals, the problem of induction, the problem of free will, et cetera. And what the word "problem" always means when philosophers use it is: here's a thing that undoubtedly exists and is real, but I can't explain it, and therefore I can't accept that it's real until I, the philosopher, can come up with an explanation for it. If I can't come up with that explanation, then it's not real. That's the mentality behind it, right?
It's like I, the philosopher, have to be able to come up with that explanation; reality depends on me to explain it, rather than me having to be the servant of reality. I'm working on this formulation, and I'll have a better one by the time the class starts. But that's what I'm getting at: this idea that reality depends on what's going on in my mind, and not that my mind is here to understand reality.
And I think that explains a lot of the conundrums that philosophers have gotten themselves into. Now, part of it is that some of these are really legitimately difficult questions. They involve a lot of complications, and until you get quite the right perspective, it may not make sense. But a lot of it comes from the fact that philosophers have had this attitude that ideas come first and reality comes second, right?
Yeah.
I think the problem with intellectuals is that they're interested in ideas as opposed to being interested in the things in reality that the ideas refer to. That's why this formulation of "the problem of X" and "the problem of Y" has always kind of bugged me, because it always comes across as, and in practice is often used as: if I, the philosopher, can come up with enough intellectual confusion around this issue, then I don't have to admit that this thing exists.
And they could stay in academia for a long time.
You could endlessly discuss the problem and you never have to come up with an answer to it, right? You never have to solve it.
You used the word servant a moment ago. I think observer might be better, observer of reality. Or maybe...
Ruler of nature. That's good, the saying that Bacon has about nature.
"Nature, to be commanded, must be obeyed." And that's the sense in which I mean being the servant of reality: it comes first, it sets the terms, and you're the one coming along trying to say, okay, how can I understand this thing? It's my job to understand it. And I think too many intellectuals and philosophers don't view it that way, as: this thing is real, and it is my job to explain it.
For example, take the problem of universals: the fact that human beings make observations and arrive at generalizations, and those generalizations are valid and allow them to create all sorts of amazing things. That is just an undoubted fact that you can observe just by looking around you in the world. We're surrounded by the products of all these concepts and universals and ideas that people have come up with.
And it's very clear that we are in fact capable of making generalizations and of using them, and they're valid enough that we're able to reshape the entire world and make our lives better by doing that. So as the philosopher, you should be coming in and saying: okay, we know we can do this, let's figure out how. But too many times the attitude has been: well, maybe we can't, and maybe I can tell you the arguments why we can't.
And the first precondition for solving a problem like the problem of universals is that you want to solve it, that you think it is solvable and that your job is to come up with a solution rather than it being your job to muddy the waters.
Tracing back the whole way to Plato, and we are still in the cave. Are we doing this podcast now, or are we shadows on the wall? And that is coming to the robots later on.
That might be a great transition. Well, I think the Plato thing is interesting, because it gets at the role of philosophers. The thing that came before philosophy was religion. Religion was the first attempt to come up with big explanations for: where did we come from, what is the nature of the world, what causes things to happen? All the earliest ideas of cause and effect are religious concepts.
The problem with religion, though, the thing that always holds back those answers, is that religion always involves this idea of access to a hidden reality behind reality, right? Access to a secret knowledge that is only available to the priests. And that's inherent in the whole concept of religion: that just by going out and observing reality, you can't get the answers; reality will fool you, will deceive you.
There's a hidden reality, a secret reality, that's only accessible to the priests. And Plato, who's one of the very first big figures in the history of philosophy, basically imports this into a secularized philosophical form with this idea of: well, you're like someone trapped in a cave, looking at shadows on the cave wall and not seeing the real reality. Whereas I, the philosopher, can turn around into this realm of pure abstraction and see the secret reality behind reality.
And I think this is the idea that has just hobbled philosophy from the very beginning and created a lot of these quote unquote problems and prevented people from solving them.
Well said, well said. All right, well, why don't we sum up your course and when it starts again, and then we'll jump into your essay on robotics.
I've got to give the URL again. Go to the Prophet of Causation Substack. You can also go to the Tracinski Letter Substack, where I have an announcement for it. Search for it and you will find it, and I hope you guys will provide a link in the show notes. So go there and you can check out the description of the course. It's going to be ten classes over a period of 20 weeks, plus related materials that come from that. It may eventually be a book, and there will be any updates and additions I have on that.
If you sign up for this, you'll get all that as it comes out. And it's mostly to look at Ayn Rand's ideas from this new perspective, a very clarifying perspective that really helps you understand them on a deeper level. But along the way, I think there are a couple of things, on volition, on property rights, on the nature of causation itself, where we're going to spell out a few new points here and there that maybe have not been discussed or drawn out from the philosophy before.
That begins on the 28th of February.
That's February 28, a little under two weeks from now.
All right. Well, your article on robotics, which was published in Discourse magazine, entitled "Why Robots Won't Eat Us." That was certainly an attention-grabbing title, for me anyway. And on the subject of robots: I've said for years that I've known people who are afraid of this robot takeover, or afraid of robots, period, and they just don't see that human beings program robots, so the robots won't take over. Does that make sense?
I mean, we're the ones programming the robots to do what we want them to do.
Yeah. Certainly in my article I say there are three things we have that machines don't have, and really can't have by their very nature, that give our minds special qualities and let us do things machines are never going to be able to do. And those three things are consciousness, motivation, and volition. We can start with the first one, consciousness, which really connects to this idea that we program the robots.
One of the things I want to point out is that consciousness, in this context, means something very specific and very simple. A machine, an AI program like ChatGPT or one of these other AI programs that's out there, and I guess Bing now has one in the Microsoft search engine, is not doing what we're doing, which is walking around in the world, observing things, interacting with them.
We are out there in direct contact with reality. These programs are not in direct contact with reality. It's a program on a server somewhere, and it has only the data that its programmers have decided to feed to it. And in fact, many of these AI programs are initially trained on data that's been specifically collected and organized and sorted for them by the researchers. So basically the researcher goes out and, using human intelligence, picks out: okay, here are photos of birds.
One of the examples was a program trained to recognize types of birds. The researchers think: okay, we're going to find thousands and thousands of photos of birds, and then we, the researchers, collect them together and categorize them and label them, and then we feed them into the machine, and it learns how to recognize the birds.
But what it's doing is basically learning: when I get a pattern of data that matches this other pattern of data that's been labeled by a researcher in a certain way, then I say, oh, that's a warbling titmouse or whatever.
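[Editor's note: the curated-labels workflow described here, where researchers collect, label, and feed in the data, can be sketched as a toy nearest-neighbor classifier. All feature values and bird labels below are invented for illustration; this is not any particular system discussed in the episode.]

```python
# A minimal sketch of "training on researcher-labeled data."
# The "photos" here are toy feature vectors (say, wing length and
# beak length in centimeters) and the labels are bird names.

import math

def train(labeled_examples):
    # "Training" a nearest-neighbor classifier is just storing the
    # human-labeled patterns; the machine never observes a bird.
    return list(labeled_examples)

def classify(model, features):
    # Find the stored example whose feature pattern most closely
    # matches the input pattern, and return its human-assigned label.
    def distance(example):
        stored_features, _label = example
        return math.dist(stored_features, features)
    _, label = min(model, key=distance)
    return label

# Researcher-curated, hand-labeled data (the step emphasized above).
data = [
    ((20.0, 3.1), "warbler"),
    ((21.5, 3.3), "warbler"),
    ((9.0, 1.0), "titmouse"),
    ((9.8, 1.2), "titmouse"),
]

model = train(data)
print(classify(model, (10.0, 1.1)))  # matches the titmouse pattern
```

The point of the sketch is that "recognition" here is only pattern matching against labels a human supplied; all of the contact with reality happened in the researchers' heads.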
Yeah, whatever they're called.
Very good. I'm not a bird watcher, so don't ask me to come up with bird names off the top of my head. But the point is, the way I'd describe it is this: human beings go out and observe things; machines are fed data. And observing things versus being fed data is a radically different way of having a connection to the world.
Now, that ties in, of course, to the ideas of motivation and volition, because part of the reason we have this direct contact with the world is that we are moving ourselves through the world, and we're moving ourselves through it in pursuit of certain goals and certain needs. And that's where motivation comes in. A machine has no motivations or goals of its own.
It can be programmed mechanically to do something by the researchers, by the people who are creating it. It itself has no needs. It makes no difference to the computer whether it identifies the bird or not, right? And it makes no difference to the computer whether it's turned on and working or shut off and not working. The computer has no biological necessity for action. And that's the real theme of this.
And I think this is one of the big connections to the course I'm giving. One of the major themes in this course on causation is going to be how the biological imperatives of human existence run through all these different issues. To understand human consciousness, you have to understand it as a biological function, in its biological role in this cause and effect relationship.
And so the idea that human consciousness is biological is one of the major themes of this article about AI: I think we underestimate how crucial the biological nature of human consciousness is to its function. One of the ways it's crucial is motivation. Motivation is what keeps us moving. It's what causes us to connect some ideas together and not others.
It's what causes us to have to get something right, as opposed to not caring whether it's right or wrong. And that's one of the big things that came out of ChatGPT: people discovered ChatGPT will simply make up facts and make up references; it'll give you footnotes to articles that were never written, in books that never existed. Because it's programmed to give you a convincing-sounding answer, not to give you a right answer.
Oh my gosh.
The thing is, the fundamental motivation of a human consciousness is: I have to understand reality. I have to get this right, or else the lion's going to eat me. We have that fundamental motivation to get things right and to have our ideas correspond to reality. Whereas ChatGPT, if you feed it the wrong prompts, is a lot like the ultimate antithesis of a living being: a rock rolling downhill, right? When a rock sitting on the side of a hill gets hit by another rock, it just starts moving, and it keeps going for as long as the momentum carries it, going off to one side or another depending on what obstacles and bumps it hits on the side of the hill. It doesn't direct the course of its own action. It just rolls, buffeted about by the outside circumstances.
And you can kind of see that with ChatGPT: once you get it going by feeding it the wrong prompt, a prompt that sends it off the rails, it'll just go off the rails and keep going, and not really be able to correct itself, because it doesn't have that motivational function of a living consciousness.
Is that what you meant by motivated reasoning, or where did that term come from?
Well, "motivated reasoning" has come to be used recently as a term that refers specifically to something that's not a good thing. Motivated reasoning is what we would call rationalization: it's when you engage in reasoning in order to support a preordained conclusion, rather than in order to get the right answer.
So motivated reasoning is: I have a pre-existing commitment to a certain political position, or to belief in God, or to some other ideological viewpoint. And then all of the reasoning I engage in is really motivated not by a desire to reach the truth, but by a desire to maintain this position that I already have a loyalty to.
Now, I point out that, okay, motivated reasoning in that sense is bad. But in a larger sense, all reasoning is motivated, in the sense that we need to discover the truth, we need to discover reality. Because, as I put it earlier, if you don't understand what's going on around you, if you're not properly oriented to the facts around you, you're going to walk off a cliff or get eaten by a lion.
Or to put it in more modern terms: there are so many ways to walk off a cliff in the modern world. If you don't understand what's going on around you, you're going to invest in a Ponzi scheme, or you'll end up voting a socialist dictator into office. These are real-life examples. I mean, look at Russia right now. Russia is a country that has just collectively walked off a cliff.
Millions of people in Russia walked themselves off a cliff because they put their trust in a strongman, an authoritarian dictator. Now they have tens of thousands of dead and a collapsed economy, and the negative consequences are going to reverberate for generations. So people can walk themselves off a cliff if they don't orient themselves correctly to reality. And that's the sense in which reasoning is fundamentally motivated: you need to think in order to live.
Let me add one more example: allowing Chinese spy balloons to go unmolested over the United States.
The spy balloon story is a farce that could not have been written as fiction. It's like a parody of bureaucratic incompetence run amok. First we let the spy balloon come over, and then everybody goes nuts and criticizes: how come they didn't shoot down the spy balloon? So then we started shooting down everything in the sky.
The latest thing is that they found out that one of the objects shot down over Alaska was a balloon put up by a group of hobbyists in northern Illinois, who got together as volunteers, put up their own little miniature weather balloon, and let it coast around the world. And here we are using a $400,000 Sidewinder missile to shoot down this totally innocuous thing.
It's a hilarious example of government run amok, where they miss a problem and then they overcompensate by...
Cleaning everything out.
Now, President Biden said in a press conference: yeah, we need to clarify these rules and do a better job of coming up with a system for identifying what's innocuous and what's not. But it's an example where they didn't have any procedures in place, so they just sort of careened around at random.
For a while, and it's only going to get worse. Oh, my. Getting back to your article, what do you mean by the power of no? Is that still part of consciousness?
Okay. That's, I think, one of the most interesting aspects of this, and it's something I'm going to explore more in the course: the role of volition in consciousness. Now, volition presents all sorts of conundrums or paradoxes when it comes to talking about causality and cause and effect, because typically people say: well, if the universe is ruled by cause and effect, you can't have volition; your choices would have to be ruled by cause and effect.
But I make a point about the tremendous biological function, the crucial cognitive function, performed by volition when it comes to human reasoning. I mentioned before how ChatGPT, when you give it the wrong prompts and start running it off the rails, goes like a stone rolling downhill. Partly that's because it has no motivation, but it's also because it has no choice as to where it goes.
I think all of us have found that we have an idea that seems plausible, seems like it makes sense, and we start thinking about it, and then at some point we realize: you know what? This isn't really making sense. I've gone off on the wrong track. And at some point we're able to say no, that's not really carrying me in the right direction, let me get back on track. We're able to say no to an idea or to a chain of reasoning.
True.
And that is so crucial for being able to keep yourself oriented to reality. One of the things you can observe, if you've ever seen a conspiracy theorist coming up with his ideas or his rationalizations, is that conspiracy theorists have abandoned that power of no, that ability to say no to themselves. So they'll say: well, look, put two and two together, follow the trail of breadcrumbs.
They like to talk about following the trail of breadcrumbs, following the clues. And they'll follow a trail of clues off to completely nonsensical conclusions: oh, well, you can see now it's a secret conspiracy, and it's the Freemasons and the Illuminati working to hide the evidence of such and such. They come to these really bizarre, very implausible conclusions, theories for which there's really no evidence.
But they'll do it because they start following this little chain of reasoning, and they keep following it and following it without being able to stop at some point, put it to the test of reality, and say: wait a minute, I've followed a wrong trail, I've gone off the wrong track. You'll notice a similar thing is characteristic of schizophrenics.
These are people in whom that ability to monitor yourself and control your chain of reasoning has actually, physically, been broken. We don't know the exact mechanisms, but something has gone wrong in the brain, and they can't stop themselves from making these long chains of nonsensical or tangential associations. They'll follow that off into spinning weird and wild castles in the air, delusional theories, because the ability to say no, to stop the chain of reasoning, has in some way been broken. And that's what I mean by the role of volition: being able to choose, I'm going to follow this path of reasoning and not that path, is crucial to keeping yourself on track and connected to reality. And that's one of the big things that machines don't have.
And that's why I think this sort of overly apocalyptic view, that the machines are going to rise up and rebel against us and control us, is wrong. The machines aren't going to have volition. They're not going to have the power of choice. They won't want anything. They won't be able to choose to do anything. An AI program can go off the rails; we're seeing that with some of these chatbots. But it goes off the rails
not because it has some malevolent will behind it, but because the programmer messed up and didn't program it properly or didn't put in enough guardrails, or because somebody is trying to test it and poke it and see how they can get it to fail.
Well, that's probably why, I mean, they didn't do that. They forgot the quality control aspect of it.
Actually, I think what they're doing by releasing something like ChatGPT out there is saying: okay, we've tested it; now we're going to turn it over to the whole of the Internet to go and see how they can make this thing fail. And that's going to give us way more information on the failure rates and the failure modes of this thing.
Well, that's probably a pretty good idea then.
Yeah. And I think that will actually lead to progress, because the ideal here, the positive side of this, is to get the Star Trek computer, right? We've all watched Star Trek, where Captain Kirk or Captain Picard says: computer, tell me about such and such. And the computer almost immediately provides an answer that is clear and concise and correct. Often it's a very complex knowledge-retrieval kind of task.
And that's the ideal we've been working toward with Google search engines and all that sort of thing: the idea that you can just say, hey, Google, what was the Polish-Lithuanian Commonwealth? And suddenly, poof, up comes a concise description of the history of the Polish-Lithuanian Commonwealth. That's one of my favorite little historical tidbits, because it's very relevant today: it included Ukraine, and it was united against Russia.
Anyway, basically what we're doing right now in Eastern Europe is reviving the Polish-Lithuanian Commonwealth, okay? But it was also an early sort of quasi-democratic or federal system: they had an elected monarch, and they had a legislature of sorts, very early on, in the late Middle Ages. A very fascinating history. But the idea is that you could go to Google and ask for that.
And the ideal is that you have this AI system where you can just ask a question, and it will give you a concise summary, and it will be correct. Right now it will give you a concise summary, but it may not be correct.
Okay, yeah. Well, we talked about people's inability to say no, if you will. What is the actual potential of machines assisting humans? I mean, we've also touched on the ultimate goal, like the Star Trek computer. What did Ayn Rand say about their potential?
Yeah, well, I did a little takeoff on something she said, not about artificial intelligence, but about machines like motors and engines and circuits. She called them a frozen form of human intelligence. And I think that's an interesting analogy.
A machine is a frozen form of human intelligence, meaning some person figured out the principles involved, figured out how to achieve a certain task, and then froze that knowledge in the form of a machine that is capable of performing that task. Somebody has done the thinking, and the product of that thinking has been frozen into the form of a mechanical device that will perform a certain task.
Now, I describe artificial intelligence, the promise of it, as being human intelligence in liquid form, which is a more subtle metaphor. A liquid can conform itself to the shape of whatever container you put it in. Artificial intelligence can alter itself to whatever task you want it to do. The analogy I use here is that if you're trying to solve a task by creating automation in its current form, you create a machine in order to make something or do something.
And if you want to do something a little different, you have to go back and redesign the machine. Right? So a human being has to come back and say: well, okay, how do I change this machine to make two divots? When this widget is going through a press, how do I make two grooves in it instead of one?
How do I redesign the machine to do that? Ultimately, the idea is that instead of having a human do that, you'll be able to say to an artificial intelligence, look, redesign the machine to do that, and it will redesign itself to do it. And where that's closest to fruition right now, which I think is very interesting, is as a tool for programmers.
So ChatGPT is associated with something called Copilot, a tool for programmers where a programmer can basically say: design me a program, or a subroutine, to perform a certain task. And using what's already known about how these programs work and what accomplishes what, the AI system will go and create code that does that. Somebody said the promise of this is that in 2023, the most common programming language will be English.
So the idea is that you can say to a computer, in plain English, make me a program that will do X, and the AI will go make you a program that does that. And apparently they have a version of this that works quite well. They've found that programmers can do twice as much work in the same amount of time using it, because it automates the creation of certain parts of the program.
Now, you as the programmer then have to exercise your intelligence. You have to test it. You have to make sure it's done the right thing, that it actually does what you want it to do, that you got the right result. In the same way that if you go to Google or to ChatGPT and get a result, you have to check to make sure you got the right result.
But the idea is that by taking a lot of routine thinking work and automatizing it, you as the programmer can accomplish a whole lot more. You can do twice as much work, or with future versions maybe three times or ten times as much work, with the same amount of effort. I think that's the huge promise.
That's great. That does sound great. Martin, do you want to take over with your call to action?
Yeah, thanks for that. So the call to action is to go to podcastapps.com. That's a new domain that Adam Curry and Dave Jones of the Podcasting 2.0 initiative have got, and there you'll find a list of different podcast apps, for example Fountain, where we recently got a comment on a clip that I did on Rand's Day. He boosted the podcast with 22,195 satoshis, which, if you convert that into fiat currency, is about $50.
He commented on that clip and asked for suggestions of more books by Rand. He sent ducks in a row, around $0.50 in fiat currency. And then he says: Happy belated Rand's Day to you both. What are your personal favorite books by Rand? I have read both The Fountainhead and Atlas Shrugged. Any other recommendations? Mere Mortals Podcast. So that's his digital telegram with satoshis, and the question.
So, Blair, should we let Rob answer that directly as well?
Well, if he wants to chime in, sure. But I would just say: read the other two novels, Anthem and We the Living.
I'm reading your mind.
As far as nonfiction, start with The Virtue of Selfishness. What do you say, Rob?
Well, I've got a recommendation. One of the things that happened is that a lot of the nonfiction articles she did, philosophical articles, were gathered together in various anthologies.
That's true.
One of the last ones done during her lifetime was called Philosophy: Who Needs It. And notice there's no question mark there; it's a statement, not a question. She's telling you she's going to tell you who needs it. That's the collection in which the article I based the course on, Causality Versus Duty, was published. It has some of her most interesting philosophical writing, and it's a big favorite of mine and one I would recommend.
Great. And I will also say that in the period since we last published an episode, we got two more supporters, users of Fountain, and also a guy called Yastru who has streamed satoshis in the past. So you can listen to podcasts on Fountain and stream your own satoshis, saying, for instance, I want to stream $0.10 every minute to this podcast, making that choice and getting that effect, to support the podcasters.
And then also a fellow Objectivist and my friend here, Roland Horvath, originally from Hungary and now in Spain, also sent some streaming satoshis. So thanks to all for that.
Very good, Robert. I know we have to wrap up. So once again, it was a great pleasure having you on, and thanks for manning the foxhole.
Yes, thank you very much, Rob.
It's a pleasure talking to you.
Thanks.
Thanks, man.
Thanks for now. bye.