Welcome to Nontrivial. I'm your host, Sean McLaren. In this episode I talk about facts and logic: when they work and when they don't. I think this is a really important topic, particularly nowadays, as we get into conversations around politics and science and religion and social issues. It's very challenging to work towards a resolution when we all have our own worldviews. We have our opinions, and it's not always easy to find common ground.
So I want to cover the power of logic and how that can help us, particularly how we can structure arguments, how to know when they're going well and when they're not, and how to use that to compare what two people are saying. But I also want to cover the severe limitations of logic: where it falls short, how it runs into a wall, and why that is.
What is the mechanism behind logic falling short? I'll give examples of real arguments and talk about how to work towards a resolution, how to balance the power of logic with a deep appreciation for its limitations. We've got a lot of material to cover, so let's get started. So this episode is about facts, about logic, about how to structure arguments. I think it's really important to go over this, because people are getting into debates and arguments all the time.
And especially if we look at a lot of current events, it seems like it's really hard to ground our conversations in any kind of structure, any kind of sense of how we know whether a good argument or a bad argument is being made. How do we compare them? And is that enough? Is rationality and logic the full picture, or is there a kind of contextualization that happens around conversations that makes it difficult to use things like logic?
What are its limitations? So in this episode we're going to take a look at facts and logic, and this is how I'm going to do it. In the first part, we'll take a look at the power of logic, more specifically how to structure an argument, how we know if we're making a good argument or a bad argument, and we'll take a look at some of the fallacies that arise. I want to give a kind of utilitarian approach or framework that people can use.
To have a rational approach in their lives, to use logic. What I mean by that is I don't want to have a big academic conversation about what logic is in all its different facets and get caught up in all that jargon, because quite frankly that's not useful. It might be interesting to the academic, and it is interesting actually, if you go deeper into the realm of logic. Go read Wikipedia articles, go read books on it. It is interesting.
It's a field that exists for a reason. It falls under the overarching field of epistemology, which is: how do you know what you know? And so I think it's important, but that's beyond the scope of what I'm going to talk about in this episode, and it's not really that useful to everyday people. I think that's why a lot of people leave logic on the back burner, or they just don't use it.
Because who wants to get into all that esoteric, multifaceted, rule-based approach to understanding how people should have conversations? That's not fun. That's not interesting. And who's going to really make that time investment? But that doesn't mean you should disregard logic altogether. There is a real power to it, and I think it's important that we understand that power. So here's how I'm going to do this.
I'm going to begin this episode with a look at what logic is, or more specifically, how to structure arguments: why that's important, how to do it in a way you can use every day, and how to have a sense of when your arguments are good and when they're bad, or when someone else's are. That gives you a basis of comparison between the different claims that people make.
To ground that in reason, to ground that in logic. Then in the second part I'm going to look at the limitations of logic, and this is at least as important as the topic of logic itself. Where does logic fall short? Where do facts fall short? How do the contextualization of facts and the interpretations of the narratives we construct relate to reality? How close do those sit to reality, and how do you know how close they sit? When does it make sense to use logic, and when does it make sense not to? Then in the third part we'll take a look at actual debates or conversations. I'll use a few examples of arguments or debates that people might get into, and we'll take a look at how logic can help with them, and then how logic is limited. Then in the fourth part I'll wrap it all up by looking at a resolution. How can we strike a balance between
anchoring our conversations in some kind of logical framework, while also appreciating its limitations? Finding a way to strike a middle ground: logically stripping away a lot of the context, and then adding some of that context back in to see where the differences are. And really, what we're working towards is not something where we agree on every little piece. What we want to do is work towards a mutual respect and an understanding of why someone might have a different opinion than you: to know what it is we agree on, what it is we don't agree on, and hopefully move forward from that point. Because if there is a way to get a resolution, then maybe we can come up with policies or guidelines, or at least have a mutual respect for each other going forward.
So that's what we're going to do: four main pieces of the episode. Let's jump into the first part, which is to understand the framework of logic. I'm calling it a framework in the sense of: what are its main pieces, and how can you use it in everyday life? OK, so what is logic, and what are facts? Let me just give you a quick definition of a fact.
And then I'll pull away from that, focus on the logic, and then talk about how facts get folded into the structure of logic going forward, because we end up using facts, or trying to use them, in the arguments we structure. So bear with me for a second. A fact: look it up in the dictionary and you're going to get a definition something like "a thing that is known or proved to be true." A more casual definition might be: a piece of information used as evidence. A fact is a piece of information used as evidence. Now, that word "evidence" is something I'm going to unpack throughout this episode, because it's got a lot of uncertainty around it. What does it mean for something to be a piece of evidence? How powerful is it? How well can you use it? I think this gets used incorrectly a lot by society at large, and even by so-called experts within their fields.
They might take something at face value as evidence without questioning it, but is that really supporting their overall argument? So a fundamental piece of how we structure arguments is this idea of a fact. And we hear this word through media, particularly in the last three, four, five years: what is a fact, what is false, fake news, and all this kind of stuff. So it's definitely relevant, and we want to think about what it means to use a fact.
But let's take a look at logic as a framework itself, and then we'll see how we fold facts into that. So logic has a lot of different areas, and as I said earlier, I don't want to take an academic approach to understanding all the different pieces. There are logical forms, there's syntax and semantics, there are formal proofs. There are types of inference, there are paradoxes and fallacies, and of course you can break reasoning down into deductive reasoning and inductive reasoning, and there are these different forms. Again, I'm not going to go through all that. I don't think you really want an episode on that; maybe you do, but you can go read all that kind of stuff. There's not a lot of utility in picking conversation apart at that level of detail, right?
We want something that has utility, something we can use as an anchor to our conversations in everyday life. So if we think about logic in that sense, it really comes down to this idea that people are making grand conclusions all the time, right? Big statements about things. It might be about the climate. It might be about politics. It might be something that's not so charged, something that maybe doesn't get people so worked up. Maybe it's just your understanding of what makes a good chocolate cake. It really could be anything, but we do this all the time. We make these statements, and a lot of times we don't necessarily back them up, right? We take them for granted, or maybe we argued about them back in the day and they just became part of our worldview, and now we don't really think to defend them anymore.
We just make these statements. Maybe we do this on social media. Maybe we do this with friends. Maybe we're going home for Thanksgiving and talking politics with our family, for better or worse. And we're making these statements as if they're fact, and then we get into these arguments, and of course we know how that goes. So how do we step back from that a bit, objectively, and ask: what is it we're doing? We're making these statements.
Or maybe I should say: what is it we should be doing? If you think about the big statements you make, you can think of those as conclusions, right? A conclusion that would otherwise be part of a more structured argument. So if I say "we're releasing too much CO2 into the atmosphere," well, I've basically just stated a conclusion. Why don't I back that up? How would I back that up?
You'd want a more structured approach, so that if somebody challenges it, they'd have a way of picking apart and questioning what you said. So let's take a look at the structure of an argument, because that's really the approach to logic I want to take in this episode. The structure of an argument is the relations that lead to the acceptance of your conclusion on the basis of a set of premises, which are the other statements you make. Those other statements and your conclusion are sometimes called propositions, but let's not get caught up in the jargon. The point is, if you're making a grand claim about something, any claim,
if you want to back it up, then you have other statements that come before that conclusion. Those other statements are called premises. So in conversation, when somebody makes a claim that sounds like a conclusion, if you want to question it, it would be right to ask: well, what are your premises? How are you backing up that conclusion?
Frame what you're saying in terms of the structure of an argument: an argument has one or more premises that lead to a conclusion. So I'll give two examples. One is an example of deductive reasoning and the other is an example of inductive reasoning. You probably already know these terms, and if you don't, you don't have to walk away with them; that's not really what's important. Deductive and inductive are just two different ways of forming an argument. In deductive, we go from something general and narrow it down to something specific. Inductive is the inverse of that: we start with something very specific and then generalize out to something broader. And we do this all the time.
Anytime somebody is actually structuring an argument, chances are they're doing something deductive or inductive, so let's give an example so we can look at the structure of an argument itself. A deductive argument might go something like this: all men are mortal; Harold is a man; therefore Harold is mortal. You started with "all men are mortal," this big set of possibilities, and you said Harold is a man, so he's an instance of the thing that's in that set. And then you said: therefore, Harold is mortal. So you can see how we're taking a general statement at the beginning, "all men are mortal," with another premise on top of that, "Harold is a man," and based on those two you can say: therefore, Harold is mortal. That would be a deductive argument. Now let's give an example of an inductive argument. Let's say I say: all the left-handed people I know use left-handed scissors; therefore, all left-handed people use left-handed scissors. That's an inductive argument. I'm saying that out of everything I observe when it comes to left-handed people, all the ones I know use left-handed scissors. That's a premise. It's a statement, not my big conclusion, but it's the statement I'm making, and I'm going to use it to back up the big conclusion, which is that all left-handed people use left-handed scissors.
So based on these two forms, deductive and inductive reasoning, you can probably imagine that people are doing things like this all the time when they're forming an argument. In the deductive case, I'm starting with something broad: all men are mortal; Harold is a man. It's a contrived example, right? But in terms of the structure, "therefore Harold is mortal" takes something broad, narrows it down, and draws a conclusion out of it. In the inductive case I'm saying: here is everything I saw in life with respect to this particular topic, left-handed people, and I generalize that out to something broader. You go from a small sample and you make a generalization about a whole population.
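To make those two shapes concrete, here's a rough sketch in Python. This is purely an illustration; the names and sets are invented, not from any real data:

```python
# A toy model of the two argument forms using Python sets.
# All names and sets here are invented for illustration.

# Deductive: general -> specific.
mortals = {"harold", "socrates", "hypatia"}  # premise 1 says every man is in here
men = {"harold", "socrates"}

assert men.issubset(mortals)   # premise 1: all men are mortal
assert "harold" in men         # premise 2: Harold is a man
print("harold" in mortals)     # conclusion: therefore Harold is mortal -> True, guaranteed

# Inductive: specific -> general.
# Premise: every left-handed person I happen to know uses left-handed scissors.
my_lefties = {"ana": "left-handed scissors", "bo": "left-handed scissors"}
premise = all(tool == "left-handed scissors" for tool in my_lefties.values())
print(premise)  # True for my small sample...
# ...but the conclusion "ALL left-handed people use left-handed scissors"
# reaches beyond the data: the code can verify the sample, not the generalization.
```

The deductive conclusion is forced by set membership; the inductive one is only a bet that the pattern in a small sample extends to everyone.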
Again, I don't want to get caught up in when you should use deductive and when you should use inductive; we'll talk a little more about that in the second section, where we look at the limitations of logic. I really just want to look at the structure. You can see that you have one or more statements at the beginning, and then you lead to a conclusion. That is the structure of an argument, whether it's deductive or inductive, regardless of how you're doing it, if you're structuring what you say.
If you're backing up this grand statement you're making, about politics, about the climate, about chocolate cake, whatever it is, then that's how you're structuring it. You're taking one or more initial statements that are supposed to support the big statement you're making as a conclusion. That is really the anchor we should have when we approach conversations, if somebody says something and you want to challenge them on it, or if you just want to go in making a statement yourself. Think about the structure of it. You don't have to get into all the different aspects of logic and understand all the ins and outs, but think about how you're backing up what you're saying. OK, so we understand the structure of an argument: it's got one or more premises and it has a conclusion. Now we can frame it. But how do you know if it's going well or not?
How do you know if you or someone else is making a good argument or a bad argument? We've got some language around this, so I'll jump into that now. Look, it's going to get a little bit jargony, but again, don't get caught up on all the terms I'm using, because I'll pull back from it later and give you the take-home message, what I think you should remember going forward.
And that's a pared-down version of everything I'm going to say in the next few minutes. But I want to give examples to flesh out this ecosystem of arguing, so that you can recognize that these things pop up in everyday life, and build a conceptual understanding or an intuition around the different types of arguments you run into, what might be good about them and what might not be. So let's go ahead and start.
I mentioned earlier that we have deductive and inductive arguments. I'm going to branch those out and look at the good and bad versions of each. Let's start with deductive arguments; that was going from the general down to the specific. Deductive arguments can be valid or invalid. Valid, as you can imagine, means there's something good about it; invalid, there's something wrong with it. Valid further branches out into sound and unsound.
So my first example is going to be a deductive argument that is valid and sound, kind of the best deductive argument you could possibly make. Here's my example. Somebody makes a big claim. We might be debating something, and they say: aspirin causes a change in a person's physiology or psychology. Aspirin, the drug, causes a change in a person's physiology or psychology. I don't know why we're talking about this, or what the debate is about, but let's just say they make that claim. So, going back, we said we want to force ourselves to structure an argument. Somebody said that to me, so I go back to them and say: well, what are your premises? For whatever reason I care about what you're saying, and I want you to back up your statement. What is your argument? Put it into the structure where you have premises leading to this conclusion.
Back up what you're saying. So here's the full argument. They might say: OK, one, all drugs cause a change in a person's physiology or psychology. Two, aspirin is a drug. Therefore, aspirin causes a change in a person's physiology or psychology. That is a valid, sound argument. First let's deal with the validity. It's a valid argument because if the premises are true, then the conclusion must be true. If we accept the first two premises, that all drugs cause a change in a person's physiology or psychology, and that aspirin is a drug, then we should accept that aspirin causes a change in a person's physiology or psychology. As long as the premises are true, the conclusion must be true. So that's a valid argument, regardless of anything else. And it's a sound, valid argument because the premises actually are true: most of us would accept that all drugs do cause a change in a person's physiology or psychology, and that aspirin is a drug. The premises are true, and therefore the argument really does lead to that conclusion. OK, now, you can also have a valid, unsound argument: there's something good about it and something not good about it. Here's an example. Somebody makes another big statement, like: there is a cure for COVID-19, today, right now. OK, so can you put that into an argument for me? What are your premises? Back up what you're saying. And they go: OK.
Well, one: if a COVID-19 vaccine exists, then there is a cure for COVID-19. OK, I'll accept that. Two: there is a COVID-19 vaccine. Therefore, there is a cure for COVID-19. So what's wrong here? It should be pretty obvious. COVID-19 is the virus we're dealing with now in the pandemic. The first premise they gave me was: if a vaccine exists, then there's a cure. OK, I'll accept that; I'm pretty sure we can think of a vaccine as a cure, or something that seems to help the situation. So let's say we accept that premise. But two, they said there is a COVID-19 vaccine. Well, no, I don't think that's a fact. I don't know that there's a vaccine; maybe you know something I don't, but you'd have to give me further evidence to support that one. So this argument might sound kind of dumb, because you've got that weird second premise in there.
But it still is valid, right? It's a valid argument, because if the premises were true, the conclusion would be true. If a COVID-19 vaccine exists, then there is a cure, correct? There is a COVID-19 vaccine: if you accepted that, let's say it were true, maybe two years from now, then you could say there is a cure for COVID-19. So it's a valid argument, even though the second premise, as far as we know, is false. It's valid because it's structured correctly.
If those premises are true, then your conclusion is true, but it's unsound, because you used a premise that isn't true. And it's this kind of trickiness that I'm going to talk about a bit later: somebody can structure an argument correctly, and it might even sound really good, but it could still be unsound. There could still be something about a premise that is false.
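Validity, as distinct from soundness, is purely about structure, so you can even check it mechanically. Here's a small sketch in Python (my own illustration, not anything from formal logic software) that brute-forces every truth assignment: a form is valid exactly when no assignment makes all the premises true and the conclusion false:

```python
# Brute-force validity check for two-variable argument forms.
# A form is VALID if no truth assignment makes every premise true
# while the conclusion is false. Illustrative sketch only.
from itertools import product

def is_valid(premises, conclusion):
    """premises and conclusion are functions of (p, q) returning bool."""
    for p, q in product([True, False], repeat=2):
        if all(prem(p, q) for prem in premises) and not conclusion(p, q):
            return False  # counterexample found: premises true, conclusion false
    return True

# "If a vaccine exists (p), there is a cure (q); a vaccine exists; therefore a cure."
# This is the valid form, even if premise 2 happens to be false in reality.
print(is_valid([lambda p, q: (not p) or q,   # if p then q
                lambda p, q: p],             # p
               lambda p, q: q))              # therefore q  -> True

# Soundness is a separate, empirical question: the checker can't tell you
# whether a vaccine actually exists, only whether the structure holds.
```

That last comment is the whole point of valid-but-unsound: structure can be verified in isolation; the truth of the premises cannot.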
When we get into facts and some of the fuzziness, and into the limitations of logic, we'll talk about that in the second section. Right now I just want to flesh out these examples. So what have we done so far? We took a look at deductive arguments and said there are valid, sound ones, where everything is good: it's structured correctly and the premises are true. Then there's valid, unsound, where it's still structured correctly, but one of the premises is false, so it doesn't quite work out. Those are the two kinds of valid arguments. And then you have deductive invalid arguments, which are always unsound. Here's an example. One: everyone in the NRA supports gun ownership. Two: Joe supports gun ownership. Therefore, Joe is in the NRA. So somebody comes up to you and says, Joe's in the NRA. Oh yeah? Well, back it up. Well, one: everyone in the NRA supports gun ownership.
Yeah. And two: Joe supports gun ownership. OK. Therefore, Joe's in the NRA. Now, you should be able to tell what's wrong with that right away. The big conclusion, that Joe is in the NRA, is not really supported by the premises. And here's what could be tricky: let's say both premises are true. Everyone in the NRA supports gun ownership: true. Joe supports gun ownership: true. But that does not lead to the conclusion that Joe is in the NRA. Your first premise dealt with a broad category of people, people who support gun ownership, and you said Joe is in that broad category because he supports gun ownership. But then you placed Joe into a subset, and it may or may not be true that Joe is in the NRA, because not everybody who supports gun ownership is necessarily part of the NRA. And I just want to say right now that I'm not taking sides in any of these debates. So if any of these are triggering you, or you have strong opinions about them, just to be clear, and I probably should have set this up at the beginning of the episode: I'm not taking sides on any of these debates. I'm not trying to promote anything. I want to use somewhat charged examples, ones that might trigger people, just because it's
relevant; they're things that we hear. If I used nothing but contrived examples, like "all dogs are mammals" and blah blah blah, that's fine, you can understand the concept of logic, but you can't relate it to anything real. That's the reason I'm using these examples, but again, I'm not taking sides, regardless of how you may feel about some of these issues. And we're going to get into the limits of logic in the next section, so hopefully nobody's worked up at this point.
Let's do another example. This one is going to be even more charged, but again, we hear things like this. Maybe somebody says: everyone who supports Trump wears a red hat. And Joe wears a red hat. Therefore, Joe supports Trump. We can see what's wrong with that, but you can also tell you've probably heard arguments similar to this. Everyone who supports Trump wears a red hat: now, that may or may not be true (it's actually not true), but let's just say it was. Then you say: well, Joe is wearing a red hat, and therefore he must support Trump. We can see what's wrong with that. Even if those two premises were true, you're taking a broad category and putting someone into a subset of that category, and that's not valid. That is not a valid argument, so we say it's invalid, and it's also unsound. Now let's move on to the inductive arguments.
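The red-hat mistake can be shown with a single counterexample world. Here's a sketch in Python; the names and sets are invented purely for illustration:

```python
# One counterexample world is enough to show the red-hat form is invalid:
# both premises hold here, yet the conclusion fails. Names are invented.

trump_supporters = {"alice", "bob"}
red_hat_wearers = {"alice", "bob", "joe"}  # joe wears a red hat for other reasons

# Premise 1: everyone who supports Trump wears a red hat.
print(trump_supporters.issubset(red_hat_wearers))  # True in this world

# Premise 2: Joe wears a red hat.
print("joe" in red_hat_wearers)                    # True in this world

# Conclusion: Joe supports Trump?
print("joe" in trump_supporters)                   # False: the premises don't force it
```

Because even one world can make the premises true and the conclusion false, the form itself is invalid, no matter what the actual facts about hats turn out to be.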
With inductive, instead of calling them valid and invalid, we call them strong and weak, and the reason is that this is, as we'll see, a softer notion of what could be true and what might not be. So let me give you examples and we'll talk about it. Just as a valid argument can be sound or unsound, with inductive reasoning a strong argument can be cogent or uncogent. So let's do a strong, cogent argument.
This is the best you can get when dealing with inductive arguments. An example would be: most death penalties are given via correct convictions; Jim is on death row; therefore, Jim is probably guilty. Again, I don't have opinions about this and I'm not taking sides; I just want these to be relevant examples. You might get into a debate about something like this. So there's this grand statement at the end, that Jim is probably guilty, and you say: OK, well, back that up. And if the situation has to do with death row, they might say: well, one, most death penalties are given via correct convictions, as in the person really is guilty. Let's pretend that is a fact; I'm not saying it is, but let's say it's true. And two, let's say we know that Jim is on death row. So if those two premises are indeed true, then you're drawing the conclusion that Jim is probably guilty.
We would call that a strong, cogent argument, and it's the same kind of reasoning we did for the deductive case. Remember, the valid, sound argument was one that was stitched together correctly, as in: if the premises are true, then the conclusion is true. And we also said the premises are in fact true, so it's the best situation you can get in a deductive argument. Same type of thing here: if we assume those two premises are true, that most death penalties are given via correct convictions and that Jim is on death row, then it's very likely that Jim is guilty. If those premises are true, it's structured correctly, and if we also believe them to actually be true, then that's the strongest you can get. So what's different about this compared to the deductive valid, sound argument?
What's different about the strong, cogent case is that we're dealing with probability, because it's inductive: we're going from specific examples to a generalization, and when you generalize, you're going into a world you haven't actually been in yet. Most death penalties are given via correct convictions, let's say that's true. Jim is on death row, that's true. But then you're generalizing out; you're saying Jim is probably guilty. That's still a leap, and how big that leap is depends on just how true the premises are that you're buying into, or how much evidence you can use to support them. It's the world of probability; you can never get it exact. OK, so now let's do an example of a strong, uncogent argument. It's inductive, it's still strong, but there's something wrong with it; we call it uncogent. Here's my example.
Let's say the majority of planets out there in the universe are flat, and therefore the Earth is flat. Somebody tries to say that to you; maybe they're from the Flat Earth Society, trying to convince you of something. They say the Earth is flat, and I'm like, well, back it up, and they're like: well, the majority of planets in the universe are flat, and therefore the Earth is flat. Obviously pretty ridiculous, but that's considered a strong argument. Someone tells you that the majority of planets out there in the universe are flat, and therefore the Earth is flat: that's actually a strong argument, because it's still stitched together correctly. If the premise were true, then the Earth probably would be flat. But it's uncogent, because we don't accept that the majority of planets in the universe are flat; at least most of us shouldn't. Some of us might, but we shouldn't.
It's still in the realm of probability, though. We're not saying the Earth is flat; the conclusion is that the Earth is probably flat, and you're trying to back that up. But here they're just not backing it up well at all, because the premise they used, that most planets are flat, sounds pretty ridiculous. I don't think most people are going to accept that. So that would be an example of a strong but uncogent argument.
And then our last example would be the weak, uncogent argument. All weak arguments are uncogent, so this is the worst you can do for inductive. Let's say the argument is this. First of all, there's a big claim being made: all vapers have diseased lungs. A vaper is someone who smokes e-cigarettes. So let's say somebody's arguing with you about whether or not you should get into e-cigarettes, and they're like: well, I don't really think you should, because all vapers have diseased lungs. And you're like: well, can you back that up? Structure that into an argument. So they're like, OK. One: there are 50 million vapers in the US. OK. Two: three vapers selected at random in this study were found to have diseased lungs. OK. So, as a conclusion: probably all vapers have diseased lungs. Something's not quite right there, right?
It's a weak, uncogent argument. Let's say the first premise is true, that there are 50 million vapers in the US. OK, I accept that. Two, a study was done in which three vapers were selected at random (again, vapers are people) and they were found to have diseased lungs. OK, I accept that. But that doesn't lead to the conclusion that probably all vapers have diseased lungs, and that's because the premises are extremely weak. They do not back up the argument very well: out of 50 million people you took three, and then you generalized out to the entire population of vapers, from three people to 50 million. So that is a weak, uncogent argument, and it should be pretty obvious why. But you can see how messy it could get, which we'll talk about in the next section. Three vapers sounds ridiculous, but what if I took 20? Or 10,000?
How many vapers do there need to be to be representative of the big population that I'm talking about? Obviously this gets into statistics and the ability to take a representative sample, but that is not a cut-and-dried practice or approach, and so it can easily get messy. So let's do a quick recap of the words we've been using and of the examples of the arguments that I've been giving, and then, as I promised, we'll pull back and I'll give you kind of the take-home message.
But just to recap, with these examples we dealt with validity. So when we talked about deductive arguments, we said they are valid or invalid. If something is valid, then the argument takes a form that makes it impossible for the premises to be true and the conclusion nevertheless to be false. In other words, if you structured your argument such that if the premises are true, then indeed the conclusion is true, then that's a valid argument. But the premises don't have to be true. The premises don't have to be true for it to be a valid argument; it's just that if they are, then it will lead to a guaranteed true conclusion. OK. Soundness is where we break off from validity and say, OK, well, you could have a valid argument, but it could be sound or unsound. If it's sound, two things hold: one, it's logically valid, meaning it's structured correctly, based on what I just said, so if the premises are true, they lead to that conclusion; and two, the premises are indeed true. That's what it means to be sound: structured correctly, and the premises are actually true. So that's validity and soundness, and we gave examples of when it's sound and when it's unsound. And then for inductive arguments, we said they are strong or weak.
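In a rule-based setting like this, validity really is mechanical: you can enumerate every truth assignment and check that there is no case where all the premises are true and the conclusion is false. A small illustrative sketch; the argument forms are classic textbook examples, not ones from this episode:

```python
from itertools import product

def is_valid(premises, conclusion, variables):
    """Valid iff no assignment makes every premise true and the
    conclusion false. (Soundness, whether the premises are actually
    true in the real world, is exactly what a truth table can't check.)"""
    for values in product([True, False], repeat=len(variables)):
        env = dict(zip(variables, values))
        if all(p(env) for p in premises) and not conclusion(env):
            return False  # found a counterexample
    return True

# Modus ponens: "if P then Q; P; therefore Q" -- valid by form alone.
premises = [lambda e: (not e["P"]) or e["Q"],  # P implies Q
            lambda e: e["P"]]
conclusion = lambda e: e["Q"]
print(is_valid(premises, conclusion, ["P", "Q"]))   # True

# Affirming the consequent: "if P then Q; Q; therefore P" -- invalid.
bad_premises = [lambda e: (not e["P"]) or e["Q"],
                lambda e: e["Q"]]
bad_conclusion = lambda e: e["P"]
print(is_valid(bad_premises, bad_conclusion, ["P", "Q"]))  # False
```

Note that the checker says nothing about whether the premises are true; it only inspects the form, which is the validity/soundness split in miniature.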
It's a softer version of whether or not something is true, because it's the land of probability; you're getting into a world you haven't been to yet, you're generalizing out. If we assume the premises are true and the conclusion is probably true, OK, then that's a strong argument, right? It's strong if, as long as we assume those premises are true, the conclusion probably is true. It's a weak argument if that's not the case, right? If even assuming those premises to be true doesn't make the conclusion probably true. Strong we broke out into cogent and uncogent. An inductive argument is a strong, cogent argument if the argument is strong (we assume the premises are true and we get that the conclusion probably is true) and the premises are actually true. And it's uncogent if the premises are not actually true.
And then we gave an example of a weak inductive argument with the vapers, where you could have true premises (50 million vapers in the US, and three of them got selected and we did this study, OK), but then you reach a conclusion that just doesn't follow from that. There wasn't enough evidence, it wasn't strong enough, you really didn't back up what you were saying, it was a weak argument. OK, so as promised, I don't want you to get caught up in all the words: the strong, the weak, the cogent, the uncogent, the valid, the invalid, the sound, the unsound, all this. I mean, maybe you can take that with you if you do enough of them; it definitely makes sense as an anchor. But you know, as I promised, I want to kind of just pare this back to a take-home message, and that is this. We looked at all those examples, and we were looking at here is when it's going well, and here's when it's not going well.
Whether it was deductive or inductive, there was a common thread among all that, and it came down to whether or not you were accepting the premises. Right? It came down to: somebody made a big conclusion, you asked them to structure it as an argument, to back it up, to have one or more premises and then show me how that leads to your big conclusion, your big statement. And if it was going well, if we said, OK, that really is a good argument, then it had to do with accepting the premises that they were using, right? In the case of a deductive, valid, sound argument (not to use the words again), we gave examples where, OK, yeah, you stitched it together correctly, but I also accept the premises, right? Like, those really are true. So you have a good argument; it leads to the big statement or the big conclusion that you're making. And same thing with the inductive. You know, it's a softer form, because you're saying this is probably true, right? But you still have to back it up with premises. And if we accepted those premises as most likely true, as close to facts as you can get, then we would consider that to be a good argument. But if we didn't accept those premises, then the argument was not good. So it really comes down to the premises. It comes down to the statements that people are going to make to back up the big claim or the big conclusion that they're making. And that really is the take-home message.
Here it is: you have the structure of the argument, you have the premises leading to a conclusion, but if you or someone else is doing the argument well, then it's those premises that you believe to indeed be true. And this is what's going to kind of segue into the next section, because how do you know if the premises are actually true? And so this brings us back to the idea of facts, right? I introduced facts at the beginning; I gave that definition. You know, it's something that is taken as true, right? Or it's proven. Or the more casual definition would be a piece of information that you take as evidence for something. So this brings us back to facts, because what we do when we try to form premises is we're probably reaching into facts, right? That's usually how we try to structure the backup of our argument. If I'm making a big statement and I want to back it up, then why don't I back it up with facts, right?
And then stitch that together into an argument. So one of the first examples I gave was related to CO2. There could be some big statement that I'm making that says, you know, we release too much CO2 into the atmosphere. And then someone could say, OK, well, back that up. You know, structure an argument, give me your premises. And so I might say, well, one, 40 tons, excuse me, 40 billion tons, let's say, of CO2 are released into the atmosphere worldwide every year. 40 billion tons. And then two, CO2 is known to cause global warming. OK, so those can both be facts, right? Perfectly good facts: 40 billion tons released into the atmosphere every year, and CO2 does cause global warming. So then I could say, that backs up my big conclusion, which is we release too much CO2. But something is not sufficient here, right? And maybe you don't catch that, maybe you can't catch that.
I mean, if you just listen to it, it doesn't sound like that bad of an argument. We know 40 billion tons are released into the atmosphere, OK, and the thing that you're releasing into the atmosphere is causing something bad, OK, therefore we released too much of it into the atmosphere. Is that a good argument? Well, there's something missing here. Is 40 billion tons a lot? Now you might say, well, of course it's a lot, it's a huge number. Well, relative to what? What percentage does 40 billion tons make up of all the gases in the atmosphere? Is that actually a lot? How much CO2 is normally released, you know, by other processes? What level is considered normal, or how much would you need to cause damage? Just saying that 40 billion tons is released into the atmosphere and that the thing being released is bad is not really enough to therefore say we are definitely releasing too much CO2 into the atmosphere. It's not a good argument, right? It's just not.
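That contextualizing step is just arithmetic once you pick a denominator, and the choice of denominator is where arguments hide. A rough sketch with approximate public figures; these are order-of-magnitude numbers only, and the point is the habit, not the exact values:

```python
# Approximate figures; the comparison, not the precision, is the point.
ATMOSPHERE_MASS_KG = 5.1e18        # total mass of Earth's atmosphere
ANNUAL_CO2_EMISSIONS_KG = 4.0e13   # the "40 billion tonnes" in the example

share_of_all_gases = ANNUAL_CO2_EMISSIONS_KG / ATMOSPHERE_MASS_KG
print(f"share of the whole atmosphere: {share_of_all_gases:.1e}")

# A different denominator tells a very different story: compare against
# the CO2 already up there (~3.2e15 kg), not against all gases combined.
CO2_ALREADY_IN_ATMOSPHERE_KG = 3.2e15
share_of_existing_co2 = ANNUAL_CO2_EMISSIONS_KG / CO2_ALREADY_IN_ATMOSPHERE_KG
print(f"share of existing atmospheric CO2: {share_of_existing_co2:.1%}")
```

Same fact, two denominators: a few parts per million of the whole atmosphere, but on the order of a percent of the CO2 that's already there. That's why "is 40 billion tons a lot?" has no answer until you say relative to what.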
You can see how you had to contextualize the 40 billion tons, right? You had to put more context around that. Another example might be the food additive MSG, right, monosodium glutamate. I mean, you hear about this. Some people say it's not a big deal, and other people say it's really bad, don't eat it. And so someone might be making a big claim that you really should not eat MSG. And you say, well, you should back that up. And maybe one of their facts is that MSG has been shown to double the risk of cancer. Right, MSG has been shown to double the risk of cancer; that seems to back up the argument that you shouldn't consume MSG. But if you spend a little more time with it, you realize, well, wait a second. Doubled the risk to what? Right, what was the original risk? 'Cause if it was 0.01 and you doubled it, it goes to 0.02. Well, wait a second, is that even a big deal? I mean, that almost sounds negligible. What was the original risk to begin with? What did the doubling lead to? So you can imagine that these types of arguments are made all the time, if people are even making arguments, and they will, you know, spit out these facts, right? The risk is doubling, 40 billion tons of CO2. And they kind of trigger people, they lead to an emotional response, and they seem to back up the big claim that's being made.
But that's not necessarily the case. And so we're going to talk about the contextualization that's needed for facts, the uncertainty around the fact, how in some sense there is no such thing as a fact in the purest sense of the word, and how this is really what arguments come down to. It's all about those premises that back up the big conclusions or the statements that people are making, and how you almost never can really take those at face value. And it kind of depends on the situation.
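The MSG example comes down to relative versus absolute risk, and the arithmetic is worth seeing once. All the numbers here are hypothetical:

```python
def absolute_increase(baseline_risk: float, risk_ratio: float) -> float:
    """Turn a 'doubles the risk' style headline into the absolute
    change in risk, which is what actually matters to you."""
    return baseline_risk * risk_ratio - baseline_risk

# "Doubles the risk" on a tiny baseline: one extra case per 10,000.
print(absolute_increase(0.0001, 2.0))

# The same "doubling" on a large baseline is a very different claim.
print(absolute_increase(0.10, 2.0))
```

Both headlines say "doubled"; only the baseline tells you whether that's negligible or alarming.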
And we'll talk about that. So I just want to end off talking about fallacies, and then we'll get into the second section. So fallacies are errors that occur in reasoning, and they're common enough to be given a name, right? The common errors that people make in their reasoning. So it's not about, you know, the soundness or the validity so much; it's just these errors that people get caught up in, and they're called logical fallacies. I'm going to give some examples. They're kind of red flags that you should be on the lookout for. If you're not privy to logical fallacies, people can kind of do a number on you. They can work circles around you easily and be speaking to you in a way that sounds like they're making really good arguments, but they're not. And these fallacies occur all the time: in the news, in the media, and, you know, even in current scientific journals. I mean, people can get into these fallacies.
They might sound really convincing unless you know how to pick them out, so I'm going to give a few examples just to kind of cover this ecosystem of logic, and I recommend you go study some of these fallacies yourself. In fact, I'm going to have a link in the description section of the podcast. If I remember to put it there, it will pop open this kind of concept map that has basically all the logical fallacies mapped out, all the errors of reasoning that have a name to them, and you can click on one and it will give you a definition of that fallacy. But let's give a few examples now and see if you recognize them. So probably the most common one is called ad hominem. An ad hominem fallacy is a personal attack on someone's character rather than an attempt to address the actual issue.
OK. So you might be getting into a debate, an argument, and then someone says, well, you haven't held a steady job since the year 2010, right? You haven't had a job since 2010. And they're trying to kind of use that, again, almost as a premise. They're trying to use it to back up whatever their big argument is, whatever the big conclusion they're making. But that can't possibly weigh in on the validity of their argument or the strength of their argument, because regardless of what it is, you not having had a steady job since 2010, that's an attack on your character, right? That shouldn't have anything to do with the validity or strength of the issue at hand. And so you can imagine how common this is. And again, you see this in the media all the time, and, you know, you probably do it, I probably do it. Sometimes we get caught up, maybe getting really angry, and we get kind of flustered, which probably means we're losing the debate. But if we start getting kind of flustered, we might throw in an ad hominem where we end up attacking the person, maybe in a subtle way, but it's not really addressing anything; it's deflecting from the actual issue.
So: well, you haven't had a job since 2010, or you're the type of person who does this, or we saw you do this and this means this about you. You're going after the character; it has nothing to do with the validity of the argument, or with its strength. Another example would be a straw man. So this is where you basically misconstrue your opponent's argument. You try to reframe it. You reword what the other person is supposedly saying so that you give yourself something easy to knock down. OK, so let's do an example. Say you're home for Thanksgiving, you're talking politics, talking some issues with the parents, and, you know, somebody pipes up, and Medicare ends up being the issue, and you talk about Medicare for All: everybody should have access to healthcare, yeah, that idea.
You know the whole argument, and then somebody else kind of gets flustered and pops up and says, you know, it's not realistic, you're going to take millions of Americans off their health care plan, right? That's what you're going to do; Medicare for All is going to have millions of Americans taken off their health care plan. OK. Now, if you just heard that, you might be like, oh, well, actually, yeah, that's bad, right? Millions of Americans are going to lose their current health care plan; that's bad. OK, but it's like, well, wait a second. Is that a proper reframing of the argument, or is that totally misconstruing what's being said? Right, those in favor of Medicare for All are really talking about replacing it with a single-payer option, right? So there is maybe this kind of momentary piece where Americans would lose their current health care plan, but then it gets replaced with a single-payer option, so it kind of comes back in. Now again, I'm not taking sides here. I'm not saying, you know, we should be for Medicare for All or against it, but this would be a common example of a straw man argument. In fact, it was a specific example from politics, and I won't mention names, but somebody did scream out something like that.
150 million Americans are going to be taken off their health care, and that's what you're saying. But the person in favor of Medicare for All wasn't really saying that. So that's an example where you're trying to reword someone's argument in a way that makes it easier for you to knock it down, right? Because if I have to pick apart Medicare itself and talk about the replacement of current options with single-payer options, then I really have to know Medicare. I really have to know how it works and explicitly go after, you know, what I think is wrong with it. But if I just reword it as you're going to take millions of Americans off their current health care, that's emotionally triggering, that's easier for me to knock down. That's easier for me to kind of get support from a lot of people, because they're going to be emotionally triggered by that. So anyways, we know all kinds of examples like that. It's called the straw man argument.
You're misconstruing the opponent's argument in a way that makes it easier for you to go ahead and knock it down. And so there are all kinds of logical fallacies, these errors in reasoning that people fall into. We do it ourselves, and we hear other people do it, but there are so many that you kind of have to spend a bit of time maybe studying them so that you know them when they pop out. Although I would argue a lot of us kind of know them anyway, for the main ones. There's a bit of an intuition around it. Like the straw man: as soon as somebody screams out something like that, you can probably tell, wait a second, you've rephrased the argument, that's not quite right. If somebody attacks someone's character, it's somewhat obvious. But other ones are a little more subtle, so it's good to build some intuition around the fallacies.
Some more common ones are appeals to ignorance, false dichotomies, slippery slope fallacies, circular arguments, hasty generalizations, red herrings (probably sounds familiar), the causal fallacy, appeals to authority, and on and on. You know, the appeal to authority comes up all the time. You might be arguing about something and they say, well, so-and-so, professor at Harvard, said this. As soon as they say that, right, that's an appeal to authority. And whatever it is you're arguing about, OK, unless it's directly related to somebody being a professor at Harvard, their position has nothing to do with the validity of the argument. You can't call upon someone's authority like that. The argument should stand on its own; its validity or its strength should stand on its own based on the way the argument has been structured and how well the premises are supporting the conclusion. So that appeal to authority comes up all the time. I'm going to leave it at that. I would recommend you go check out that link that I gave you in the description. Go look at some of those fallacies, go Google them yourself. Building intuition on how to catch those errors in reasoning will help you argue better. It'll help you call out other people who are trying, consciously or not, to basically work around you as opposed to dealing with the real issue.
OK, so that was the overview of logic itself, and more specifically the structure of an argument, right, premises leading to conclusions, when it's going well, when it's going badly. We kind of broke that out into all the different ways that can happen, and then we took a look at a few fallacies at the end. So that's logic, and hopefully I was able to give you an impression of its power. I mean, we just did a few examples, but you can see that just by structuring a conversation, or specifically an argument, around this structure of premises leading to conclusions, it gives us a way to assess whether it's going well or not. And it's something you can get better at with time, as you start to think, OK, well, you know, this guy's making a big conclusion.
I don't hear any premises, so I'm going to ask him to back it up. And then he tries to back it up with premises, and maybe you can challenge those premises. Maybe you don't think those are really facts, or maybe you realize they are facts and go, OK, I see what you're saying, yeah, that is a good argument. It's good to just put that on people. And again, it's not about having to flesh out all the aspects of deductive and inductive reasoning and pick apart the soundness and all that. You know, you just want a bit of intuition around the structure. I mean, really, what it is is just saying: give me your premises that back up what you're saying, and then I can reason about it, I can reason with you. We can try to compare what you're saying to what I'm saying. Maybe someone has stronger premises and therefore a stronger conclusion. Or maybe there are some fallacies there that the other person is making, that you might even be making, and so you can try to strengthen your argument by trying to dodge those fallacies. So logic has a purpose. OK, it has a real power to it. It's a good way to frame your conversations, to frame your debates, so that you are supporting the kinds of grand statements we make all the time. OK, so hopefully I've given you that impression, and what I want to do now is go into the limitations.
So we talked about the power of logic: why it's there, why it's a worthy area of study, why it's good to have an intuition around it, why it's good to bring to your conversations. But it is not the full picture. Now, you might think it is. You know, nobody would blame you, in a way, for thinking, well, logic seems to have everything covered, right? I mean, if somebody is making a grand statement, then you expect them to back it up. And if they do back it up, then they're going to have these premises leading to a conclusion. Then we can frame it, whether it's deductive or inductive. We can take a look at whether it's valid or invalid, whether it's sound or unsound, whether it's strong or weak, whether it's cogent or uncogent. You know, we have these kind of rules to pick apart really any argument and say whether it's good or not. And it's true, we do have that, and we have the fallacies on top of that. And again, go look at that map that I put in that link. You know, there are many, many fallacies. So if you really get to know the rules of logic and you really get to know the different fallacies, these errors of reasoning, the coverage is seemingly complete, meaning it doesn't seem to matter what anyone says: we can apply the rules of logic. We could all be Vulcans, right?
If you know the Vulcans from Star Trek: we could have a whole society of logic. I mean, why would that not be the dream? So that anything that ever gets said, at any time, could be picked apart with these rules, even just the simple ones that we covered in today's episode. We could understand things as a structure, from premises moving to conclusions, and we could see whether it's good or whether it's bad. We could compare arguments and look for errors in reasoning, and we could stack them up against each other whenever there's a debate, and then there would be a winner. And why don't we? And that's not just casual debates with your parents at Thanksgiving, or your friend, or people on social media. I mean, why don't we bring that into the government? Why don't we decide on policies through logic, and form guidelines, you know, in science and other areas of academia, and outside of academia, you know, social programs? Why don't we just base society, and everything we do in it, on logic? It's really, really tempting. It's really, really tempting because at first blush it seems like logic kind of has it covered.
But that's not quite right, and I think we all know that's not quite right, even if we might not know why. That's the part I'm going to get into now in the episode: what are the limitations of logic, where does it hit a wall? It seems like if we follow those rules we're good to go, but we don't see logic play out that well in reality, do we? Now, we could probably all be better at it, and we can anchor conversations around it. I think that would be a good baseline. But we know it hits this wall. It doesn't really seem to be the technique or the approach that we ultimately rely on to resolve issues. We want it to play a role, but it's not the full picture. So why is that? That's what I want to do now in the second part of this episode: take a look at why logic runs into a wall, why it's an incomplete picture, what its limitations are, and where those limitations come from. So that in the third part, when we look at real debates, you know, a few examples of some real arguments, we can take both sides, we can look at the power of logic, we can look at its limitations, and see how those kind of butt heads. And then in the fourth part, when we look at the resolution of everything, we can use that to see how to meet in the middle. We don't want to toss logic out.
We don't want to rely only on, you know, emotion and context, but we don't want to completely invest in logic either, because we don't think that's realistic. So let's take a look at its limitations now. I'm going to go back to the definition of fact; I brought that up at the beginning of the episode. We had that kind of dictionary definition: a thing that is known or proved to be true. Then I gave a casual definition: a piece of information used as evidence. I promised I was going to unpack that word, evidence, throughout the episode. I'm going to do that a lot in this section, 'cause it's really, really critical to understand what is wrong with the definition of evidence, or how it gets used incorrectly, or why it's not as strong as people tend to think. So we'll do that throughout this episode. But let's look at that definition of fact again.
It's a thing that is known or proved to be true. So there are two words in there that are problematic, known and proved, because it suggests that you really can, in totality, know something, or that you can prove something. Now, if we take a more axiomatic, rule-based system, like logic or mathematics, then you could say, OK, you can prove something, right? We have mathematical proofs; that's something that mathematicians do all the time. But that's a very constricted, restrained, rule-based world. In the real world, outside academia, outside these kind of sterile, exact environments, things are much messier. There are way too many interacting pieces. You cannot totally know or prove something. There's always going to be a level of uncertainty, and it's that always-existing level of uncertainty that I'm going to be talking about in this part of the episode. It's really critical to understand how fundamental that so-called opacity is, that you cannot see past a certain point. There's always a level of uncertainty. So in real-world situations, nontrivial, truly complex situations, there's going to be a fundamental wall of uncertainty, of unknownness, that you can't pass. So you can't say that you totally know something.
You cannot say that you've proved something, and so in that sense there's kind of no such thing as a fact in the real world, the way it's typically defined. A fact is not something that you can just take at face value and not question; there's no such thing as that. And you can even look at science. Science, you might think, well, there's definitely such a thing as scientific proof, isn't there? No. No, it's not like mathematics. There is no such thing as a scientific proof. You hear about it, and people in the general public might say, well, it's been scientifically proven, or you might even read that in the media. But that's not the case.
There is no such thing as a scientific proof, and we know that because the leading philosophy of science is related to something called Popperianism (we'll talk about Popper in a bit). It has to do with falsifiability, which means any theory that you put forward in science has to have the ability to be falsified, has to have the capacity to be falsified. What that means is you have to be able to refute it, OK, you have to be able to challenge it. It can't be exact, it can't be complete. You can put forward whatever you want, but we have to be able to attempt to refute the theory. It has to be falsifiable; it has to have the ability to be shown to be false. OK, you cannot completely accept it, so that's why there's no such thing as scientific proof. That doesn't exist. So we can see that in science, and in anything to do with the real world, this idea of a fact being something that is known or proved is just simply not the case.
OK, so it's important that we understand that fundamental uncertainty around a fact. So let's start unpacking the word evidence now. We said, you know, the more casual definition of fact is a piece of information used as evidence, and we know how this already folds into the framework of logic, because I talked about that in the first section, right? We want to structure a conversation around logic, or sorry, as an argument, and we want to support that argument. If it's going well, then presumably we're calling upon facts as premises, and we're doing a better job of that, so we accept those premises. We accept them to be true. And so this is the idea of evidence, right? If I collect evidence, if I showcase evidence for my argument to back it up, then presumably I'm doing a better job. So let's unpack that word.
What is the definition of evidence? It's anything presented in support of an assertion. OK. "Evidence is something that is undoubted," it might also be said. But from my last conversation there around facts, we know that that's just not a good definition of evidence. Evidence is doubted; it has to be doubted, it has to be challenged. There's no such thing as a truly certain fact or piece of evidence. But it is something that is presented in support of an assertion. OK, and we saw that that's how premises and facts get used. So just to be clear, I'm connecting the idea of a fact to the idea of a premise that's used to support your argument, to this word, evidence. So: fact, premise, evidence. Basically equate those, because that's how they're being used in an argument. So evidence obviously gets used in science all the time.
We call that empirical evidence, and that's kind of the evidence we think about when we want to back up our argument. We're kind of thinking about it in terms of empirical evidence, because empirical evidence is an observation or an experimental result or something. If somebody did a study somewhere, did some experiment somewhere, there's this observation that was made, and that's my evidence that I'm going to use to support my argument. OK, so empirical evidence is the observations, the experimental results, that can be used to support, you know, whatever narrative is going into the presentation of science, right, a scientific journal or however it's being presented. So we're going to unpack that a little bit more in a bit, but I just want to say at this point that facts cannot be taken at face value. They are not these things that you just accept, not in the real world. There's no such thing as a proof in the real world, not even in science.
There's always this uncertainty, always a level of fuzziness, and so we cannot take a fact at face value. And so it really comes down to what I consider proximity. What I mean by that is: how close is the fact to reality? What is the distance between the fact and reality? And that's kind of what you have to consider when somebody presents you with facts and says, here are my facts, here's why I'm backing up what I'm saying. Well, you should kind of question: OK, well, how close are those to reality, and how do I assess that? And so this brings us into the realm of complexity. This idea of complexity, I've talked about it in other episodes; obviously it's an overarching theme of my podcast, NonTrivial, in general.
The idea is that simple systems are very different from complex systems, and when you deal with complex systems or situations, you have to take into account the properties of those complex situations, of those systems, because they are fundamentally different from simple ones. So just really quickly, I gave some examples in the first episode: we did ant colonies, termite mounds, we did, you know, the starling birds. The idea is that you can have a lot of simple individuals come together, but then it leads to some kind of complex system, and the properties of that big system are now fundamentally different. So you have to understand that difference. So what does that have to do with the proximity of a fact to reality, or the ability to kind of understand when you should question a fact more than otherwise? Let me give you an example. Actually, I'll give you two examples. The first one is the temperature of ice.
OK, so let's say it's a glacier. It's a hockey rink or whatever you're getting into some debate. Maybe it's about climate change, but hockey I don't know and somebody presented with a fact about the temperature of ice for whatever reason. So would you accept it? Somebody just says the temperature voices in this particular situation is minus 6 degrees or something like that. A little bit below 0. Is what it is. So do you accept that fact? You don't have to.
You can challenge it, but are you likely to accept it? I would argue probably, because it's a temperature. We know how measurement works. We know how thermometers work, right? If it's a mercury-based one, it goes up when it's hot, it goes down when it's cold. Maybe it's an electronic version, but we've been measuring temperature as a civilization for a long time. We know how it works. There's probably not a lot of uncertainty there. Now, there could be reasons why the measurement got messed up, but generally speaking,
it's probably something you would take at face value, or if you did question it, you probably wouldn't dig too deep, right? So that's a fact that probably has pretty close proximity to reality. You're not necessarily going to question it. Now, what about something like voting, right, in politics? Maybe it's the presidential election or something, so you get a result back. It sounds maybe just as clean. Just like the temperature gives you a number, the vote gives you a number.
Who won, win or loss, right? So the results come back, and do you question it? Well, you might question that more. I think most of us would probably look into it, or at least want to make sure that there are, you know, multiple counts by different groups to make sure the numbers are right. And why do we do that? Well, it's because we know it's a more complex situation, right?
Measuring the temperature of ice, not to say that there's no uncertainty around that, is unlikely to have as much uncertainty, all else being equal, as something like voting, because voting is going to have different pieces. Maybe some of it's manual, maybe some of it's online or electronic. You know, there's different systems involved. Maybe they're doing a mail-in thing.
Whatever it is, there are going to be more pieces interacting, leading to more complexity, and so it means there's a greater chance for things to go wrong. It doesn't mean that something definitely is going wrong, but there is a greater chance for something to go wrong. So you are justified in questioning the results of something like, you know, a vote in politics compared to, say, the temperature of ice. And so these are just two simple examples of the difference in complexity.
The first one, the temperature of ice, is relatively simple compared to something like an election, something like voting, where we have so many people involved and things going on. So complexity is leading to more uncertainty, which gives you kind of the right to challenge the fact, or at least appreciate that there is more uncertainty around something like a fact, some statement that's being made that you're supposed to otherwise take at face value.
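To make that intuition a bit more concrete, here's a tiny sketch. It assumes, purely for illustration, that each piece of a process has the same small, independent chance of going wrong. Real systems aren't that clean, but the direction of the effect holds:

```python
# Sketch: why more interacting pieces mean more room for error.
# Assumes each step fails independently with the same probability --
# a big simplification, used only to illustrate the trend.

def chance_of_any_error(steps: int, error_per_step: float) -> float:
    """Probability that at least one of `steps` independent steps goes wrong."""
    return 1 - (1 - error_per_step) ** steps

# A single measurement, like reading a thermometer: one step.
print(chance_of_any_error(1, 0.01))    # about 1%

# A long pipeline, like collecting, counting, and reporting votes.
print(chance_of_any_error(100, 0.01))  # roughly 63% -- some error somewhere
```

Nothing about those particular numbers matters; the point is just that the chance of at least one thing going wrong grows quickly with the number of interacting pieces.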
So taking a fact in isolation is arguably more dangerous the more complexity there is, right? Just because we know there must be more uncertainty in complex situations. So this gets into the contextualization that I alluded to earlier, the contextualization that's needed to put facts into perspective. Let's say it's a presidential approval rating, and someone says, well, the approval rating is only 26% or something, and it sounds really low, and so they're going to use that as a fact.
It is a fact that maybe today, or whatever time period we're talking about, the presidential approval rating was this number. It's a fact, and it's true. And so you might then argue something about the president being bad, or the president being good, or whatever the argument is. But you want to put some context around that approval rating, because those approval ratings change all the time, and they depend on what's going on.
I mean, it might have been a really low number because something was going on in the world, or maybe it was a really high number because something was going on in the world. Maybe their approval rating was really, really high because it was right after 9/11, and, you know, the current president at the time was handling the situation really well.
Or maybe they weren't handling the situation well, or whatever it is. You want to put some context around that approval rating because they change all the time. So as an isolated fact, the presidential approval rating is not just something to take in isolation, right? It's one piece of information, but you want to contextualize it. It could be that approval ratings for one president had a very narrow range, and for another president a very wide range. Is that interesting to know, right?
Even if maybe the averages were the same, right? Maybe they have the same average approval rating, but the spread for one president is way larger than the other. Is that relevant? So you want to contextualize things to put them into perspective. We did that with the 40 billion tons per year of CO2 earlier. You know, same thing: if you just take the tons of CO2 as an isolated fact, that is true, right? 40 billion tons of CO2. But you want to contextualize that. Well, is that a lot, right?
Sounds like a big number, but is that a lot? What percentage of atmospheric gases does that represent? How much is needed for it to be dangerous? You've got to contextualize that. We had that MSG example, with the risk doubling. You'd say, well, the risk of cancer doubled, so it must be bad. Well, what was the original risk to begin with? A doubling of risk is not necessarily always bad. So again, isolated facts have some level of uncertainty to them. They need to be contextualized.
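That averages-versus-spread point about approval ratings can be sketched with made-up numbers. These are not real polling data, just an illustration of why the spread is context worth asking for:

```python
# Two hypothetical presidents with the same average approval rating
# but very different spreads. Numbers are invented for illustration.
from statistics import mean, stdev

president_a = [48, 50, 52, 49, 51]  # narrow range
president_b = [20, 80, 35, 65, 50]  # wide range

print(mean(president_a), mean(president_b))  # same average: 50 and 50
print(round(stdev(president_a), 1))          # small spread: 1.6
print(round(stdev(president_b), 1))          # large spread: 23.7
```

Same isolated fact, the average, but a very different story once you see the variability around it.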
And here's the point: these issues are exacerbated by complexity. OK, so we go back to, you know, what is the gauge? I mean, how do you listen to somebody's facts and then say, well, should this be challenged? How close to reality is it? Even though you're not going to get an exact answer, you should think about the relative complexity of the situation. The temperature of ice is something that you're probably going to accept.
You could still challenge it, but it doesn't have a whole lot of uncertainty about how it was gathered and why you might trust the number. But something like voting, you know, or presidential approval ratings, or the atmosphere and climate, or the risk of something with respect to the biochemistry of the body, or whatever it is, we know those are much more complex than just taking the temperature of something, so they have a right to be questioned.
There's a good chance that whatever fact is given in isolation can easily be taken out of context, right? So the story there is that the complexity of the situation is going to exacerbate the uncertainty. It's going to make it more difficult to know whether or not you should trust something as a fact. So you need some kind of approach to deal with the increased complexity with respect to whether or not you accept the fact.
We're going to talk about what that might look like in a bit, but the point is that complexity is definitely exacerbating the unknownness, the uncertainty of the situation. So we cannot say that there is such a thing as proof. We cannot just accept facts at face value, particularly when the situations are complex. If you're talking about a social issue, you cannot just say you trust the facts. It's a meaningless statement. Social issues are highly complex.
The facts that are being given might be true, they might not, it's probably something in between. They need to be contextualized, they need to be challenged, and you need to decide whether or not you're going to accept them. So it's exacerbated by complexity. Now, that's just the facts themselves, in isolation, but what about the narrative that we piece together when we really get into debating, into structuring arguments? What we're doing is painting a narrative, right?
We're taking isolated facts, and we're connecting them, and then we're interpreting them. We're telling a story with them. And so this gets into interpretation, you know, the stitching together of facts into a story. And the reality is, you could tell a complete lie with nothing but facts. OK, so this is the difference between facts in isolation and a narrative. The facts in isolation could have some level of truth to them, right?
And you can question them more given complexity, but you can decide where that lies, and that's a worthwhile step. But then there's the piecing together of those isolated facts into the story, and it's that story, that narrative, that gets told, that people buy into, that we hear about in the media, that maybe gets used to create groups of people to kind of put strength around it, whatever it is. So how the facts get pieced together into a narrative is really where it gets interesting.
And really, when we talk about complexity exacerbating the problem, exacerbating the unknownness or the uncertainty, you really see that in the narratives themselves, because now it's not just the uncertainty in the facts, it's how you are interpreting them, how you stitch them together. OK, so let's give an example. On the extreme end, we'll tell a full-on lie
with facts. OK, and I'm not saying that's the normal thing people do, but just to show that if you can tell a complete lie with facts, then saying that you believe in facts is a pretty meaningless statement. Or just stating your narrative and expecting people to accept it at face value is just nonsense. OK, so let me give you some examples.
In the first example, let's say you come home, you're younger, and you walk in the house and your mother says to you, did you do your homework? And maybe you kind of proclaim back, with a bit of emotion: I wrote a full essay on John Adams and his influence on the economy. OK, I did that, and she kind of backs off: OK, OK, sorry. Now, in that situation, I did write a full essay on John Adams, right? But maybe I did that two years ago.
That might not be related to what she was really asking, and there's a term for that. It's called paltering. When you palter, you are trying to dodge the original question by giving a fact. So I didn't lie, in a way, right? Because I told the truth. She asked me something and I just gave her a fact. Maybe I really did write an essay two years ago about John Adams and his influence on the economy. And so that's something that I really did. That is a fact. That is true.
But it's completely out of context with respect to her question, so it was a way of dodging her question. So in a sense, I used a fact to tell a lie. I used truth to tell a lie. That's called paltering. In this example, OK, I ask you a question, you state a fact, you say something that's true that you can back up, but it was done to dodge the original question. So you can imagine this happens in politics all the time, right?
It happens in negotiations, it happens in presidential debates and things like that. So let's give an example that you might be familiar with: the US vice presidential debate between, say, the Democrat Tim Kaine and the Republican Mike Pence, back in the day. Kaine was pushing Donald Trump at the time regarding, you know, the releasing of his tax returns. This probably sounds familiar, with Trump saying that, I'll do it once the IRS completes the ongoing audit, right?
I'm not quoting it exactly, but it was a paraphrase, something like that. He was going to do it once the IRS completed the ongoing audit. And Kaine's side responded that, well, Richard Nixon released tax returns when he was under audit, right? Nixon did this when he was under audit. And so that leaves the impression that Nixon, a Republican, did so while running for reelection, right? Creating a precedent for Trump. So if you think about that, Kaine seems to kind of have him, right?
You're pressing Trump to release his tax returns during an audit, and Kaine said, well, you know, Nixon did that, right? So what's the problem? But the New York Times ended up pointing out the flaw in this. If you look at the facts, right, if you dig deeper into the truth, you put some context around it, you realize that Nixon released his taxes while under audit, yes, that's true, but he didn't do it until after his '72 reelection. So there's a difference, right?
You're pushing Trump to release his tax returns, and Trump has said, yeah, I'll do it after the audit. And then you're saying, yeah, but Nixon did it during his audit. And if you just leave it at that, it kind of sounds like Kaine has him, right? But it's out of context. The fact is, yes, Nixon did do it during the audit, but he did it after his '72 reelection. So there's a difference in the situations. OK, there's a difference. So that's another example of paltering, kind of a political example.
Another one might be, this one kind of going the other way around, maybe you're pressing Donald Trump during a presidential debate, questioning him about a housing discrimination lawsuit. This was something that happened back in the day, early in his career. And Trump might say, you know, that his company was given no admission of guilt. There was no admission of guilt, and that's his answer.
So you're pressing him on the housing discrimination lawsuit, and he said, look, we had no admission of guilt. It's true, that's a fact. They did not have an admission of guilt. But if you take it a little bit further, you find that, well, the company did apparently discriminate on race. There was something about discriminating on race. So it's a way of dodging the original question. You answer the question with a fact. You are right.
You are answering the question with a fact, but you're not really answering the question, right? It's not a true answer to the question, because maybe that wasn't the point. And so that's called paltering. So that's an example of using truth to tell a lie. And so it doesn't even matter how true the statement is. It's not even about the unknownness or the uncertainty around, you know, the fact itself, because it could be purely true.
It could be something that everybody agrees on, and you go fact-check it, and yes, the statement made was absolutely true, but it was done in a way to dodge the original question, right? So you're getting into this fuzzy uncertainty, not so much because the fact itself is uncertain, but because of the way it was used. OK, so that gets us a little closer to this idea that it's the way you stitch together
facts, regardless of how true they might be in and of themselves, that lets you actually tell a lie. Let's do an example with a full-on narrative. Think about conspiracy theories. You know, three common examples would be the Flat Earth conspiracy, the 9/11-being-an-inside-job conspiracy, JFK shooting conspiracies, and things like this. Now, listeners can choose what they want to believe, and I'm not going to say what's true and what's not.
The point is, we could use a conspiracy as an example of something that might totally be false, right? So Flat Earth, most of us would agree, is probably false, but maybe you do agree with it. That's fine, that's up to you. The point is, you can use nothing but facts to piece together a pretty convincing story, and we know it's convincing because many people buy into it, right?
So if you take something like the Flat Earth Society and people that buy into the narrative that the earth is actually flat, as ridiculous as that might sound, you know, thousands and thousands of people buy into that, right? Well, why is that? Is it just because they're idiots, or is it because people have been able to construct fairly realistic-sounding narratives using nothing but facts? Well, yeah, sure.
I mean, you can go watch the documentary on Netflix right now about the Flat Earth Society. There are YouTube videos you could look at, there are articles you can read. You can entertain yourself with seeing how they actually put together arguments, and you know what? They are using a lot of facts to try to do it, right? They'll say something like, well, when you go up really high in the air, the horizon doesn't disappear when you do this with this particular instrument.
This happens, and the idea is that they might use nothing but a bunch of facts and piece them together in a way that tells a story about how the Earth is flat. And you can kind of get to a point, even though you probably don't believe it yourself, where you're like, oh, I could see why a lot of people might start to buy into this, if you didn't know better, if you were looking for a conspiracy theory to buy into, or if you were
for some reason not privy to, I don't know, a lot of the photographs we see of the Earth and things like that. Maybe there are quite a few people that buy into this, and there are. So there's something convincing about the narrative that's being told, even if it seems totally ridiculous, right? And the reason they can convince so many people is because they're using nothing but facts, at least some of them, to piece together their story.
So the facts in isolation are not really what's critical. You can't just say, I believe in the facts, because you know what? People that believe in Flat Earth can say that too. They can use nothing but facts to put together their story, but it's the story, the way it's pieced together, that might be completely ridiculous. Again, depending on which side you agree with. You know, 9/11 being an inside job is another example.
I mean, you could do nothing but look at the destruction site and the angle of the planes coming in and yadda yadda, and you could be like, well, you know, I can piece together a story that shows, I think, this was an inside job. Again, you can choose which side you want to believe, but the point is, you could use nothing but facts to piece together seemingly insane stories, right? The JFK shooting, you know, who shot JFK? Was it kind of rigged?
Was it from an outside country, or was it just a crazy guy up in the tower? Whatever, right? You can use nothing but facts, the angle of shots, the timing of this, what happened before and after, and you can piece together any kind of narrative you want. Now, that's the paltering, and then the three examples of the conspiracies I just gave you.
The reason I'm doing that is because I'm showing you the extreme, where you can actually tell complete lies using nothing but true facts, I mean facts that are accepted, right? You know, again, the facts in isolation themselves have uncertainty, but even if you accept all the facts, you can still tell a complete lie with them, just based on the way that they are stitched together.
And so, going back to that complexity exacerbating the unknownness, the uncertainty: it happens in the facts themselves, and it happens in the way facts are stitched together, even if you do accept the facts, right, when we create narratives. And the reason is that complexity is always going to have more possible explanations. Always, always, always. It doesn't mean more complex things definitely have more explanations, but they definitely have more possible explanations.
And here is really the point. The more complex the situation, there's going to be a wall, a kind of opacity that you can't see past to know exactly why or how something is. There will always exist more possible explanations for the things you observe in complex situations, and so this creates exacerbation around the uncertainty, the unknownness, of the situations we look at. It's why we can't just accept things like facts.
It's why we can't just accept the narrative that's created with those facts, even if you accept the facts themselves. OK, and so it's important to realize that social situations have many contributing factors that go into them, that interact in complex ways and produce whatever it is we observe at the high level, you know, the voting, some social program, some opinion that is forming in a group in society, or whatever it is, right?
The things that we've tried to form policies around and guidelines around, you know, quite frankly, the things that we get into debates about. Right? We don't get into debates about the really simple things, not usually. You might for fun. Like, if I jump off a 25-story building, am I going to die? Uh, probably. How many people debate that? I mean, some people might, just for fun, but those aren't really what we're talking about, right? Those are easy to agree on the facts.
Well, we agree on gravity. We agree that, you know, that's probably a height you'd die from. But the situations that we tend to debate about are much more complex than that, right? They involve people and backgrounds and cultures and jobs and attempts to be successful, and all these kinds of things interact in complex ways. And so politics is an obvious example. Why do people debate politics so much? Well, because it has all kinds of uncertainty around it.
We can't, you know, have the kind of resolution that narrows something down to a single root cause. Complex things, nontrivial things, don't have singular root causes. Give it up. It's not going to happen. It's never going to happen, because there's this fundamental opacity, this wall that exists in complex situations, where you can't resolve things down to a single root cause. OK, and so the facts cannot be taken at face value.
The narrative that we stitch together using those facts cannot be taken at face value. We can't just buy into something and go for it. And so this is why structuring things around logic, around the structure of an argument, is critical, but also why it's fundamentally limited. We can't just use the rules of logic and weigh the premises and decide what's better and go from there. There's this fundamental wall, in complex situations, to how much you can know.
OK, so let's just give a really quick recap. We talked about the structure of logic in part one, and we said how it's premises leading to a conclusion, and it really comes down to how much you accept those premises. And so that gets us into the realm of facts and evidence, right? Because you're going to try to build your premises around facts and evidence and then use that to support your big, grand conclusion, right?
So the structure of the argument is being backed by the evidence, but evidence itself has all kinds of gray area. It's not something you can take at face value. The facts, the evidence you're using, have to be contextualized. The narratives you construct with facts have gray area to them. There's a distance between the narrative and reality that increases under complexity. So there is a fundamental truth that we must accept, and there's a name for it: we call it epistemic uncertainty.
Epistemic uncertainty. I used epistemology earlier: basically the science of knowledge, or how do you know what you know, right? The study of knowledge, how do you know what you know, is epistemology. So when we say something like epistemic uncertainty, we're saying that there is an uncertainty around what you can accept as truth. There might be an ultimate, fundamental truth, but you don't have access to it.
You don't have complete access to the fundamental truth. You are always glancing at reality at an angle. You're getting some slice of reality. And really, when we get into these debates and we get into these conversations, we're trying to ascertain how much of a slice we actually have. How close are we to the truth? But it's important to understand that there's this fundamental wall that you can't pass, of epistemic uncertainty.
You can try to get close, but you're never going to have that resolution power to focus down to a singular root cause, and therefore know exactly, you know, what facts are correct and exactly what the proper narrative is to stitch together. There's always an epistemic uncertainty. So if we think about the two main types of arguments, we had deduction and induction, and we can say that in some sense deduction doesn't really exist in reality.
What I mean by that is, a true deductive argument, as we know, moves from the general to the specific. And what we're saying there is that if the premises are true, then the conclusion is definitely true. So the deductive argument has some kind of definiteness to it, right? Some exact certainty. It's kind of framed as a proof, right? Whereas the inductive argument wasn't really about a proof.
It's kind of baked into the design of an inductive argument that you take into account that there is this epistemic uncertainty in the premises, and so when you generalize out, you're doing it in terms of probability. But with a deductive argument, when you go from the general to the specific, you're saying, well, as long as those premises are really true, then the conclusion is definitely true.
Well, outside of, you know, these kinds of axiomatic worlds of maybe math and logic itself, things that are very well defined and closed off, in reality, and again exacerbated by complexity, this idea of a deductive argument doesn't really hold.
As defined, it doesn't really exist. Now, you might try to narrow something down from the general to the specific, but you're never going to get to a point where you fully should accept those premises as true, because you always have that epistemic uncertainty. So it all kind of comes down to probability. It all has to be softened by the strength or the weakness of something. In some sense, you can't really say, you know, in complex reality, that an argument is definitely invalid or definitely valid.
I mean, you could look at the structure and say that, but it's soundness and unsoundness, I guess more specifically, where you'd really say, OK, well, as long as those premises are true... We have this epistemic uncertainty, we have all this fuzziness around it, so in some sense that doesn't exist. And so what I want to say here is that induction, I would argue, is more of what we're doing every day, right?
It's much more realistic, at least in terms of taking into account the epistemic uncertainty in the premises you use. Things are fuzzy, things have to be thought about in terms of probability, so induction is more natural. This is what we find ourselves doing a lot of the time, because we're taking our slice of reality and then we're generalizing out.
We're making these grand claims about things, you know. We're going home to talk politics with parents, or going on social media, writing blog posts. In some sense, we kind of think we know it all, right? Because we have our slice of reality. We observe things, we notice patterns in life, and then we think we get it. We think we know how it works, and so we inductively generalize out and make these kinds of grand statements about how the world works, and that forms our worldview.
And so now I want to talk about something called the problem of induction. This is a big area in philosophy, and has been for a long time. Many major philosophers have weighed in on this issue, and the issue is just what we've been talking about. You have this epistemic uncertainty where you can't exactly prove something when you go from the specific to the general, and this is, again, what we do all the time when we take our experience.
We generalize it, we make these grand statements. But as soon as you make some big statement outside your experience, well, it's just that: it's outside your experience. You can't know that that is definitely the way it goes. Your theory or model is never going to be complete. Your worldview cannot be exactly right. There is no proof of it, right? There's always that epistemic uncertainty.
So this is formally called the problem of induction, and it was first dealt with by a man named David Hume. You know, he was trying to understand on what grounds we come to our beliefs about the unobserved, right? How do we come to our beliefs based on the unobserved? And we do this all the time, but you can't ever know it completely. There is no way to totally prove whether or not it's true. So why do we even believe what we believe?
So we're not going to totally pick that apart, but I do want to give some examples related to the problem of induction that I think are really important and paint the picture of what it is and how it works, and how fundamental it is to our lack of understanding about what we think we know. So the main example I want to give for the problem of induction is something called the Turkey problem. You may have heard this before. I think it was originally used by Bertrand Russell.
I think he used a chicken, though, and then other people used turkeys to outline the problem. It's been popularized by people like Nassim Taleb and individuals like that. So let's go over the Turkey problem first to outline what the problem of induction is. So imagine you are a turkey and you are getting fed every day, right? You're getting fed and life seems pretty good. So you get fed, and maybe you're getting fatter, and the food keeps coming and coming and coming.
And I don't know how long it goes on for: weeks, months, whatever it is. And in your worldview, things are pretty good. You get fed every day. This guy that comes out and feeds you is obviously your friend, right? By your worldview. He keeps feeding you, keeps feeding you, keeps feeding you. Then, of course, Thanksgiving Day comes up and you get your head chopped off. So in the worldview of the turkey, he was, you know, collecting evidence, right?
There was this observation that he had every day, that he kept getting fed, and so, according to him, according to these observations, according to this evidence, life is good, and there's going to be food the next day, and the next day after that. I can make that kind of prediction, right? That's the induction, right? You have premises, and they lead to a conclusion. The grand conclusion is this prediction that there's going to be food the next day.
You know, if you want to back that up with an inductive argument, you'd say, well, I have evidence. I have evidence that I get food every day. I had it 10 days ago, 9 days ago, 8 days ago. There's always food. There's lots of evidence to support the inductive argument that tomorrow there will be food. Thanksgiving comes, and that's obviously not the case, and so he gets his head chopped off. So this is the problem of induction.
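Here's a toy sketch of the Turkey problem in code. The 1,000-day horizon and the naive confidence formula are made up for illustration; the point is the asymmetry between piling up supporting observations and a single counterexample:

```python
# The Turkey problem: confidence built by induction, destroyed by one
# counterexample. The day counts and confidence formula are invented.

def naive_confidence(days_fed: int) -> float:
    """The turkey's inductive confidence that food comes tomorrow."""
    return days_fed / (days_fed + 1)

history = ["fed"] * 1000 + ["butchered"]  # Thanksgiving on day 1001

hypothesis_survives = True
for event in history:
    if event == "butchered":
        # One contrary observation falsifies the generalization outright.
        hypothesis_survives = False
        break

print(naive_confidence(1000))  # ~0.999: never certain, but very confident
print(hypothesis_survives)     # False
```

No amount of "fed" days ever pushes the confidence to 1, but one "butchered" day settles the question for good.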
It doesn't matter how many observations you make; all it takes is one instance of its opposite to prove it wrong. And so there is this fundamental asymmetry between how conclusively you can verify something and how conclusively you can falsify it, right? You can't conclusively verify it, no matter how much evidence you collect, because all it takes is one piece of evidence moving in the opposite direction to prove that whole thing wrong. So you're never going to prove,
in the case of the turkey, that you're definitely going to get fed the next day. You can't definitely conclude that. But you can definitely conclude that that's not the case when the butcher comes and chops your head off. So this is the asymmetry between how you verify something and how you, what we call, falsify something. So that idea of falsification, let's bring that into science now. This brings us to Popper, Karl Popper, believed by many to be the greatest philosopher of science of the 20th century.
He's largely regarded as such. So he dealt with this problem of induction, specifically with respect to science. Karl Popper was interested in what we call the problem of demarcation: how do you know when something is science and is not science? And this is, like, super relevant today, right? People will always try to claim something is scientific, and other people might jump in and say, no, that's not really scientific. What is and isn't science? How do we know? That's called the problem of demarcation.
How do you demarcate between what is science and what is non-science? So Popper takes the stance that induction is not actually the way science works. If you think about learning science in high school, they probably said something along the lines that science works by induction, right? We go from the specific to the general, and that makes sense. You make these observations. It's obviously just a slice of reality.
Use that you collect it and then you go to make an argument about something broader, right? You're going to take this evidence. You're going to formulate it into a theory and say, OK. This is the theory of, you know something in biology or something, chemistry, physics or whatever it is. You're doing induction because you're only grabbing your experience of what you've observed. But then you're making a grand claim about something, so it seems to make sense that science works by induction.
But Popper says that's not the case. How it actually works is via falsifiability. So Popper replaces the notion of induction with falsifiability. What is falsifiability? Well, it's this: instead of collecting evidence to try to support some argument (that's inductive), you're only collecting evidence to try to refute, or falsify, the current theory. Whatever it is you're engaged in in science, there's some current theory about it, and what he's saying is that the only point of collecting evidence is to refute or falsify that existing theory. That's the point of collecting the evidence. And we can see that this is quite different from induction, because with induction, if we remember it as a structured argument, we're asking whether it's going well or not, right?
Is it a strong argument or a weak argument? It's a stronger argument when you back up the premises. Remember how we essentially equated facts to premises to evidence? If I want to make a stronger inductive argument, I'm going to get better evidence, right? I'm going to keep collecting evidence to support the claim that I'm making. And it seems to make sense that science would work like that: I'm going to get more and more evidence to support my theory, or an existing theory, and keep strengthening it. But that's not really how science works, or at least that's not how it should work. The point of collecting evidence should be just to refute whatever exists. In other words, the strength of a model or theory is more about its survivability than about a bunch of evidence used to support it. It's about which model survives, which theory has withstood a number of attempts to refute it. If it's still around after many attempts, that's what makes a good scientific theory. If you collect evidence that counters whatever the theory predicts or suggests, then eventually you replace it with a new theory, right? You collect evidence to refute the existing one, and if it's refuted often enough, you replace it. And so it's this process of replacing the incumbent theory,
replacing the leading model with something different when you have enough counterevidence to suggest that the one currently held is not correct. So the strength of a model is in its falsifiability. A good model is one that can be falsified, right? And this is why, when people try to suggest something is science and you get the sense, I don't know if that's really science, they're probably telling you something that's non-falsifiable. You can't just put forward something that no one could ever run an experiment on and try to test. You have to be able to falsify it. It doesn't mean you will falsify it, but it has to be testable. You have to be able to collect evidence to suggest that it does not work. The strength of any scientific theory really lies in its falsifiability: how easily can it be falsified? You want it to be falsifiable, right? Because there is no knowing whether a scientific theory is correct, and more evidence does not confirm it. It's kind of hard to get your head around, because we think, as we collect more evidence, isn't that supporting the existing theory? But that is not the purpose of evidence. The purpose of evidence is only to refute what exists.
So in other words, the practice of science is supposed to be: go make your measurements, keep collecting evidence so that you can continually try to knock down the incumbent or leading theory. You're trying to knock it down. And if, ten years later, after trying to knock something down, it's still there, then we consider that a strong model, not because a bunch of evidence has been collected to support it, but because of all the evidence that has been collected, none of it has been able to knock it down. Let me say that again, because this is really important. If a model has been around a while, let's say it's Einstein's general relativity, the leading theory about gravity, it's strong not because a ton of evidence has been collected to support it, but because, of everything that has been collected, nothing has refuted it. Now, I'm not getting into whether or not scientists truly follow this approach. I mean, how many scientists are collecting evidence for the sake of knocking down a theory, and how many are doing it to try to support their narrative? That's the topic of a different episode: how well science is or is not going. There are issues in how science is practiced today, or how it has been practiced. But when we take a look at the philosophy of science and really pick apart what evidence is supposed to be, it should be something that attempts to falsify that which exists, and if whatever theory we have has yet to be falsified, then that is considered strong. That's how evidence works. So at the heart of the problem of induction is this logical asymmetry between verification and falsification. We can never totally verify something, but we can totally falsify something, because all it takes is one observation to do it.
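Since we're getting technical, the turkey's predicament can be sketched in a few lines of code. This is purely a toy of my own making (the day numbers and the feeding rule are invented, not from Popper or Taleb), but it shows the one-way nature of the asymmetry:

```python
# Toy model of the turkey's inductive inference. The feeding rule and the
# day numbers are hypothetical, chosen only to illustrate the asymmetry.

def fed_on(day: int) -> bool:
    """The turkey's world: fed every day until Thanksgiving (day 1000)."""
    return day != 1000

# 1000 confirming observations: strong inductive support, zero verification.
confirmations = sum(fed_on(d) for d in range(1000))

# The theory "I am always fed" survives every check until one counterexample.
theory_survives = all(fed_on(d) for d in range(1001))

print(confirmations)    # 1000
print(theory_survives)  # False: a single day falsified the whole theory
```

No number of confirming days on the left can ever flip `theory_survives` back to `True` once day 1000 arrives; that one-way street is exactly the verification/falsification asymmetry.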
OK, now, I'm not commenting here on how many opposing observations are needed to conclusively falsify, because of course you have to trust the falsifying measurement itself, right? In certain situations, like the turkey example, it's pretty clear-cut. In science, you might have an observation where, for example, something does travel faster than the speed of light, and so maybe general relativity must be wrong, and someone says, well, did you get that measurement correct? Did you really observe something that traveled faster than light? So it's going to get a little more complicated than you observe something and it immediately falsifies the leading theory. But the point is, there is this logical asymmetry between the two. You're never going to reach a point of true verification, but you can definitely reach a point of falsification if you have enough counterevidence. You have to replace the theory because it's just not standing up to the new evidence. So that is the purpose of evidence. And what we can say is that the problem of induction is so bad that induction is kind of useless, and it doesn't really get used.
There's going to be some debate around this, of course, but that's what Popperism is about, and we'll get into some other philosophers and thinkers on this. What it's really saying is that the problem of induction is so bad that induction is actually not how things work. In some sense it can't be, because there's so much epistemic uncertainty around things, particularly in complex situations. Complexity degrades how well induction works. Let's think about prediction in simple versus complex systems. If it's a simple system, where there are not a lot of pieces interacting, then the amount of unknowns or uncertainty in the situation is not that great, and you can make pretty straightforward predictions, because you have a lot of transparency into how things are operating. You can look at how the pieces come together to produce whatever it is you're observing. It's a simple system: look at how all the pieces work, and I can probably make a pretty decent prediction. But in complex situations, where you have that fundamental opacity, you can't. You don't have the resolving power to see how things get pieced together to produce what we're looking at. Then the prediction is much harder to make, and at some level of complexity it might become completely unwarranted altogether. In other words, you just should not be predicting in that complex realm. So in things like physics and chemistry and some aspects of biology, if the complexity isn't too high, then maybe you can make some decent predictions. But you start getting into other aspects of biology, into sociology and psychology and things like that, where the complexity is so high that any prediction you try to make becomes highly suspect. And we'll look at some examples.
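That degradation can be made concrete with a small numeric sketch. The two update rules here are stand-ins I picked myself (a damped linear map for "simple," the chaotic logistic map for "complex"); the point is only how a tiny measurement error behaves under each:

```python
# Compare how a tiny initial measurement error (eps) propagates through a
# simple system versus a complex (chaotic) one. Both maps are illustrative
# stand-ins, not models of any real system.

def worst_gap(step, x0: float, eps: float, steps: int) -> float:
    """Largest gap between two trajectories whose starts differ by eps."""
    x, y, worst = x0, x0 + eps, 0.0
    for _ in range(steps):
        x, y = step(x), step(y)
        worst = max(worst, abs(x - y))
    return worst

simple = lambda x: 0.5 * x + 1.0         # damped: errors shrink every step
chaotic = lambda x: 4.0 * x * (1.0 - x)  # logistic map: errors roughly double

simple_gap = worst_gap(simple, 0.2, 1e-9, 50)
chaotic_gap = worst_gap(chaotic, 0.2, 1e-9, 50)

print(simple_gap)   # stays below the original billionth: prediction is safe
print(chaotic_gap)  # blows up toward order one: the forecast is effectively noise
```

With the damped map you could hand someone a 50-step forecast in good conscience; with the chaotic map, the same honest measurement gives you nothing usable after a few dozen steps. That's the wall.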
In the third section, when we start looking at specific debates, we'll take a look at COVID-19. This is a big issue in the news right now. You've got epidemiologists trying to make predictions, other people trying to make predictions, and pretty much all of them are seemingly wrong. They just don't seem to be doing that well. That's not that surprising, right? This is a very complex area, and the question is, should we even be doing predictions? So we're going to look at a debate about that very issue. Should we even be making predictions to inform decisions about policy? Is it worth doing even when it's wrong, or should we not be doing it at all? We'll take a look at that, but that's the whole point: in simple systems you can make predictions with a decent degree of accuracy, but in complex situations that accuracy starts to degrade rapidly. We'll touch on that a little more in a bit.
I want to do another example that's related to the turkey problem, something called black swan theory. This is a term that comes from Nassim Nicholas Taleb. Now, the term "black swan" itself was coined way back in the day, and it basically just meant something presumed not to exist. Out of all the history books at the time and anything you ever saw, it was always a white swan, white swan, white swan. There's no such thing as a black swan. So if you referred to something as a black swan, you were basically saying it didn't exist. And that held right through 16th-century London; it was just a statement of impossibility. Then eventually, in 1697, Dutch explorers discovered an actual black swan, and so the term "black swan" kind of metamorphosed to connote the idea that a perceived impossibility might later be disproven. And so to call something a black swan is to say it's a really rare event that you didn't think could exist, but all of a sudden does exist, and it goes against everything you believed up to that point. So you should be able to see the overlap with the problem of induction, right? You've got this asymmetry. You keep collecting evidence.
You think something is true, then all of a sudden something pops up and it essentially falsifies everything you believed to that point. Taleb talks about this in his book Fooled by Randomness, as well as the book titled The Black Swan. He was originally talking about it with respect to financial markets, but Taleb believes that really all major scientific discoveries, historical events, and artistic accomplishments fall under this. He believes that progress is black swans: it's undirected, it's unpredicted. Something you didn't see coming happens, and then that's what replaces essentially whatever was there before. So it's very similar to what we hear about in Popperism, right? I'm not equating them, I'm not saying they're the exact same thing, but they have this similarity with respect to the problem of induction.
So what differentiates black swan theory from what we've already heard in Popperism is that it's about unexpected events of large magnitude, and as a consequence of that big magnitude they play a dominant role in history. Add to that what people end up doing: we humans later convince ourselves that these events were explainable in hindsight. It's this kind of narrative fallacy that we wrap around them. Something we didn't see coming happens, and then we rationalize it in hindsight and say, OK, well, now I know why that happened, blah blah blah. But Taleb is saying that's all false narrative: you're not going to know when a black swan will happen, and you're not going to know what it is. But when it does happen, it's going to have a high impact, it's going to play a dominant role in history, it's going to be where the change comes from, and anything particular we say about it is our rationalization after the fact. OK, so up to this point we've been talking about evidence in the sense that it's not what we tend to think it is. It's not something you keep collecting in order to support the strength of an inductive argument, right?
It's actually just something you keep collecting to try to refute that which exists, and if that which exists can't be refuted, that's what makes it strong. So evidence is not something you keep collecting to support your narrative, and it's a real problem to think of evidence in that fashion. There's another big problem that we haven't dealt with yet. When you think that evidence is something you keep collecting to support a narrative, to support an argument, the other problem is, and you may have heard this phrase before: absence of evidence is not evidence of absence. What that means is, if you think evidence is this good thing you keep collecting to support your narrative, to support your inductive argument, then you are also likely to think that if you don't see the evidence, that must mean whatever you're looking for does not exist. So let's say we're talking about whether or not you should wear masks for COVID-19, right? That's something that's in the news all the time. It's been politicized, unfortunately, and people take different sides. But let's just say we're debating whether or not you should wear masks.
Now, suppose you take what you might think is the scientific approach, this idea that we're going to frame it in terms of induction, the specific to the general, and we're going to structure an argument whereby we go collect evidence to support our conclusion. So let's say somebody says you really should wear a mask. That's the big statement, their grand conclusion. And so you ask them to back that up. We talked about this in the first part: you should support what you're saying, add some premises to back up the claim that we should wear masks. And they go to do that, and they realize they can't find much evidence. Now, I'm not saying this is actually true; let's just hypothetically say that you go searching and you can't really find evidence to support it. If that's the case,
then the other person might say, well, if you can't find any evidence that masks make a difference, then I guess that means they don't make a difference, right? Because presumably people are out there doing studies and research, and they can't find it. I mean, imagine people are doing these studies, trying to see if masks make a difference, because that's an important thing to know, and nobody can find, let's say, that evidence. There's no conclusive research, or at least no agreed-upon research, that says, oh look, we've shown that masks definitely make a difference. So the person arguing that you should wear a mask can't seem to supply any good evidence; they can't supply the premises to back up that big conclusion. So then the other person might say, OK, well, since you can't find any evidence to support the conclusion that masks make a difference, I guess they don't make a difference, because we know people are trying to find it. They're actively working hard to find that quote-unquote proof, those facts, that evidence, those premises you could use to back up the argument that masks do make a difference, and nobody can find it. Maybe they've been searching for a while and they just can't find it.
So they're saying that absence of evidence is evidence of absence. The fact that you can't get the evidence you need to support the one conclusion is itself evidence that the opposite conclusion must be true. Now, suppose you think your slice of reality is close to complete, that you have a really good understanding of the domain you're in, that your model is capturing a lot of what something is. Say this is virology or epidemiology, and people are studying the virus, and they think they really understand how viruses work, and maybe this particular virus: how it propagates, how it multiplies, how transmission works and all this. They have a good understanding of the dynamics of this virus, or let's say they assume they do. Then they're going to think that the absence of evidence could easily be evidence of absence, because they think they have a good slice of reality. They think they understand the situation, and therefore they assume they would be able to find what they're looking for, because their tools are able to find things within this domain. They have the right tools, the right understanding, the right models, right? Even if those are incomplete, they think they have a good slice of reality. So if you believe that, and you go hunting after something, you do a significant amount of work to try to find it, and you don't find it, then you might believe you're justified in thinking that at some point the absence of evidence really is evidence of absence. You believe that the instruments at your disposal would have detected the phenomenon of interest if it were there.
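One way to make that detection-power point concrete is a small Bayesian sketch. This framing is mine, not the host's, and every number in it is invented; it just shows that how much a null result should move you depends entirely on how likely your instruments were to find the evidence if it existed:

```python
# P(effect is real | we searched and found nothing), for a given prior belief
# and a given detection power. All numbers here are hypothetical.

def belief_after_null(prior: float, power: float) -> float:
    """Posterior that the effect exists after a search turns up nothing.

    power = P(we find evidence | effect is real). For simplicity we assume
    that if the effect is absent, the search always comes up empty.
    """
    missed = (1.0 - power) * prior   # effect real, but the search missed it
    empty = 1.0 * (1.0 - prior)      # effect absent, nothing to find
    return missed / (missed + empty)

# Weak instruments (10% detection power): a null result barely moves a 50%
# prior, so absence of evidence is almost no evidence of absence.
print(round(belief_after_null(0.5, 0.10), 3))  # 0.474

# Near-perfect instruments: the same null result is strong evidence of absence.
print(round(belief_after_null(0.5, 0.99), 3))  # 0.01
```

So the person in the mask example is implicitly assuming the high-power case. In a domain as complex as epidemiology, the low-power case is far more plausible, and that's where the reasoning breaks down.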
This actually falls under one of the logical fallacies; we call it the argument from ignorance, or sometimes the appeal to ignorance. When you make an argument from ignorance, you don't allow for the possibility that the answer is unknowable, or only knowable in the future, or maybe completely true, or completely false. There are these other possibilities that you're not allowing for. Just because you have an absence of evidence does not mean that that is evidence of absence. Carl Sagan called this impatience with ambiguity. Just because you don't know doesn't mean you can take the opposite stance, right? It takes time to maybe know the answer, or it's just unknowable and you have to accept that. Each side of the debate can try to bring in premises. You might say, well, there's got to be some evidence, something we can do. We can look at these curves and say, OK, this country said they were wearing a bunch of masks, these ones said they didn't.
Then we can look at the curves and try to compare them, but you're probably going to find facts that support either narrative. Or maybe you really get into it and you show, OK, well, there's a study where they looked at the particulate ejection that happens when you cough with a mask and without, and with the mask you can see there's less particulate matter ejected, and whatever does get ejected doesn't go as far. And so there you go: there's evidence that masks make a difference, right? Somebody coughs with a mask on, and the particulate matter, the droplets of water, don't go as far, they don't have as much reach, and presumably this thing is being transmitted through the air, so masks must make a difference. But somebody can do another study, and they might find that, well, actually, if you wear a mask, the droplets of water end up getting dispersed into finer droplet sizes, and maybe those finer droplets linger in the air longer, so actually it could make things even worse if you do wear a mask. So again, we talked about this earlier: using isolated facts and patching them together to suit your narrative.
You can take facts that are indeed true, or have some element of truth to them, and stitch together a narrative that suits the particular conclusion you're trying to drive towards. And so this brings us back to the problem of induction under complexity, right? This is epistemic uncertainty coming into play. The more complex the situation, the less you're going to know. At some point you're basically in the realm of the unknowable. And so the question is, if you're in that realm, how do you make decisions? If you want to make policies, if you want to make guidelines, if you want to know what to do, should we wear a mask or should we not, what are we supposed to do? Because logic alone is telling us, OK, let's structure it as an argument so we can follow the rules of induction. And now we're being asked to provide premises, to provide evidence, to support a particular conclusion. But we immediately run into a problem: if we can't find any evidence at all, we can try to take the opposite stance, but now we have a logical fallacy on our hands, so it's not really following the rules of logic anyway. And if we do find some information, it can support either narrative, right? Because it's complex, and you're stitching together
seemingly true facts, but how you interpret that stitched-together set of facts is a narrative that can go either way. So now we're talking about a situation where the epistemic uncertainty is high and it's exacerbated by complexity. And when we are in truly complex situations, like epidemiology, particularly with this virus that we're dealing with now, there's all kinds of uncertainty around it. We don't really know exactly how it transmits, or how long it lasts on surfaces, or exactly what the rate of transmissibility is, or whether, if you're asymptomatic, you'll definitely become symptomatic. There's all kinds of uncertainty. And why is the uncertainty there? Because, like anything else complex, there are many people involved, there's a virus we don't know much about, and it's got these multiplicative growth dynamics that explode exponentially. There is this fundamental wall, this opacity, that we cannot resolve past, and we're just never going to get access to that information. So what do we do? If the situation were simple, then it would be something we could probably add up with logic, right? You could say, OK, here's what we know, here are the premises, and we trust them because we assume those premises have a good proximity to reality. So we can accept or reject them, and we can see how conclusions are supported or not.
We can have strong arguments and weak arguments, we can stack them up side by side, and it works. But under complexity we run into the limitations of logic. And that's what wraps all this back into this second part: we're talking about the limitations of logic, and now we can see why it happens, because epistemic uncertainty increases in complex situations. The reality is, people are going to go find evidence that masks do make a difference, and they're going to find evidence that they don't make a difference. They're going to stitch them together into all kinds of narratives, and then people are going to interpret those narratives in different ways. And that's where we are. It's a complex situation; there is no ultimate resolution to it. No scientific model is going to be able to completely answer this. It's too complex a situation, or if it can be answered, the answer is coming in the future, and at that point maybe it's too late to make the big decision about whether or not to wear masks, right? Because the thing already took off. But we want to make guidelines, we want to make policies, we want to be able to rely on science to inform us. So what do we do? We can't use pure logic, because it's in some sense too simplistic, right?
And if we jump into these complex situations and try to model them, try to be really scientific, try to structure things around inductive arguments, then we've got this problem of induction where you can't really know. And the problem of induction gets so bad under complexity that induction pretty much becomes useless. So what is the resolution here? You want to structure things around arguments, because there's a power to that, but there's a severe limitation on our ability to do it, because any premise you bring to bear on your argument is going to carry a massive amount of epistemic uncertainty under complexity. So a lot of people have a sense that logic is limited, and that emotion and context come into play, but a lot of people don't take the time to think about what the mechanism is. Hopefully I've been able to provide that mechanism in this section, to show how and why logic is limited as things get more complex. Now, we're over an hour and forty minutes at this point, so here's what I'm going to do. I've still got two sections I want to cover. Part three of this episode is going to look at actual examples of debates, and we're going to look at both sides:
how does logic work, and how do we run into the limitations of logic based on everything we've been talking about? Then in the fourth part, we're going to look at the resolution to it all. How do we actually meet in the middle? How do we reach a point of mutual respect, where you can agree on some points, know that you don't agree on others, but still reach a resolution that lets you move forward? I think we've got a really strong foundation to do the third and fourth parts. So what I'm going to do is stop the episode here. This will be part one, and I'll release it now. Stay tuned; I'll release part two shortly, and that's where we'll cover the third and fourth parts. I hope everything made sense in this episode, and I hope you enjoyed it. If you have any questions, reach out. You can find me on Twitter. Stay tuned for those last two parts in part two of this episode. I'm looking forward to it, and I think you should be too. We'll see you soon.