#111 Nerdinsights from the Football Field, with Patrick Ward - podcast episode cover

#111 Nerdinsights from the Football Field, with Patrick Ward

Jul 24, 2024 · 1 hr 26 min · Season 1 · Ep. 111

Episode description

Proudly sponsored by PyMC Labs, the Bayesian Consultancy. Book a call, or get in touch!


Our theme music is « Good Bayesian », by Baba Brinkman (feat MC Lars and Mega Ran). Check out his awesome work!

Visit our Patreon page to unlock exclusive Bayesian swag ;)

Takeaways:

  • Communicating Bayesian concepts to non-technical audiences in sports analytics can be challenging, but it is important to provide clear explanations and address limitations.
  • Understanding the model and its assumptions is crucial for effective communication and decision-making.
  • Involving domain experts, such as scouts and coaches, can provide valuable insights and improve the model's relevance and usefulness.
  • Customizing the model to align with the specific needs and questions of the stakeholders is essential for successful implementation. 
  • Understanding the needs of decision-makers is crucial for effectively communicating and utilizing models in sports analytics.
  • Predicting the impact of training loads on athletes' well-being and performance is a challenging frontier in sports analytics.
  • Identifying discrete events in team sports data is essential for analysis and development of models.

Chapters:

00:00 Bayesian Statistics in Sports Analytics

18:29 Applying Bayesian Stats in Analyzing Player Performance and Injury Risk

36:21 Challenges in Communicating Bayesian Concepts to Non-Statistical Decision-Makers

41:04 Understanding Model Behavior and Validation through Simulations

43:09 Applying Bayesian Methods in Sports Analytics

48:03 Clarifying Questions and Utilizing Frameworks

53:41 Effective Communication of Statistical Concepts

57:50 Integrating Domain Expertise with Statistical Models

01:13:43 The Importance of Good Data

01:18:11 The Future of Sports Analytics

Thank you to my Patrons for making this episode possible!

Yusuke Saito, Avi Bryant, Ero Carrera, Giuliano Cruz, Tim Gasser, James Wade, Tradd Salvo, William Benton, James Ahloy, Robin Taylor, Chad Scherrer, Zwelithini Tunyiswa, Bertrand Wilden, James Thompson, Stephen Oates, Gian Luca Di Tanna, Jack Wells, Matthew Maldonado, Ian Costley, Ally Salim, Larry Gill, Ian Moran, Paul Oreto, Colin Caprani, Colin Carroll, Nathaniel Burbank, Michael Osthege, Rémi Louf, Clive Edelsten, Henri Wallen, Hugo Botha, Vinh Nguyen, Marcin Elantkowski, Adam C. Smith, Will Kurt, Andrew...

Transcript

Today's episode takes us into the dynamic intersection of Bayesian statistics and sports analytics with Patrick Ward, the Director of Research and Analysis for the Seattle Seahawks. With a rich background that spans from the Nike Sports Research Lab to teaching statistics, Patrick brings a wealth of knowledge to the table. In our discussion, Patrick delves into how these methods are revolutionizing the way we understand player performance and manage injury risks in professional sports.

He sheds light on the particular challenges of translating complex Bayesian concepts for coaches and team managers who may not be versed in statistical methods but need to leverage these insights for strategic decisions. Patrick also walks us through the practical aspects of applying Bayesian stats in the high-stakes world of the NFL.

From selecting the right players to optimizing training loads, he illustrates the profound impact that thoughtful statistical analysis can have on a team's success and players' well-being. For those of you who appreciate the blend of science and strategy, this conversation offers a behind-the-scenes look at the sophisticated analytics powering team decisions.

And when he's not dissecting data or strategizing for the Seahawks, Patrick enjoys the simple pleasures of reading, savoring coffee, and playing jazz guitar. This is Learning Bayesian Statistics, episode 111, recorded June 19, 2024. Welcome to Learning Bayesian Statistics, a podcast about Bayesian inference, the methods, the projects, and the people who make it possible. I'm your host, Alex Andorra. You can follow me on Twitter at alex.andorra, like the country.

For any info about the show, learnbayesstats.com is Laplace to be. Show notes, becoming a corporate sponsor, unlocking Bayesian merch, supporting the show on Patreon, everything is in there. That's learnbayesstats.com. If you're interested in one-on-one mentorship, online courses, or statistical consulting, feel free to reach out and book a call at topmate.io slash alex underscore andorra. See you around, folks, and best Bayesian wishes to you all.

And if today's discussion sparked ideas for your business, well, our team at PyMC Labs can help bring them to life. Check us out at pymc-labs.com. Hello, my dear Bayesians! I have some exciting personal news to share with you. I am thrilled to announce that I have recently taken on a new role as a senior applied scientist with the Miami Marlins.

In this position, I'll be diving even deeper into the world of sports analytics, leveraging Bayesian modeling, of course, to enhance team performance and player development. And honestly, this move is so exciting to me and solidifies my commitment to advancing the application of Bayesian stats in sports. So if you find yourself in Miami, or if you're curious about the intersection of Bayesian methods and baseball or team sports in general, don't hesitate to reach out. OK, back to the show now.

Patrick Ward, welcome to Learning Bayesian Statistics. Thanks for having me. I listen to every episode. I think every year at the end of the year, Spotify tells me that it's one of my most listened-to podcasts. So it's a pleasure to be here. Hopefully I can live up to your prior, I don't know. You've had some pretty big timers, but yeah. No, yeah. So first, thanks a lot for being such a faithful listener. I definitely appreciate that.

And I'm always amazed at the diversity of people who listen to the show. That's really awesome. And also I want to thank Scott Morrison, who put us in contact. Scott is working at the Miami Marlins. He's a fellow colleague now. That's a change for me, and a great change. I'm extremely excited about that new step in my life. But today we're not going to talk a lot about baseball. We're going to talk a lot about US football.

So today, European listeners, when you hear football, we're going to talk about American football, the one with a ball that looks like a rugby ball. And so Patrick, we're going to talk about that. But first, as usual, I want to talk a bit more about you. Can you tell the listeners what you're doing nowadays? So I gave your title, your bio in the intro, but maybe like tell us a bit more in the flesh what you're doing and also how you ended up doing what you're doing.

Yeah. Well, currently I'm at the Seattle Seahawks, which is one of the American football teams in the NFL. And I'm the director of research and analysis there. So we kind of work across all of football operations, everything from player acquisition, front office type of stuff, to team-based analysis and opponent analysis.

And just kind of coordinating a research strategy around how we attack questions for the key decision makers or the key stakeholders across coaching, acquisition, even into player health and performance and development and things like that. And I got here, this is my 10th year, I got here from Nike. So I was at Nike in the sports research lab actually working for nearly two years as a researcher.

And the way that I got there was I was doing some projects for Nike around applied sports research, and at the time, I think they had just become the biggest sponsor of the newly minted National Women's Soccer League. And they said, we want to do something around this. And so, you know, we were kind of kicking around ideas.

And one of the ideas we had was, what if we went out and we tested all of the women in the league, like tested them sprinting and jumping and power output and things like that? And then we could basically build archetypes, and that would be useful for, you know, apps on your watch and on your phone, and girls in the field could compare themselves to their favorite athletes and stuff. So they let us do it.

And they sent me on the road for an entire off season, the entire off-season training of the National Women's Soccer League. I went around the country to every single team, myself and four colleagues, and we tested every woman in the league. And so we had the largest data set on women's soccer players that anyone could have. So we did some conference presentations and things like that with that data.

And lo and behold, Nike was there and the Seattle Seahawks called down to Nike and said, hey, we hear there's this test battery and we'd love to see what our players do on it. And so I went up and I did a project for them around that. And then they kind of just said like, what if you just did this kind of stuff all the time? And so that's how I started out 10 years ago. And I basically started out just in applied physiology, which was my background.

And I was doing wearables, wearable tech for the team, like GPS and accelerometry and things like that. And then that kind of progressed into draft analysis and player evaluation and things like that. And it just kept growing until, yeah, 10 years later, here we are. Yeah, that's a great background.

I love it because, I mean, it definitely seems like you've been into sports since you were at least a college graduate, but also there is a bit of randomness in this. I love that. Of course, as a fellow Bayesian, I'm always interested in the random parts of anybody's journey. Actually, how much Bayesian stats do you have in that journey and also in your current work?

How Bayesian, in a way, is your work right now, but also, how were you introduced to Bayesian stats? Well, I mean, anyone who has watched American football knows it's a game of very, very small sample sizes. Up until two years ago, we only played 16 games; we play 17 games now. So unlike most of the other sports: baseball has 162 games and several hundred at-bats; basketball and hockey have 82 games, many attempts.

Also, the players in a lot of these sports are all doing the same things. In baseball, aside from the pitchers, everybody's going to go to the plate and hit. In basketball, everybody has a chance on the court to dribble the ball, shoot, score, pass, get assists, get blocks, et cetera. Football is really unique because it's a very tactical game. There's discrete events in terms of plays, stop and start.

But because of the tactical nature of it and one ball, there's only certain positions that touch the ball. There's only certain opportunities that players are going to have. So that was always an issue. And when I did my PhD, a big part of my PhD was using mixed models to look at physiological differences between players on the field with GPS and accelerometry.

And I always thought of mixed models, even though I didn't know it at the time, because I hadn't really learned anything Bayesian yet, as this bridge to Bayesian analysis, because you have these fixed effects which behave like our population averages, our population base rates, I guess you could say.

And the random effects are sort of like, hey, we know something about you or your group, and therefore we know how you deviate from the population. And then with those two bits of information, we're also like, hey, here's someone new in the population, or maybe someone that we've only seen or observed do the thing one time. Our best guess therefore is the fixed effects portion of this until proven otherwise.
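
As an aside, this fixed/random-effects intuition maps directly onto a hierarchical Bayesian model. Here is a minimal sketch in PyMC, with simulated data and invented names, not the Seahawks' actual models: each player's mean is partially pooled toward the population average, and a brand-new player's best guess falls back to that population term.

```python
import numpy as np
import pymc as pm

# Simulated data: 20 players, 5 observations each (all numbers invented)
rng = np.random.default_rng(42)
n_players, n_obs = 20, 5
player_idx = np.repeat(np.arange(n_players), n_obs)
true_means = 100 + rng.normal(0, 5, size=n_players)
y = rng.normal(true_means[player_idx], 10)

with pm.Model():
    mu = pm.Normal("mu", 100, 20)                  # "fixed effect": population average
    sigma_player = pm.HalfNormal("sigma_player", 10)
    offset = pm.Normal("offset", 0, 1, shape=n_players)
    player_mean = mu + offset * sigma_player        # "random effects": player deviations
    sigma = pm.HalfNormal("sigma", 10)
    pm.Normal("obs", player_mean[player_idx], sigma, observed=y)
    idata = pm.sample()  # a new, unobserved player is best guessed by mu
```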

So I always had that in the back of my head going through this. But, you know, my first two or three years in the NFL, we always just used to kind of throw our hands up when we saw small samples. We'd be like, yeah, it's 50%, but it's such a small sample, we can't really know. And we didn't really have a good way of sorting out what to do with that information.

Because, as you know, something like 1 out of 10, 10 out of 100, and 100 out of 1,000, those are the same proportion, but different levels of information are contained within those proportions.
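
To make that concrete, a quick illustrative sketch, assuming a flat Beta(1, 1) prior: the three proportions have the same mean, but the posterior uncertainty shrinks as the sample grows.

```python
from scipy import stats

# Same 10% proportion, very different amounts of information
for hits, trials in [(1, 10), (10, 100), (100, 1000)]:
    posterior = stats.beta(1 + hits, 1 + trials - hits)  # Beta(1,1) prior + data
    print(f"{hits}/{trials}: posterior sd = {posterior.std():.3f}")
# 1/10:     sd ~ 0.10
# 10/100:   sd ~ 0.03
# 100/1000: sd ~ 0.01
```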

And I stumbled upon a paper, I think it was from 1977 or so, by Efron and Morris, and it was about Stein's paradox. And I probably stumbled on it because I was like, you know, there's so much in sabermetrics, someone in baseball has probably figured this out before. And so I was probably googling something like small samples, baseball statistics, sabermetrics, blah blah blah, and I stumbled upon this paper about Stein's paradox. And the crux of the

paper was: if we observe these, I think it was 12 or 18, baseball players through the first half of the season up to the All-Star break, and we see the number of times they went to the plate and, you know, the number of times they hit, we have a batting average. If we take the observed batting average through the first half of the season, how well does that predict the batting average at the end of the season, meaning now they've gone through the second half?

And you look at that and you're like, okay, you know, what's this all about? And so the first thing they do is set up this argument that, well, that doesn't do a very good job, because some of these players batted, you know, five times or three times. Certainly a player who went three for three has a hundred percent batting average. We don't think this is the greatest baseball player of all time yet, because we've only seen them do this thing three times.

So the basic naive prediction of using the first half of the season to predict the second half wasn't very good. And so in that paper, they introduced this kind of simple Bayesian model of saying, well, we know something about average baseball players. What if we weighted everybody toward that?

And lo and behold, that did a bit better of a job constraining the small-sample players. You know, a guy that goes 0 for 10, which is totally possible in baseball when you have hundreds of at-bats, we don't think that's the worst hitter in baseball. And so, you know, constraining those players told them something about what they expected to then see at the end of the season. And so through that paper, I then found this blog by David Robinson, who's an R programmer.

And it was all about using empirical Bayesian analysis for baseball. And then he made it into a nice little book that you could buy on Amazon for, I don't know, $20 or something. You know, and I read those two things and I was like, this is incredible. This is exactly what I've always wanted to know. And so I went in the next day to our other analyst, at the time there were only two of us, and I said, I think I figured out a way we could solve small sample problems.
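
For readers who want the flavor of that empirical-Bayes approach, here is a minimal sketch in the spirit of Robinson's book; the Beta prior parameters below are invented stand-ins for values you would estimate from league-wide data.

```python
# A league-wide Beta prior (invented numbers, mean ~ .256) shrinks each
# player's observed batting average toward the league average
alpha0, beta0 = 79.0, 229.0

def shrunk_average(hits, at_bats):
    return (hits + alpha0) / (at_bats + alpha0 + beta0)

print(shrunk_average(3, 3))      # 3-for-3: ~0.264, not 1.000
print(shrunk_average(0, 10))     # 0-for-10: ~0.248, not 0.000
print(shrunk_average(120, 400))  # 120-for-400: ~0.281, stays near the observed .300
```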

And that was it. After that, you really couldn't convince me that this wasn't a great way of thinking. That doesn't mean that everything we do has to be Bayesian. Certainly there are other things that we do that use, you know, different tools, like machine learning models and neural networks and things like that. But certainly when we start thinking about decision-making: how do I incorporate priors, domain expertise? How do I fit the right prior?

You know, if you went 0 for 5 in your first at-bats, let's say in baseball, but you were a college standout and an amazing player in AAA, I probably have a stronger prior that you're maybe a slightly better than average baseball player than if you went 0 for 5 and you were a horrific college player, you weren't very good in AAA, and you were really the last person on the bench that we needed to call. And maybe that prior is much lower.

So utilizing that information in order to help us make decisions going forward, that was kind of the money for me. And so, how much do we use it? I mean, when one of our new analysts started two years ago, I think the first thing was like, how much do you know about Bayes? And it was like, well, I never really learned that in school, blah, blah. And it was like, okay, here's two books. Here's a 12-week curriculum.

We're going to meet every week and you're going to do projects and homework and reading. And that was it. It was like, you have to learn this, because this is how we're going to think, and this is how we're going to process information and communicate information. Well, how about that? I told the listeners that we were not going to talk a lot about baseball, but in the end we are. It all comes back to baseball, I think. Yeah, in sports analytics, it all comes back to baseball. Certainly, yeah.

Yeah, okay. If I understand correctly, you were motivated a lot by low sample sizes and being able to handle all of that in your models. That makes a ton of sense. Like a lot of people, and I've seen a lot of clients like that, you were definitely motivated by a very practical problem that you were having. I mean, most people enter the Bayesian field through that.

Something that I'm actually very curious about, because I could keep talking about that for hours, but I really want to dive into what you're doing at the Seahawks and also, you know, how Bayesian stats are helpful to what you guys are doing.

I think that's the most interesting part for the listeners: understanding basically how they themselves could apply Bayesian stats to their own problems, which are not necessarily in sports. But I think sports is a really good field to think about that, because you have a lot of diversity, and you also have a lot of somewhat controlled experiments. You have a lot of constraints, and it's always extremely interesting to talk about that.

Maybe you can start by basically explaining how Bayesian stats are applied in your current role for analyzing player performance and injury risk. Because now that I work directly in sports, something I'm starting to understand is that projecting player performance and being able to handle injury risk are two extremely important topics. So maybe let's start with that. What can you tell us about that, Patrick? Okay. Let's see. Which one should I start with?

I guess I'll start with injury risk, I suppose. Injury is like... I mean, this is a super difficult problem to solve. You know, I've written a number of papers on this; I think you can link to my ResearchGate. And there's a number of methodology papers that we've written that have looked at things like this. And I think it's complicated because, one, there's a ton of inter-individual differences as far as why people get hurt.

There's a ton of things that we probably don't know are important yet, because we can't measure them, or we at least can't measure them in the real-world applied setting; maybe in a lab you can. And then there's other things that we just don't know because it's an epistemic problem. Like we're just stupid about it, we're naive; there are other things out there that maybe we're just unaware of yet. And so it's a really hard problem to try and solve.

So when I see papers that basically come out and say, here's an injury prediction model, and they're estimating prediction as a one or a zero, like a yes or a no, a binary response, and they give a nice little two-by-two table and they talk about how well their model did, I'm always like, how is that useful to the people who actually have to do the work?

Because in reality, what we're dealing with is probably not unlike a hedge fund manager managing the risk of their portfolio. And if you think of each player, or each athlete that you deal with, as a portfolio, they each have some level of base risk. So if we know nothing about you, you really have to have a pretty good handle, in your sport, on what the base rates of risk of injury are for position groups and players of different ages and things like that.

So that might be an initial model, right? And then from there, the players go out and they do things and they play and they perform and they compete, and they get dinged up and they take hits and they get, you know, hit by pitches or they get tackled really hard, things like that. And we collect that information and we're basically just shifting the probabilities up and down based on what we observe over time, until that probability reaches a certain threshold.

And of course you could use a posterior distribution, so you have an integral of how much of the probability distribution is above or below a certain threshold. Then you have the opportunity to have a discussion about when to act or what to do. And how you act and when to act is going to be dependent on your tolerance for risk, or your coach's tolerance for risk.

If it's your best player, if it's the MVP of your team, and it's week two of the season and the risk probability... well, let's say we're using this as a model.

Some of the stuff with Scott that you mentioned earlier, which we've worked on, is return-to-play type models, where it's like, okay, the athlete has, you know, sustained an ankle sprain and we're rehabbing them. And we have, you know, a test or several tests, a test battery, that tells us where that athlete is on their return-to-play timeline.

Let's say it's week two of the season and we say, well, the probability distribution, the posterior distribution, looks like this. Here's the threshold where we'd feel comfortable releasing this athlete back to full-on competition, and there's a 30% chance they're in good shape and a 70% chance that they're below that threshold. In week two of the season, we probably want to say, you know what? Let's not take that risk this week.

Let's be a little bit more risk averse here, because it is the best player, and let's wait till we have more of the distribution on the right side of the threshold. Alternatively, if it's the final game of the season, if it's the Super Bowl or the World Series or the Champions League final or something like that, you're probably going to take that risk, because you need the best player out there.
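
A minimal sketch of that decision logic, with invented numbers roughly matching the 30/70 split above: compute the posterior mass over the release threshold, then let context set how much certainty you require.

```python
import numpy as np

# Stand-in posterior draws of a readiness score (invented numbers)
readiness = np.random.default_rng(1).normal(0.65, 0.10, size=4000)
threshold = 0.70  # level at which we'd release the athlete to full competition

p_ready = (readiness > threshold).mean()
print(f"P(above threshold) ~ {p_ready:.0%}")  # ~30%

# The same posterior supports different calls in different contexts
required_certainty = {"week 2": 0.90, "championship final": 0.25}
for context, required in required_certainty.items():
    print(context, "->", "release" if p_ready >= required else "hold")
```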

And so when I think about injury risk modeling, what I really think about is: how do we evaluate this individual's current status on our sort of risk score, our risk distribution? When do we feel like we need to intervene and do something, and when are we going to feel like, this is fine, continue training as is? And I think that's the tricky part. It's not easy. I don't think I've solved anything. I don't think anyone has, but...

Certainly from the perspective of our staff, we can all sit down with a performance staff of strength coaches and dieticians and medical people and have these conversations. And what makes it nice about using a Bayesian approach is that we can also take into account domain expertise that we might not have in the data.

So if we sit down in a Monday meeting and we say, you know, this player, this is where they're currently at, and this is their risk status, and, I don't know, I don't really feel comfortable with that, how do you feel about it? And then one of the medical people says, you know, he's been complaining that his hamstring feels really tight and he's been getting treatment every morning.

Well, that's not data that we would be collecting, but that's valuable domain information that this individual who's working with the player now adds to this. And it's just like anything in probability: if we have two or three or four independent sources all kind of converging on the same outcome, on the same end point, we can probably feel really good about making that decision and saying, hey, let's do something about this, let's act now, right?

So that's kind of how I think about it from that side of things. From the performance side of things, the development side of things... I mean, it'd be way different for you guys in baseball, because you draft a player and you don't expect them to maybe get to the major leagues and contribute till 23, 24, 25 years old.

You know, for us, you draft a player and, you know, next year they're playing, they're ready, they're in the mix. So in that regard, in my head, I would be thinking of models that are mapping the growth potential of an individual. How are they progressing through the minor leagues? Which attributes matter?

And then maybe from there, answering questions like, what's the probability that this player makes 20 starts in the major leagues, or starts for three seasons, whatever end point makes sense to the decision makers, obviously. You know, for us, it's more about player identification. And again, football is a sport of small samples.

And so in their college years, some of these kids might really only be a starter or a full-time player in their junior and senior year, or maybe just their senior year of college. Additionally, unlike the NFL, where at that highest level the talent is much more homogenous, you get to the college football ranks and you have just this diversity of talent, where you might have a big-time team playing a really low-level opponent.

And so you have to adjust things. Being able to hand the ball off to your running back who's playing against a very low-level opponent, and he goes for 500 yards or something absurd, 200, 300 yards in a game, that has to be adjusted and weighted in some way, because it's not the same as going 200 or 300 yards against a big-time opponent. And the big-time opponents are more similar to the NFL players that they're going to play against.

And so, you know, all of these types of things fit into models, hierarchical models and Bayesian models, which help us utilize prior information. And the other way that Bayesian models are useful here is, you know, sometimes we're dealing with information that's incomplete, because we can't observe all of the cases. For example, in college sport, Division I is the top division, and within it you have FBS and FCS, and then you have Division II and Division III.

So if you pull all the Division II kids that have ever made it as a pro athlete, the list is very small, but they're kids that made it. And so if you were to just build a normal model on this, it would say, well, the best players clearly come from these lower-level schools, because all of the ones that we have seen have made it, have been successful. And in reality, there's hundreds of thousands of kids from that level that have never made it. So we have to adjust that model in some way.

We have to weight that prior back down. Yeah, this guy is really, really good at that level, but our prior belief on him making it is very, very low. I mean, he'd have to be so exceptional in order to make it. And this is where, oftentimes, people rail on, use weakly informative priors, let the data speak a little bit.

But there are times, in situations like these, where I feel like you could probably put a slightly stronger prior on this and be like, man, this guy's really going to have to do something outstanding to get outside of the distribution that we believe is on this, just given what we know. Okay, yeah, that's very interesting. That's a very good point. It's related to survivorship bias, in a way. Concretely, how do you handle these kinds of cases?

Is it a matter of using a different prior for these types of players, or something else? We try to do this in a few different ways. One is you try and make basically equivalency metrics, saying if you did X at this low level, it in some way relates to Y at this other level. So you try and normalize players based on players that you've seen move, say, between levels of the game.

So, again, if you think about it from a baseball perspective, you know, hitting 40 home runs in AA baseball might in some way convert to, like, 33 home runs in AAA and 24 home runs in the MLB, or 12 home runs in the MLB, or whatever it might be, right? So trying to identify equivalencies between those so that we can then constrain everybody. The other way is just, like you said, putting a prior on it:

knowing the level that the person is playing at, you would have a lower prior. For example, it's just like playing time. I think about playing time and performance as sort of this rising curve that goes to an asymptote of some upper level of performance. For the players way at the left, who have a very small number of observations, it would be silly to say that my prior for those players is the league average. There's a reason why they're not playing very much.

It's probably because people don't think they're very good, right? So somewhere on that curve, for each of those numbers of observations, across whatever performance metric we're looking at, there's going to be a specific prior on that continuous distribution. And that's where we would kind of draw a stake in the ground and say, we probably think, based on what we know, that this player is closer to these players than he is to those players.
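
Two toy sketches of what Patrick describes, with every number invented: an equivalency conversion between levels, and a playing-time-dependent prior that rises toward the league average as observations accumulate.

```python
import numpy as np

# Equivalency factors, in practice estimated from players who moved
# between levels (invented values)
level_factor = {"AA": 0.60, "AAA": 0.73, "MLB": 1.00}

def mlb_equivalent(home_runs, level):
    return home_runs * level_factor[level]

print(mlb_equivalent(40, "AA"))  # 40 AA home runs -> 24 MLB-equivalent

# A made-up saturating curve: few observations -> prior near the floor,
# many observations -> prior near the league average
league_avg, floor_level = 0.250, 0.180

def prior_mean(n_obs, rate=0.02):
    return league_avg - (league_avg - floor_level) * np.exp(-rate * n_obs)

for n in [5, 50, 300]:
    print(n, round(prior_mean(n), 3))  # 0.187, 0.224, 0.250
```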

Okay, yeah, yeah, I see. Yeah, that definitely makes sense. And yeah, that point about playing time already tells you something. Because if the player plays less, then very probably you already have information about his level.

And that means he's at least not as good as the A-level players that play much more. The only time you get in trouble with that is with an endowment effect, where, like in Major League Baseball, there's been some research on players who are drafted very high, in the first or second round, getting progressed up through the minor leagues faster than players who were drafted lower, even if they don't outperform those players, just as a consequence of being a

high draft pick. That one's a tricky one, but at some point it's like, actually... And this is where, with posterior distributions, it's almost like doing an A/B test. We've got two players, and what's the probability that this guy is actually outperforming the other guy, even though the other guy might've been, you know, a higher draft pick or something like that?
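
That comparison is easy to read off posterior draws. A minimal sketch, with stand-in numbers:

```python
import numpy as np

rng = np.random.default_rng(7)
skill_a = rng.normal(0.52, 0.04, size=4000)  # later pick, stand-in posterior
skill_b = rng.normal(0.50, 0.02, size=4000)  # high draft pick, stand-in posterior

# Share of draws where A beats B estimates P(A outperforms B)
print(f"P(A > B) = {(skill_a > skill_b).mean():.0%}")
```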

And so we try and at least display that visually and have those conversations. It's, in my head at least, and maybe I'm wrong, a nice way of helping people understand the uncertainty, which is really important. You know, I used to work with a guy who, whenever I would present some of this stuff at work, would be like, stop doing that.

Every time you present, you talk about what the uncertainty and the assumptions and the limitations are; just give them the answers. And I'm like, well, it's important that they know what the limitations and assumptions are behind this, because we don't want to talk past the sale and sell them on something that, you know, isn't really there. There's been times where I've had to stop someone and just be like, hold on.

This analysis definitely can't tell us that. What you're saying right now, it can't tell us that, so let's not try and make this more than it is. And also just, you know, conveying your uncertainty, I mean, that's just super important, because it's really, really hard. I mean, we're all going to fail at trying to identify talent. It's really hard to identify why one player is going to succeed over another. And so, you know, in some way it's not binary.

It's not, do you like this guy or not? Is he good or bad? Is this guy better or worse than the other guy? There are a lot of factors that go into why someone has success. And so I think conveying that uncertainty is really important. And obviously, the more observations that we have of you doing the thing, the more certain we are that this is your true level of performance. But it takes a while to get there. So we have to just be honest about that. Yeah, yeah.

I think that's actually related to something I wanted to ask you about a bit more generally, you know: the most significant challenges that you face when applying Bayesian stats in sports science, and how you address them. I'm guessing you already started talking a bit about that, so let's go there. And then I have other technical questions for you about the kinds of models and the usefulness that Bayesian stats has in your field. But I think this is a good moment to address these questions.

But I think this is a good moment to, to address these. questions. think the biggest or there's a few challenges. One challenge is not everybody is excited about a posterior distribution like you might be. Most of the time, they just want an answer. Tell me what to Give me the yes or no, make it binary. And so that's always tough. And you're trying to oftentimes convey this to non -technical audiences or people who are good at doing other things.

They're not math people or they're not stats people, and that's okay. So what always makes it challenging is, why are you showing me this distribution? I don't understand what I'm supposed to take from this. Just tell me what to do. Tell me which guy's better. Tell me which guy's worse. So that's always hard, and that takes a lot of patience and communication. For a while, we used to do weekly sit-downs with our scouts where we would teach them about one stat a week. And we'd go slow.

And we'd also try, as best as possible, to relate things back to the currency that they speak in. And scouts and coaches, the currency they speak in is video, not charts and graphs. So the more that we can connect our analysis to video cut-ups, the better, because then they can see it. And then they understand why a model says what it says, or makes a decision, or why it has assumptions. And this is also super valuable too, because they give feedback.

And this is also super valuable too, because they give And they say, it's, saying that, you know, the model is saying that, this is, is the outcome, but I can see why it's because these four other things happen. It's like, wow. Well, we could probably account for that. And we never, I just didn't know it, right? That's why they're domain expert and, and, and I'm not. so. You know, the patience around communicating stats and numbers is always difficult and also knowing what people like.

When I first started, everybody would tell you, you've got to have an amazing dashboard, got to have charts and graphs, you know, and all that stuff. And what I found was there were a lot of people who were like, I don't even know what I'm looking at. I hate these things. Just give me the table of numbers. It's like, okay, well, maybe a table of numbers with just some conditionally formatted information.

And also, you know, I have an academic side: I supervise PhD students and master's students, and I teach a master's class in statistics at a college. So I guess people on the academic side would hate what I'm about to say, but you have to recognize the environment you're in. And sometimes just changing the verbiage helps. Instead of calling things the low credible interval and the high credible interval, we just call it the floor and the ceiling.

And people are like, yeah, this guy's floor is a bit higher than the other guy's floor, and this guy's got a better ceiling. And, you know, academically you'd get shot for that, but those kinds of things go a long way, because it brings the information to the end user. And if you want them to start to take this information into their decision calculus, you have to get them comfortable. And sometimes it's just meeting them with terminology that helps.
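
In code, the floor and the ceiling are just the two ends of a credible interval on the posterior draws. A quick sketch with invented numbers:

```python
import numpy as np

draws = np.random.default_rng(3).normal(0.55, 0.08, size=4000)  # stand-in posterior
floor, ceiling = np.quantile(draws, [0.05, 0.95])               # 90% credible interval
print(f"floor: {floor:.2f}, ceiling: {ceiling:.2f}")
```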

And so I think that's a big one. Those are big challenges in communicating this stuff. Yeah, definitely. And I resonate with that; I've had the same issues. I'll be able to talk more precisely about sports in a few months. But when it comes to a lot of other fields, whether it's marketing or biostats or electricity forecasting, the issues are related to these. They're also extremely diverse. So that's interesting.

You definitely don't have a one-size-fits-all. What's extremely important, basically, from my experience, is to know the model extremely well. And yeah, if you have coded the model yourself, you usually know it really well, because you spent hours on it to try and get it to work and understand what it's doing. And, as you were saying, I think it's extremely important to be able to tell people what the model cannot tell you.

And yeah, I think these are extremely good points to try and balance what people are usually wondering about. And that's also where I think having a Bayesian model is extremely interesting, right? Because the Bayesian model by definition is extremely open-box, and you have to write down your assumptions. And so you know much better what the model is doing than with a black-box model. Yeah, I mean, that's another good point.

If you go into a meeting and you have model outputs, and your only reason, when asked why it prefers this over that, is because the model said so, people aren't going to be super excited about that. Knowing why things are happening, you know, this really plays into how you validate and check your models. And so, within that Bayesian sort of world, building simulations is a big part of it.

And building simulations to see how the model behaves under different constraints and different pieces of information, that's really important, because it gives you useful context to talk about, and it gives you useful information in order to head things off at the pass when you know there's going to be some gotchas and some trouble if people have certain types of questions. You can head things off at the pass because you're already aware of them.
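
One common form of this kind of simulation is a prior predictive check: simulate data from the model before it sees anything real and confirm the outputs look plausible. A toy sketch in PyMC, with invented names and numbers:

```python
import pymc as pm

with pm.Model():
    skill = pm.Beta("skill", 2, 5)            # prior on a success rate
    pm.Binomial("made_plays", n=20, p=skill)  # 20 hypothetical attempts
    prior_pred = pm.sample_prior_predictive(500)

# Unobserved variables land in the "prior" group of the returned InferenceData
sims = prior_pred.prior["made_plays"].values
print(sims.mean(), sims.min(), sims.max())    # sanity-check the simulated range
```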

Another thing that I do think is really useful here, and maybe in some of your prior work in consulting you've stumbled on or used frameworks like CRISP-DM and things like that: in statistics, there's PPDAC, problem, plan, data, analysis, and conclusion. Those types of frameworks help just because, again, a lot of times we're dealing with non-technical audiences, and they're trying to give you a question, saying, hey, can we look at this?

And oftentimes these things are very vague and not clearly defined. And, you know, my younger self would take that and run away, do something for a week or two, and then come back and be like, hey, here's this thing you asked about. And usually the reply is, that's kind of cool, but I was thinking of it like this, and I would do this with it.

And it's like, man, if you told me that two weeks ago, I would have done something else. So using those kinds of frameworks does a few things. One, it gives us the opportunity, like I always tell our analysts, to question the question. Right? So when they have a question, I'm always sitting there and I'm like, okay, well, what would you want to do with this? How do you foresee yourself using it to make a decision?

What's the cadence that you would need to access this information? If I were to get it to you tomorrow, what kind of decision would you want to make? Really Socratic questions, you know, question the question. And that does a few things. We usually get to one of two different results, and both of them are good. The first is I get them to then walk through that five minutes with me and clearly define what it is they're looking for.

That's great. The other result is the opposite, but it's also a good result, which is we get about three minutes in and they go, you know what? I haven't thought about this well enough. Let me think through it a bit more and come back to you. In which case I didn't waste the time building things and scraping and cleaning data and doing all that stuff.

The other thing that those frameworks do, and I try and get analysts to think like this, is utilize each step within those frameworks as touch points back to the person who asked you the question. Hey, this is where we're at. We've collected this kind of data. These are the things we're thinking. These are the features that we're thinking about using. What do you think about that? Anything else you can think of? By doing that, along each step of the way, they get to see the model developed.

They get to provide input. And what that does is it gives them a bit of ownership over it. So when you get to the end result, they're like, geez, this was built exactly in my vision, and now I'm excited to use it. And that's a really cool thing too. Yeah. Yeah. Thanks for that detailed answer, Patrick. I can definitely hear the 10 years of experience working on that. That makes me think about a lot of other things.

Yeah, definitely the same for me, I would say. My personal evolution has been trying to really understand the question the consumer of the model is trying to get to, right? Like, what actually is your question? Because you have something in mind, but maybe the way we're talking about it right now and the way I have it in mind is not what you want. And so, yeah, as you were saying, a good model is really one that's custom-made, and that's fine and hard work that takes time.

So before investing all that time in doing the model, let's actually make sure we align and agree on what we're actually looking at and studying. I think that's extremely important. Yeah, no doubt. I think that's often the hardest part, because it's just getting people to really define the question. That's probably, I mean, that and making sure that you have good data. Those are the two biggest things.

The model building part and things like that happen a little bit easier once you do the first two things. That's always the tough part. Yeah, yeah, yeah. Actually, continuing on that topic, how do you communicate these statistical concepts? And honestly, a lot of them are really complex. So how do you communicate that to non-stats people in your line of work? I'm guessing that would be scouts, as you talked about, coaches, players. How do you make sure they understand

what you're doing and, in the end, are able to use it? Because we talked about that in episode 108 with Paul Sabin: if your model is awesome but not used, it's not very interesting. So yeah, how do you do that? First, trying to really understand what kind of cadence this is going to be on. So some questions, especially in sport, get asked more from the knowledge-generation standpoint, meaning: I have a question.

I think it'll help us with, you know, updating our priors, our prior beliefs about the game. Maybe things have changed, maybe rule changes have altered things, or something like that. Can we study this? A question for knowledge generation requires a different output than something that's for weekly or daily consumption. So if it's for knowledge generation, that's usually communicated in the form of a short written report.

The question at the top, the bottom line up front, here's the four bullet points, and then the nitty-gritty: this is how we went about studying it, charts and graphs. And usually it's a page or two, in a PDF or maybe an interactive HTML file where they can see things, with a table of contents so they can go to different sections.

If the question is directed at stuff that's required to be evaluated weekly or daily, like, I need to see this every week because we're going to be evaluating a certain player or an opponent, or, I need to see this daily because it's player-health related, something like that, we're always thinking in terms of web applications. So now I have to think through the full-stack pipeline: where do we get the data? Where does it live in the database?

What's the analysis layer? Kick it out to an output; where's that output stored? And then how does the website ingest that output and make it consumable? And for that, it's usually some form of charts and graphs and a table. And usually it's interactive stuff, so they can sort and filter and hover over points and access the information. And again, as best as possible, I'm always trying to develop that in the way that they're going to use it.

So, for example, I was sitting down today with our director of player health, and he was like, you know, I'd love to have this information daily so that I can relay it to the new coaching staff, and I want to say these things. Okay, great. I have all that information, I have all those models, but come over to the whiteboard and draw for me the path that you want to take, going from sitting at your desk

and reading the information from a webpage to how you want to communicate it. And as soon as he started drawing it out, it's like, okay, I know exactly what to do now. That's perfect. Otherwise I would have built something that in my head I thought would be useful, but maybe not useful to him. And then he uses like part of it or maybe because he's super motivated, he's going to use it.

And he's also going to use like 10 other things to get the other stuff he wants, but he's a nice guy and he doesn't want to tell me that it doesn't have all the things that he needs. And so then, four weeks later, I walk into his office and I'm like, what are you doing? And it's like, oh, I go here and I get this information from this webpage, but then I go to these three other webpages. And it's like, whoa, whoa, whoa, why didn't you just tell me that?

I could make this all into one thing; you don't have to do that. And so that's a really important piece: knowing how the data is going to be utilized, making sure that it's exactly in the order that the decision maker requires. Yeah. Awesome points. Yeah. Thanks for that, Patrick.

And I think it's also very valuable to a lot of listeners, because we're talking about a professional sports team here, but it is definitely transferable to basically, I think, any company where you're working with different people who are using the models but are not themselves producing the models. That's almost every company out there. And also, from my experience doing consulting in a lot of different fields, I can definitely vouch for the things you've touched on here.

So yeah, thanks. That's definitely, I think, very valuable. Let's turn back a bit more to the technical stuff, because I see time is running, and I definitely want to touch a bit more on the sports side of things and how Bayesian stats are applied in the field. Obviously, a very important part of your work is, I'm guessing, drafting players, player selection processes. So yeah, how might Bayesian methods be applied here to improve draft strategies and the player selection processes?

Yeah, well, again, like I think I said earlier, everybody's going to miss. It's impossible to have a good hit rate and always be picking players who are going to reach high-level success. And a lot of that is just because, you know, performance and talent are extremely right-tailed. You have a whole bunch of players that never make it. You have a small group that are good enough to make it.

You have an even smaller group that are good enough to make it and really good enough to play all the time. And then you have a few Hall of Famers sprinkled in, right? So it's really right-tailed, and it is very hard to do this stuff. So, you know, understanding and modeling your uncertainty, that's really important. And information from the domain experts too: scouts see things on film that we can't see in numbers, and vice versa.

One of the values that we have is we can process way more players than any one human can actually watch. So we have the ability to build models that can identify players and hopefully get them over to the domain experts, who then have to watch the film and write the reports, and say, hey, did you know this guy was really good at these things? This is his potential ceiling, and we think this would be valuable for our team, right?

Building models like that helps us identify talent and gives us a range of plausible outcomes. One, it helps us get information to the people who have to watch the film and make the decisions. Two, it helps us have discussions about the appropriate time to acquire people. If you're sitting there, obviously, in the major league draft, the Major League Baseball draft, it would be the same thing: everybody knows who the first-round picks are, and the second-round picks.

It's after that that things become pretty sparse. And if you can identify players that have unique abilities later in the draft, that opens up a lot of opportunities to select players that might be able to contribute successfully to your team. And so that's really where those models help us. The other area that they help us in is, I always talk with our analysts about, what is the benchmark that you're trying to beat? You can't just build a model.

I mean, I remember one of our analysts, she had a model, and she said, I built a model and I think it's really good. And I said, cool, how well does it do against the benchmark? She's like, well, what do you mean? And I was like, well, how well does it do against, let's say, just using scout grades, or just using public perception? How well does it do historically against that? She's like, no, no, no, I don't care about that.

This model just uses their stats. And, you know, it's like, no, no, but you have to care about that, because if it's not better than those things, then why would we use it, right? You have to be able to beat that benchmark. One of the areas where we can really beat a benchmark is when we combine the domain experts' information with the actual observed data information. And a Bayesian model allows us to do that, right?

It allows us to take the domain expert, who's maybe scoring the player a certain way, writing information about the player, and take that information, mix it with the numbers, and get a model that is, I guess, man and machine, right? And those models beat our benchmark much better than either one of these alone, right? If we just use numbers, never watched any film, never knew anything about the player, or if we just use domain expert information.

When we combine those things, we tend to do a much better job. And so that's where Bayesian analysis really helps us. And also, that's where you start to get interesting discussions about the floor and the ceiling of a player. Because now, once you run their posterior distribution and the domain expert's information is in there, you're saying, yeah, this guy, he's awesome at tackling and he'll be a great tackler, blah, blah, blah, and these are his numbers.

And the numeric model says, yeah, I think this guy's a pretty good tackler. And the domain expert is saying, no, no, no, I watched him: he doesn't play against great competition, and his technique is really bad. It's not going to translate against these bigger players. Well, that's not information that our stats would have. But when we combine those two bits of information, all of a sudden, our maybe overly bullish belief in this player gets brought down a bit.
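
One loose way to sketch that blending, with all numbers invented: treat the stats model and the scout grade as two Gaussian sources of information and combine them by precision weighting, so the bullish stats-only estimate gets pulled down.

```python
stats_mean, stats_sd = 0.70, 0.08  # numbers-only model: "pretty good tackler"
scout_mean, scout_sd = 0.45, 0.10  # scout: technique won't translate

w_stats, w_scout = 1 / stats_sd**2, 1 / scout_sd**2
blended_mean = (w_stats * stats_mean + w_scout * scout_mean) / (w_stats + w_scout)
blended_sd = (w_stats + w_scout) ** -0.5
print(round(blended_mean, 3), round(blended_sd, 3))  # ~0.60, tighter than either
```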

And utilizing the information like that is interesting and it also makes it unique to the people that are in that room, the domain experts that you have in that room and things like that. How you weight those things is really important. For our own analytics staff, we'll do things like we'll build our own separate models and have our own meetings and we'll build our own analysis.

So we'll have independent models all against each other, and maybe we'll have them weighted, or we'll use, you know, like a triangle prior, and build them together, mix them together, and get posterior simulations. And we try and do those things in a way that allows us to understand all the plausible outcomes that might be relevant for this individual. That's fascinating. Yeah. And I really love that about the field: the fact that you have to blend a lot of different information,

like the domain knowledge from the scouts, the benchmark from the markets, the models that you have in-house, and also the scientific knowledge of all the scientists that the team has inside it. That makes it all much more complicated, right? I'm guessing sometimes, as the modeler, you would probably be like, oh my God, it'd be so much easier if we could just run some very big neural network and be done.

But at the same time, I think that's what makes the thrill of the field, at least for me: no, this stuff is really hard. There is a lot of randomness; there are a lot of things we don't really understand either. And you have to blend all of these elements together to try and make the best decisions you can, even though you know you're not making the optimal decisions, as you were saying. And I think it's a fascinating field to study important decision-making under uncertainty. Yeah, for sure.

I think that's the thing that's most interesting about it to me. That stuff is fascinating. Decision-making under uncertainty is really challenging, and I think that's what makes this the coolest stuff to work on. Yeah, yeah, no, definitely. Actually, maybe a last question on the technical side. So we've talked about the beginning of the career of a player, right? Like the draft.

We've talked about kind of the whole lifetime of the player, which is performance projection over the whole career. Now I'm wondering about the day-to-day stuff. What can Bayesian models tell us here, or how can they help us in predicting the impact of training loads on athletes' well-being and performance? I think it's kind of a frontier in almost all the sports, but I'm curious what the state of the art here is, especially in US football.

Yeah, it really is, I think, one of the final frontiers, I guess, in sport. Team sport is just challenging, because you perform well, or you win or you lose, due to a whole bunch of issues that sometimes have nothing to do with you. For example, we could train you and you could be very fit and strong, and if on the last play of the game the quarterback throws the ball to a patch of grass and you lose, it had nothing to do with you being fit and strong.

You know, contrast that with individual sport athletes. If you're a 400-meter runner, a cyclist, a swimmer, a marathoner, physiologically, if we build you up, we have a much more direct line between how you develop and how it directly relates to your performance. There's not a lot of other information there. No one's trying to tackle you on the bike or in the pool or something like that. So that makes team sport much more difficult.

Baseball is probably the closest, because even though it is a team sport, it really is this sort of zero-sum duel between a pitcher and a batter. One guy wins and one guy loses, and the events are very discrete. The states of the game have been played out: you know, runners on first and second with two outs, bottom of the third, blah, blah, blah. So it's maybe a little bit more clear in baseball. I think in the other team sports, the kind of invasion sports, what makes this challenging is identification.

I always try to take it back to identifying the discrete events that we're trying to measure against. For example, I can give you a pretty clear example from basketball. I was talking with a friend at an NBA team, and he was like, yeah, you know, our coaches and our scouts feel like our players don't close out three-pointers fast enough. And I was like, well, is that a tactical problem or is it a physical problem?

And he's like, well, how would we look at that? And I was like, you have the player tracking data. You know every time your team's on defense, which is easy to know, and you know every three-pointer that's been shot against your defense. So take that frame out of the player tracking data, and maybe the frames a second to a second and a half before it, for every one of those three-pointers.

You have an idea of the relationship between your player and the player taking the three-point shot. You have an idea of the relationship between your player and the other players on his team, so you know it from a tactical standpoint. You know what type of formation or defense you're trying to run. So first things first: are the players in the right position to close out that three-pointer? Maybe, you know what?

Our guys consistently mess up the defensive shape, and when they get in there, they give too much ground to the guy shooting the three-pointer. The other is the physical standpoint: no, they're in good position, but when they go to close it out over that second and a half, they're not fast enough to get there. Okay, great. Now roll it back to what you can measure in the gym.

Is there some measure, let's say on a force plate, of the impulse or the force under the force-time curve that the player outputs, that can tell us something about their ability to apply force into the ground and move rapidly to close out that three-pointer? And maybe if you look at several years' worth of data, you'd find the top players on your team all do this thing really well, and some of the worst players at closing out the three do this thing poorly.

And so now you have something to say, like, hey, what if we develop this quality in the off-season? Would our players be able to close out three-pointers more effectively, more efficiently? So I think from that standpoint, for linking the development piece to team sport, to invasion sport, you have to really think about the discrete events of the game and how you can tease those out of, let's say, the player tracking data.
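(As a sketch of the closeout analysis Patrick walks through, here is what extracting the pre-shot window from tracking data might look like. The tidy table layout, the column names, the 25 Hz frame rate, and the helper function are all illustrative assumptions; real tracking feeds differ by vendor.)

```python
import numpy as np
import pandas as pd

# Hypothetical tidy tracking table: one row per player per frame, with
# court coordinates x and y, a frame counter, and a player_id column.
FPS = 25
LOOKBACK_S = 1.5  # the one to one-and-a-half seconds before the shot

def closeout_stats(frames: pd.DataFrame, shooter_id, defender_id, shot_frame):
    """Defender-to-shooter gap at the start of the window and at release,
    plus the defender's average closing speed over the window."""
    start = shot_frame - int(LOOKBACK_S * FPS)
    window = frames[frames["frame"].between(start, shot_frame)]
    shooter = window.loc[window["player_id"] == shooter_id, ["frame", "x", "y"]]
    defender = window.loc[window["player_id"] == defender_id, ["frame", "x", "y"]]
    merged = shooter.merge(defender, on="frame", suffixes=("_sh", "_def"))
    merged = merged.sort_values("frame")
    gap = np.hypot(merged["x_sh"] - merged["x_def"],
                   merged["y_sh"] - merged["y_def"])
    gap_start, gap_at_release = float(gap.iloc[0]), float(gap.iloc[-1])
    closing_speed = (gap_start - gap_at_release) / LOOKBACK_S  # feet per second
    return gap_start, gap_at_release, closing_speed
```

Run over every three-point attempt against your defense, a large starting gap flags the tactical, positioning problem, while a small starting gap combined with low closing speed flags the physical one, which is the piece you would then try to relate to force-plate measures like impulse.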

And it's super hard in something like football, because players all do really different things. The linebacker does something totally different than the offensive lineman. So you have to really get down to the domain of each of those positions and ask, gosh, what are the discrete events that define what this position does, and then how do we measure success in those?

And then if we can measure success, how do we identify the archetype of players who are good at those things? And if we can do that, maybe then we can start to talk about: is this something that you can develop in a player, or is it something that you have to identify in a player? That's sort of it, in my head. I mean, I don't know, I could be wrong. I think everybody's trying to figure this out, but I could be wrong.

But in my head, that's at least the process I try to think through when I think about these things. Yeah. Yeah. It makes a ton of sense. And there are so many open areas of research in all of that stuff. That's just fascinating. I'm already thinking it'd be amazing to have a huge Bayesian model with all of those topics that we've talked about.

Basically, it could be a big Bayesian model where you have a bunch of likelihoods. And yeah, that'd be super fun. I'm guessing we're still a bit far from that, but maybe not too far. Hopefully in a few years. Yeah, no doubt. I mean, that's definitely doable, but you need really good data and you need really good structure in your model. Yeah, that's the other part: getting good data. You know, player tracking data is fine.

I mean, it has errors. People who think it's a panacea, it's like, have you really worked with it? You're sampling at 10 hertz for humans that move really, really fast. Acceleration is the derivative of speed, and at 10 hertz, for people who are moving really fast, that data gets noisy pretty quickly. I think as the technology keeps improving, things get better.
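(To see why differentiating 10 Hz speed data is noisy, here is a small simulation; the sprint profile and the 0.2 m/s measurement-noise level are assumptions for illustration.)

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 10.0                      # GPS sampling rate in hertz
t = np.arange(0, 4, 1 / fs)    # a four-second sprint

# True speed profile: an athlete accelerating toward ~9 m/s.
true_speed = 9 * (1 - np.exp(-t / 1.2))
# Measured speed with modest noise (the 0.2 m/s level is an assumption).
noisy_speed = true_speed + rng.normal(0, 0.2, t.size)

# Finite-difference acceleration: differentiating amplifies the noise,
# so 0.2 m/s of speed noise becomes roughly 1-2 m/s^2 of acceleration
# noise at 10 Hz.
accel_raw = np.gradient(noisy_speed, 1 / fs)
accel_true = np.gradient(true_speed, 1 / fs)

def rmse(a, b):
    return np.sqrt(np.mean((a - b) ** 2))

print(f"RMS error, raw acceleration:      {rmse(accel_raw, accel_true):.2f} m/s^2")

# A simple moving-average filter trades a little lag for far less noise.
kernel = np.ones(5) / 5
accel_smooth = np.convolve(accel_raw, kernel, mode="same")
print(f"RMS error, smoothed acceleration: {rmse(accel_smooth, accel_true):.2f} m/s^2")
```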

And you get better data, and maybe that helps you answer some of these questions a little more specifically. Yeah. And then we'll be able to have our huge Bayesian model with a lot of different likelihoods that feed into each other. And then we don't even need to play the game. They just let the computers play the game and it's over. We're done. Yeah, no. You still have to play the game, because you still have randomness.

I mean, because otherwise the model is kind of like a quantum state, right? The model can see the probabilities of things happening, but then you have to open the box and see what actually happens. So you can have the best model; in the end, you still have to play the game to see what's going to happen, because it's not deterministic. Yeah, thankfully. Yeah, that's right. But I mean, I always love doing these big models.

And that's definitely doable. I've done that for election forecasting, for instance, where you have several likelihoods, one for polls and one for elections. So yeah, that's definitely doable in the Bayesian framework, because, I mean, why not? It's all part of the same big model, in a directed acyclic graph if you want. But yeah, I'm curious to see that done in sports.
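(A minimal PyMC sketch of the "several likelihoods, one model" idea Alex describes, with made-up data and a single shared latent parameter; the variable names and numbers are invented for illustration.)

```python
import numpy as np
import pymc as pm

# Toy data standing in for two information sources about one latent
# quantity, e.g. a team's underlying strength: continuous tracking-style
# measurements and sparse binary game outcomes.
measurements = np.array([0.8, 1.1, 0.9, 1.3, 1.0])
wins = np.array([1, 0, 1, 1])

with pm.Model() as joint_model:
    # A single latent parameter shared by both likelihoods; this is
    # what ties the submodels into one directed acyclic graph.
    strength = pm.Normal("strength", mu=0.0, sigma=1.0)

    # Likelihood 1: continuous measurements centered on the latent strength.
    pm.Normal("measurements", mu=strength, sigma=0.3, observed=measurements)

    # Likelihood 2: binary game outcomes driven by the same latent strength.
    pm.Bernoulli("games", p=pm.math.sigmoid(strength), observed=wins)

    idata = pm.sample(1_000, tune=1_000, chains=2, random_seed=1)

print(idata.posterior["strength"].mean().item())
```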

Maybe we'll get back together for another episode, Patrick, where we talk about how we did that. That'd be cool. Yeah, there you go. Actually, to close us out here, I wanted to ask you about something you've started talking about already: some emerging trends in sports analytics that you believe will significantly impact how teams manage training, performance, and drafting in the near future. And also whether there are any sports you see as more promising than others.

Well, yeah, trends. We talked a lot about that stuff, and I think better data and better technology, all of those things will help us. I think it's also about getting the decision-makers comfortable with the utility of some of this stuff. You know, baseball has always been a game of numbers.

And I think in the early to mid 2000s, 2004, five, six, seven, releasing data to the public, being really the first sport to get player tracking data, things like that,

I think that opened up a lot of opportunities for people to do really interesting work in the public space, which then got teams interested, and then there was more of a shift in the people in the front office, where maybe historically it was ex-players who played out until they retired and then became scouts and managers and things like that. I think that happening in baseball was a really good thing for that sport.

And I think, slowly, that probably needs to happen for the other sports, because the more these things are open and out in the public, the more the decision-makers become comfortable with them and can say, I can see how I would use this, I can see what this might help me with. So never underestimate the work that you do in the public space, because there's always an opportunity to help things evolve. Crowdsourcing, I guess.

Yeah, I mean, preaching to the choir here. For me, I wish a lot more of these data were open sourced. I mean, there is also an extremely interesting trend right now toward open sourcing more and more parts of large language models. I think that's going to be extremely interesting to watch develop, because at the same time it's very hard: these kinds of models are just so huge, and you need a lot of computing power to make them run.

So I don't know how open source can help with that, but I do know how open source can help with the development, sustainability, trustworthiness, and openness of all that stuff. So that's going to be super interesting. And I'm also going to be very interested in how the different sports evolve, now that, basically, the nerds are listened to much more than before.

You know, probably baseball is going to be at the forefront of that, because they just have a head start of years compared to the other sports. So it's going to be interesting to see how things play out here when it comes to data.

Because at the same time, I'm not sure it makes a lot of sense for all the clubs to have their own data collection infrastructure if in the end they all just have the same data. To gather data, you are limited, I'm guessing, much more by the technology than by the ideas of a coach or manager or scientist saying, I want that data. So I think in the end, data collection is something that can be pretty much collective, but how you use the data is more the

proprietary stuff. It's going to be interesting to see that play out. Yeah, no doubt. Great. Well, Patrick, I've taken a lot of your time already. I need to let you go because you need to drink some coffee. I definitely need to, because that was very intense. But man, so interesting. Before letting you go, I have the last two questions, of course, as usual.

You told me before we started the show that when the season starts again for you in US football, your days are going to be extremely busy, basically working from 5 a.m. to 10 p.m. or something like that. How is that possible? When do you sleep? We do have some long days. It depends on the day of the week and when the full practice days are. Usually I'd get in around 4:45 or 5, have a bit of a workout, and then kind of start the day around 6:30 or 7.

And it's really long. I mean, there's a ton of meetings. It's a very tactical sport, if you've ever watched it. The players are nonstop in and out of meetings and walkthroughs and full practices and then more meetings. It's all a big tactical pattern-recognition type of thing. And meanwhile we're working on projects and data, getting models set up, identifying things in the data for the staff, and things like that.

It just becomes this really long day. We might go home at eight or nine, maybe 9:30 sometimes, maybe 10, but there are people who'll stay even later than that, just going through film and watching it. They are very long days. Usually those types of days are about three days a week, and on the other days I might be in at five and get out at five or six. So still 12-hour days, but it's a long week for sure. This is brutal.

Yeah. But is it like that during the whole season, or mainly at the start of the season? No, that's the season. That's 18 weeks, 17 games this season, with a bye in there. Damn, impressive. You have to be sharp with your sleep as well during those weeks, I guess. You do, yes. You try to catch up on the weekends. Yeah, damn. Awesome. Well, Patrick, I think it's time to call it a show. Thank you so much, that was amazing.

Of course, I'm going to ask you the last two questions I ask every guest at the end of the show. You knew that was coming, right? Yes. So, what's the first one? You know the first one. The first one is: if you had unlimited time and resources, what problem would you solve? Yeah, unlimited time and resources. I'll take one outside of sport, but one I witnessed in sport. When I first started, I used to do all of the GPS stuff live on the field.

Now someone else does it, but coding it, cutting it up and so on during practice. And Friday practices, at the time, were the day for our Make-A-Wish child. So they'd have kids from Make-A-Wish whose wish was to see a practice and meet their favorite NFL players. And these were usually kids who were small and terminally ill.

I think that's probably the thing I would solve, because you stand there and watch that, and you work with all these guys who are healthy and young, and then you see this little kid who will never have a chance to be healthy and young, but who is just so happy to meet these guys. I think that's a super unfair thing for those little kids. So if I could solve anything, it'd be that: kids and cancer and things like that. I think it's just a horrible thing.

And then your second question is always: if I could have dinner with anyone, dead or alive, who would it be? There are so many good ones, but I think I would pick a previous guest that you've had, three times if I'm correct, which is Andrew Gelman. I think he's fascinatingly interesting, and I think dinner would be pretty amazing. Yeah. Both good choices, amazing answers. Thanks, Patrick. I can tell you're a faithful listener, because you knew the questions.

You're basically taking my job, I can see that. No, that's great. So Andrew, if you're listening: if you're ever in New York, Patrick will try to make that work. That'd be fun for sure. Yeah, Andrew is always fantastic to talk to, so that's definitely a great choice. Awesome. Well, that's it, Patrick. Thank you so much for being on the show. I really had a blast and learned a lot about US football, because I think that's the sport I know the least about.

So thank you so much for taking the time. We'll put links to your website in the show notes for those who want to dig deeper; there are a bunch of resources over there. Thank you again, Patrick, for taking the time and being on the show. Thank you. This has been another episode of Learning Bayesian Statistics. Be sure to rate, review, and follow the show on your favorite podcatcher, and

visit learnbayesstats.com for more resources about today's topics, as well as access to more episodes to help you reach a true Bayesian state of mind. That's learnbayesstats.com. Our theme music is Good Bayesian by Baba Brinkman, feat. MC Lars and Mega Ran. Check out his awesome work at bababrinkman.com. I'm your host, Alex Andorra. You can follow me on Twitter at alex_andorra, like the country. You can support the show and unlock exclusive benefits by visiting Patreon

.com slash LearnBayesStats. Thank you so much for listening and for your support. You're truly a good Bayesian. Change your predictions after taking information in, and if you're thinking I'll be less than amazing, let's adjust those expectations. Let me show you how to be a good Bayesian. Change calculations after taking fresh data in. Those predictions that your brain is making, let's get them on a solid foundation.
