480: Ethical Uses of AI in Law School (w/Professor Susan Tanner)


Nov 25, 2024 | 30 min | Ep. 480

Episode description

Welcome back to the Law School Toolbox podcast! In this episode, we're speaking with Susan Tanner, Assistant Professor of Law at the University of Louisville. We explore different practical uses of AI by law students, some of which are beneficial and some that might be problematic.

In this episode we discuss:

  • Susan's professional background
  • How law students are currently using generative AI
  • Applications of AI for legal practitioners and law professors
  • Practical uses of generative AI for law students and related ethical considerations
  • How to start utilizing AI if you're new to it

Resources

Download the Transcript (https://lawschooltoolbox.com/episode-480-ethical-uses-of-ai-in-law-school-w-susan-tanner/)

If you enjoy the podcast, we'd love a nice review and/or rating on Apple Podcasts (https://itunes.apple.com/us/podcast/law-school-toolbox-podcast/id1027603976) or your favorite listening app. And feel free to reach out to us directly. You can always reach us via the contact form on the Law School Toolbox website (http://lawschooltoolbox.com/contact). If you're concerned about the bar exam, check out our sister site, the Bar Exam Toolbox (http://barexamtoolbox.com/). You can also sign up for our weekly podcast newsletter (https://lawschooltoolbox.com/get-law-school-podcast-updates/) to make sure you never miss an episode!

Thanks for listening!

Alison & Lee

Transcript

Alison Monahan

Welcome back to the Law School Toolbox podcast. Today we're excited to have law professor Susan Tanner here with us to talk about the ways law students can use AI ethically. Your Law School Toolbox host is Alison Monahan, and typically, I'm with Lee Burgess. We're here to demystify the law school and early legal career experience, so that you'll be the best law student and lawyer you can be.

Together, we're the co-creators of the Law School Toolbox, the Bar Exam Toolbox, and the career-related website CareerDicta. I also run The Girl's Guide to Law School. If you enjoy the show, please leave a review or rating on your favorite listening app. And if you have any questions, don't hesitate to reach out to us. You can always reach us via the contact form on LawSchoolToolbox.com, and we would love to hear from you.

And you can check out our Bar Exam Toolbox podcast if the bar exam is on your radar. And with that, let's get started. Welcome back to the Law School Toolbox podcast. Today we're excited to have law professor Susan Tanner here with us to talk about ways law students can use AI ethically. I'm super excited about this, so welcome, Susan.

Susan Tanner

Thank you.

Alison Monahan

Well, to start us off, can you just give listeners some information about your background and your work, just so they have some context here?

Susan Tanner

Sure. So, I'm currently an assistant professor of law at the University of Louisville Brandeis School of Law. But before I started teaching, I went to law school, then I went back and I got my PhD in Rhetoric from Carnegie Mellon. If you know anything about CMU, you know it's a very sort of techie school. And so my PhD was in Rhetoric, but I also worked with folks on my dissertation who did corpus and computational linguistics, machine learning, and natural language processing.

And so, my dissertation used all of these methods to analyze legal language. So, I have this weird background, quasi techie, very language-based, sort of sociolinguistics background, and then when ChatGPT started making headlines a few years ago, I was one of the first people to sign up for an account and start using it. And then I did find out that my background was useful for learning how to explain what was going on to non-computer experts, right?

So I'm not the machine learning expert, but I work with machine learning enough to sort of know basically how the processes go, and especially know what it's doing with the language. And so, I started doing some research in the area, I started talking with folks about it, and then it became actually sort of a bigger part of my research area after that.

Alison Monahan

Well, it sounds like your timing was amazing. If people want to learn more about you or reach out, how can they do that?

Susan Tanner

So, I'm a bit old school. I have my university website at the University of Louisville, where you can just look up Susan Tanner, or LinkedIn, slash Susan Tanner. Those are probably the two best ways to reach me.

Alison Monahan

Awesome. I will connect with you, we will link to you. Before we really jump in, I want to touch on something that you just mentioned, which I think has a lot of buzz around it, but people may not actually know what it is. What is ChatGPT and all these other similar systems? What are they? What are they doing? Why are they useful?

Susan Tanner

So, the ones that we're talking about a lot are generative AI. There are all sorts of AI - I don't want to bore everyone with all of it, but there's generative, there's predictive, and there's extractive AI. And the thing that's been getting a lot of news lately has been this generative AI. So, things like ChatGPT are working from a large language model, at least in the processes that I'm going to talk about.

And so, what they do is take a large corpus of language and extract patterns from it. And after they've extracted those patterns, they can predict and they can generate language patterns that really model the patterns they've noticed before. This is actually how all machine learning works - it looks for patterns in data. When I was working on my dissertation, I fed in a corpus of legal texts, and it would just sort of pop out some information.

It would categorize different texts in different ways so that I could understand something from the large data set that I wouldn't have been able to understand from reading each individual text. And ChatGPT does this too, but, pardon the metaphor, it does it on steroids. So, it's doing everything that I did, but better and faster, and in real time. I like to use this sort of explanation to show how ChatGPT works: it goes through a series of encoding and decoding.

In other words, that categorization, coding language based on patterns and then decoding it and sort of replicating it. But it all does this through language and not through any knowledge of how the world works. And this is one of those places where ChatGPT has also been making headlines, because it does things like hallucinate. And I always like to say that the hallucinations are a feature and not a bug. In other words, what we're asking it to do is be creative.

We're asking it to give us an answer, even if there's no answer that already exists out there. If we wanted an answer that already existed, we would just go to Google to look for that answer. And so, ChatGPT is supposed to, by design, make things up. But when you're expecting it to act like Google, you're going to be disappointed in the results, because it'll often give you inaccurate results.

Alison Monahan

Right. And I know we've all read about lawyers who've gone... I saw another one the other day in Australia, where someone just sort of went to a chatbot and said, "Oh, give me case law." And it turns out the case law is made up. It's important for people to understand that you might get results that are not actually real.

Susan Tanner

So, it's really funny. I'm starting to back off on this; for the longest time though, I said I thought of generative AI not as a research tool, but as a writing tool. And so I just never got into trouble, because I didn't expect it to do any good research for me. It is actually becoming better at the research part, partially because everyone's building in some sort of an extractive AI as well, so it actually is able to give you real results and know that they're real results.

But I do think that if we're expecting it to do research for us, we're going to be sorely disappointed.

Alison Monahan

Alright. Well, tell me a little bit about how you see law students using generative AI in law school right now. What are people doing with this?

Susan Tanner

So, people are doing all sorts of things - some of them great, some a little problematic. I've seen law students do everything up to asking ChatGPT to write an entire memo or brief for them and turning that in, right? So, not what I would suggest. But also, I've been working with my students on it for about two and a half years now, so I've seen it transform over that time. Two and a half years ago, it was really sort of small; we were working with it in little bits and pieces.

And now I work with 1Ls, and increasingly my students come to me already having used it for a year or two. And so, they're using it for everything from mundane tasks to, as far as law school preparation goes, briefing cases for them, creating study schedules, helping with outlines, helping with notes, and also just helping with drafts, right? Helping with outlines, or drafting initial versions of work that they're going to be turning in.

Alison Monahan

Nice. And have you seen either the school or people that you've just heard about actually using this in their jobs, or do you think that's still a little down the road?

Susan Tanner

So, I talk with and work with practitioners. And most people I talk to have already integrated it into their practice in some way or another.

Alison Monahan

Wow. Yeah.

Susan Tanner

So, either very formally because their law firms have some sort of a bespoke AI that they've already been using, or as smaller or solo practitioners, some folks have been just using off-the-shelf ChatGPT to help them write letters, to help them draft correspondence or memos, or even help them with briefs. Now, I will say I might be talking with a very self-selected group of people, right?

Alison Monahan

Fair.

Susan Tanner

But the people who come to talk to me just say that they've integrated it into their daily lives. Also, because I do CLEs on AI, I regularly talk with groups of people who have never even tried it, and never want to learn it. So there's that group of practitioners as well. Oh, and I was also going to say that I work with law professors, who I don't think have adopted it at the same rate as practitioners have. They tend to be a little bit more hesitant.

And I can understand why: they're not working with clients, they're not getting pushback on their billing, and so there might not be as big of a reason to adopt it so quickly. That group of legal professionals, I don't think, has been adopting it quite at the rate of legal practitioners.

Alison Monahan

Interesting. Yeah, I'm interested to hear that so many lawyers seem to have picked it up. So, what do you think are some kind of good or bad ways, just generally, to use AI or to think about AI as a law student? I mean, some of the things you mentioned, I was kind of like, "Okay, that makes sense." Other ones like, "Hmm, I don't know, should you really be using it for your legal writing memo?"

Susan Tanner

I have a friend who says that law students will go around the world to take a shortcut, right?

Alison Monahan

I like that.

Susan Tanner

Right, everyone's just sort of looking for some way to shortcut, and sometimes it's not very helpful. And so there are ways that you could be using generative AI to shortcut your education that would also shortcut you out of learning. And that's the sort of thing that I would never encourage anyone to do. That would be things like writing a memo, right? I tell my students all the time, it's not the product of the memo that's going to be important for you.

It's the process of learning how to write a memo that is going to be important for you. And so, having someone or something else draft your memo is not going to help you learn how to write a memo. I also will say, the ways students are currently using it that I think don't help with learning are old methods, right? In other words, it's not a new thing to have someone else write a memo for you and just turn it in. That's been going on for years.

Obviously, most people don't do it, but it does happen occasionally. I think another problematic thing is having it do too much of your note-taking or outlining or summarization for you, because again, when I talk to my students about outlines, I always tell them I like to think about outlining as a verb. So, in other words, one outlines their work. It doesn't really matter so much that you have a good outline.

You could very easily ask ChatGPT to write you a great outline based on your notes, but it's that process of outlining that's helpful. And so, this is one of those places where I think it can be used to shortcut learning. Now, there's all sorts of busy work that happens in academia where I think generative AI can actually help students.

For example, it takes a while to be able to put together your weekly schedule, to figure out when you have things to do, to understand what's being asked of you in a particular assignment. And so, one thing I've had my students do - I often give very lengthy assignments and have very lengthy syllabi - is just upload something that I've given them and ask questions about what that assignment is asking of them, right?

So, my students tend to miss important things in assignments I've given them, and ChatGPT doesn't seem to have missed those same important things.

Alison Monahan

What would be an example of that, like a deadline? I mean, I'm assuming they look at that.

Susan Tanner

Yes, so I was actually just talking with a student today. I have them do this capstone assignment, and hidden within it is a suggested template. One of my students couldn't find it, because it's a link and it's probably not the easiest thing in the world to find. But ChatGPT could very easily find that I have language that says, "If you'd like, you can use this attached template to start off." And so, it's really those sort of procedural things that it's great with.

I've also asked them to write their own checklists. So, sometimes I'll give them a checklist of, "Make sure you have this and this", but ChatGPT can actually make you a checklist based on an assignment, so that you can make sure before you turn it in you've accounted for everything that was asked for.

Alison Monahan

Huh, that's super interesting. That does seem like it would be useful. And do you feel like, in general (obviously you haven't talked to everyone), law professors and schools are open to people using this in those sorts of ways, if it's not like, "Write my memo for me"? Or are they still very resistant?

Susan Tanner

I talk to folks on both sides, right? So I'm on a couple of committees for technology or specific to AI. And obviously, those professors who sit on those committees with me are very open to AI. I was recently on a university-wide committee where we had to talk about what is acceptable at our university for student use of AI. And the professors who joined that committee were obviously a little bit more open to technology. So, there are a handful.

But I would say most of the people I talk to, most of the professors I talk to, are very resistant to it for I think a very good reason, which is that same thing that we were just talking about, which is, generative AI could be used to substitute for learning. It could also sort of hide when students aren't learning, right?

And so, one of the reasons that we have grades, one of the reasons that we have assignments is to check in with our students to make sure that they're learning what they need to be learning. And a good draft or a good response from generative AI can make it look like a student is learning what they need to learn, when they're really not.

Because there are deeper learning needs at stake, a lot of professors have been very hesitant. And this is even true of professors who use it in their own work: they see a difference between having students use it at an early level, when they're a 1L or a 2L, versus someone who already knows what they're doing, who can coach the generative AI and isn't really trying to learn through that process.

Alison Monahan

Right. I mean, I think it's similar to sort of canned briefs and that type of thing that's like, okay, you can get what somebody thought were the five main points of this case, but that's different from you sitting down, reading the case, absorbing through osmosis the way that legal writing is structured, and also deciding that really you think there are six key points, and your professor happens to agree or not agree with you.

If you short circuit that process, whether it's buying a canned brief or having ChatGPT do it for you, of course you're not learning that process.

Susan Tanner

Absolutely. And I also think there's this desire to have easy answers, which is not the point of law school, right?

Alison Monahan

Right. I feel like the point of law school is actually getting you to understand that if there was an easy answer, no one would be paying you this amount of money to research the question. We deal in the ambiguity. We don't deal in the cut and dried, or no one would pay for it. You would just look it up.

Susan Tanner

And that's one of those things that, unless you're prompting it in a particular way, generative AI is not great at pointing out the fact that there is ambiguity, right? And this is actually one of those places I do research into how good the responses are. And this is one of those places where I think necessarily it's always going to lag behind human reasoning.

It's not just that there's ambiguity in the law, but it's also that the law is defined by argument and this sort of human rationality, where we have to decide what an audience, a particular judge, is willing to accept, right? And there's going to be a disconnect between the language that generative AI is trained on and making these novel arguments that are potentially able to be picked up in a case or in an opinion, and understanding what arguments are sort of hearable.

And so, I think the fact that generative AI is not good at this actually should give lawyers a little bit of hope for the future, that there's always going to be this sort of human element that goes into it, understanding who your audience is, understanding how to structure an argument, so that someone will sort of buy into it. And sometimes that's embracing the ambiguity, sometimes it's talking through it.

And again, these are the sorts of things that the new versions of, for example, ChatGPT are getting better at. But I think there's always going to be a little bit of irrationality that goes into making the law, and I don't think we're training these large language models to embrace irrationality.

Alison Monahan

Interesting. Yeah, I also found - maybe, gosh, it's been probably a year ago at this point I was playing with Claude and just asking it some questions and having it give me some hypos, like I was a law student to practice on. And I think it's actually in a lot of ways really good at that, but it made this really fundamental error at one point. And I realized it doesn't have logic. It doesn't understand that there's the plaintiff and the defendant, and that matters.

But it was that moment of like, "This doesn't seem right. What's wrong with this?" I'm like, "Oh yeah, this is not a mistake any 1L would ever have made."

Susan Tanner

Right. And I think that's actually a great lesson. I'm a little bit of an optimist when it comes to this, and I love that feature. I love the fact that it can be wrong, because again, we should embrace ambiguity - embrace the fact that part of our role as advocates is to argue and think about multiple sides. I love the fact that you can't trust the answers.

And in fact, in some ways, I almost like this a little bit better than some of the hornbooks or what have you, where you know you can trust the answers, and our instinct is just to let someone else tell us what the right answer is and not work through that struggle of trying to figure it out for ourselves.

And so, having ChatGPT create hypos for you, or even do a back-and-forth Q&A, or create some multiple-choice questions, can work, as long as students know that the technology is fallible and could be wrong about things. In that way, it's much more like a study group, where the point of joining a study group in law school is not to find the one person who really knows what they're doing and understands what's going on in every class, and just learn from that person.

It's to talk through things. It's because everyone's going to get some things right and some things wrong, and just checking back in. And I think that's one of those ways that ChatGPT can actually be really helpful.

Alison Monahan

Yeah, I agree. I think developing that gut level of like, "Huh, really? That doesn't seem right", and then diving into it, is actually a great learning strategy. The problem of course is if people have no context, they just might think like, "Oh well, ChatGPT told me, so it must be right." But it may or may not be right.

Susan Tanner

And I think that's one of the problems with your first year of law school too, because you're so used to being wrong that you're willing to accept whatever anyone else, or anything else, tells you is right. And I do think it's hard, through that first year where you're just always wrong on a daily basis, to learn how to trust yourself.

That's actually maybe where the conversations at our law school about potentially limiting students' exposure to generative AI in the first year come from, because again, students don't really have enough context to understand when it's giving them good advice and when it's giving them bad advice, and everyone just wants to take any advice that they're given, right?

Alison Monahan

Yeah. I talk with a lot of 1Ls, and sometimes they come to us for tutoring because they've realized their study group is kind of like ChatGPT. They're like, "These people are really convinced they're right, but I don't know. And they don't know. And maybe you guys could actually be the ones to tell me if something is right or not right, and help me understand why." And we're like, "Yeah, we can do that. We're experts, actually."

Susan Tanner

Well, exactly right. I think it's really important to know that there are times to go to experts and there are times to figure it out. And to know the difference in the advice that you are getting right and where you're getting this from. I say the same thing about students who come to me. Sometimes I want them to come and talk to me and flesh out some ideas. And sometimes I want them to go talk to someone who doesn't know any more than they do and try and figure it out for themselves.

Alison Monahan

Right, because that's a useful skill set in life, and in legal practice. There's not always going to be a written answer, because we're developing new case law literally daily. Alright, before we wrap up, let's talk through a few scenarios. Some of these we've already touched on a little bit, but I just want to dive in and get a little more detail.

So, here's one: I ran out of time. I was trying to make an outline that I was going to study off of right before midterms, and I asked ChatGPT to just make it for me. I said, "I'm taking Torts. These are the topics we've studied. Can you make me an outline?" It seems pretty solid. Should I maybe just do this for all my classes from now on and not waste my own time making them?

Susan Tanner

So, I have a couple of responses to that. The first one is that law students often feel overwhelmed and are looking for ways to save time. As I mentioned, I work with 1Ls during the fall, and my students always tell me that they've decided it's not worth their time to read for class or to outline for exams.

Alison Monahan

We hear this.

Susan Tanner

And until you've taken your first exam, you really don't know how much of a problem it's going to be if you don't read for class and you don't do your own outlines. For those students, I think doing something is better than nothing. The same thing with my students who ask me all the time about commercial outlines. Something is better than nothing, but I think it's very far from the perfect solution, because as I mentioned, I do think of outlining as a verb.

It's in that process of outlining that you make connections, synthesize rules, and understand the big picture. And there's also this term in educational psychology, "productive struggle": just taking a little bit more time to learn and think about things is actually very useful. And so, a shortcut through the learning process is not going to help anyone really digest and understand the material.

There was some research a while back about, for example, the importance of handwriting notes, because it takes a little bit longer, right? There's research back and forth about that. Or writing your own study aids sometimes can be helpful. So, sometimes just taking a little bit longer with the material is very useful.

And so, anything that shortcuts the time that you spend with the material is probably not great, but if you have five minutes to study, I guess five minutes is better than no minutes.

Alison Monahan

Right. I can see it being useful. Personally, outlining as a big written document never really worked well for me, so I made a bunch of flowcharts. But I can see somebody going and saying, "Okay, I want to do some practice, but I just don't feel like I have a handle on what we've done." Okay, fine: get a commercial outline on negligence, have ChatGPT write down the elements, but then actually apply them. Something like that, I can see being an okay shortcut.

I'm not encouraging it, but I'd rather have someone do that than do nothing.

Susan Tanner

Absolutely. And I think that's exactly right. One of the uses I love for ChatGPT is to take something in one form and change it into a form that I want. So, that could be anything from taking the notes that you've written in class and helping you organize those, to your flowchart idea. I'm also a flowchart person, but I can't always figure out what the flowchart should look like. In other words, I know there could be a flowchart, but I can't quite make it myself.

And I've actually been experimenting with this pretty recently, about having generative AI help me design flowcharts from outlines. I've also had it do things like suggest alternate ways to organize my outline. It's almost like you're asking it for feedback, rather than you're asking it for the right answer.

Alison Monahan

Yeah. I can see something like if somebody has an outline and they want to make it shorter, if they need like a two-page cheat sheet or one of those things that sometimes professors let them bring in, that could be a good starting point. Yeah. I also find, generally speaking, that these tools are much more useful if you give them something and tell them to transform it, versus just saying like, "Give me this", which could be anything. It might be made up, all of these things.

Yeah, one of the things that I was going to ask about... So, another scenario: I need to apply for summer jobs and I hate writing cover letters. Can Gen AI help me with this? I think the answer is probably "Yes". What do you think?

Susan Tanner

Absolutely. This is actually one of my favorite uses of generative AI. I do it every day as a law professor, because I have to write a lot of letters, but also, if we sort of understand a little bit about how generative AI works, we also understand that because it's built on this huge corpus of documents and language, it understands the genre. It understands what a letter looks like better than someone who's writing their very first cover letter ever.

Alison Monahan

For sure. Yeah, it has all the knowledge about what your LinkedIn profile should look like, your cover letter should look like. All of it. It's there.

Susan Tanner

Yes. I don't know about you, but when I had to sit down and write my first set of cover letters... We applied to so many jobs as law students, right? I just remember thinking I was sending out so many different cover letters, and I never personalized them when I was a law student. I would for a couple, but I sent out this very generic cover letter, and this was against all the advice I was given. All the advice I was given was, "Never send out a generic cover letter."

But I just didn't have time to do anything else.

Alison Monahan

Right. I remember as a 1L applying, on December 1st, to probably 100 firm jobs. And those were all generic: change the address, print it out, put it in the envelope. I mean, you still had to mail it at that point. It took forever. I wasn't going to personalize each one of those. And I think I got one result back from the whole thing.

Susan Tanner

Right? I was at least fancy enough to do a mail merge at the time, but I mean, the same thing, right? But now, we can feed in a job description, have a resume, have a basic outline of what you want to cover in your cover letter, and you can do a very personalized cover letter for every single job that you want to apply to. And we actually do know that personalized cover letters are better, that the people who read cover letters tend to have not...

This is going to sound really rude, but don't have a great imagination. So, they want you to really specifically point out that you have the qualities that they've asked for in your cover letter, and so, that takes personalization.

Alison Monahan

Yeah, so I think something like that is a great option: taking that kind of menial task and actually making it better. Alright, my final scenario: I'm having trouble finding practice multiple-choice questions. They're on my exam, but my professor hasn't really given us very many. Is this something that maybe the Gen AI can help with?

Susan Tanner

So, I absolutely think as long as you're not expecting it to have a right answer, it's a great way to start, especially, as you mentioned before, if we start with something. For example, you're able to find some practice questions on the web and say, "Oh, but we covered these different topics. Can you adjust it?" Then I think it actually can do a pretty good job.

I'm in the middle of some work looking at how good of a job generative AI does at making up questions and exam questions and hypotheticals, from a professor's point of view. And it does an okay job. It especially does a great job with hypotheticals, because you don't need to know the law, right? You don't need to know the rules of law to be able to create the hypotheticals. You just need to be able to put in enough issues. And again, generative AI knows what the legal issues are, generally.

It might not be great at the analysis, but it's great at sort of creating an issue-spotting hypo for someone.

Alison Monahan

Yeah, we've experimented a little bit with it and yeah, the questions are usually better than the answers. But one of the things I do like about it is, if somebody is struggling with a topic area or a subtopic, it never gets tired. You can just keep asking questions and ask more and get more hypos and get more questions and really dig into it in a way that I don't think most professors or TAs or anyone is going to sit there for four hours with you and talk about negligence per se, for example.

Susan Tanner

Right. I'm always asking my students. They'll say, "Oh, I didn't understand what that term was." A few years ago, I would say, "What did Black's Law tell you it was?" So we might start with that, but then obviously the answer for 1Ls is, "Well, I had no idea what Black's Law meant when they said it meant this." And so now I say, "What did generative AI say it was?" In other words, you can do things like, "I don't know what proximate cause is. Can you explain it for me?

Now explain it to me like I'm a three-year-old, so that I could potentially understand what's going on here."

Alison Monahan

Yeah. No, exactly. That kind of reframing of the same concept in different ways, I think can be helpful. Alright, we're about out of time. If students want to learn more about this technology, do you have any suggestions? What should they be looking at or doing?

Susan Tanner

My main suggestion is to use it for things that don't matter. So, be really careful. I tell this to everyone I talk to - incorporate it a little bit into your daily life for things like planning your wardrobe for a week, or helping you pack for a trip, or helping you plan your meals, before you start using it for anything for school, because you don't want to be turning in something that is wrong or problematic.

And so, just getting to know what it's good at and what it's not good at on these sorts of low-stakes questions is a good way to start playing with it. And then also, of course, be really careful, because not every professor will want you to use generative AI. So, you have to make sure that you not only look at what the policies say, but also check with your professor before you use it for anything class-related, including things that you're not turning in.

So, I know that can be a problem: even using it to help you study, if it's been prohibited by that class, could potentially get a student into trouble. So, those are my two bits of advice: play with it on the sort of easy, no-stakes things, and then before you actually start using it for anything for law school, just check in with your professors to make sure it's okay.

Alison Monahan

Alright. I think that makes a lot of sense. Any final thoughts you'd like to share before we wrap up?

Susan Tanner

I don't think so, other than I think this is sort of an exciting time to be in law school, for lots of reasons. But one of those reasons is that I think that the practice of law is going to shift dramatically in the next five to 10 years. And I think for anyone who's in law school right now, it seems like a less scary change than for those who are already practicing.

Alison Monahan

I think that's a great point. I think people actually have an opportunity to get up to speed, to be the experts on this. And then when they go into practice, they actually have a really valuable skill set of helping other people who might be frightened or whatever, actually use this productively. Alright. Well, remind us again how people can find out more about you and connect.

Susan Tanner

Either through my LinkedIn or through my page at the University of Louisville website.

Alison Monahan

Awesome. We will link to both of those. Well, Susan, thank you so much for joining us.

Susan Tanner

Thank you.

Alison Monahan

My pleasure. If you enjoyed this episode of the Law School Toolbox podcast, please take a second to leave a review and rating on your favorite listening app. We would really appreciate it. And be sure to subscribe so you don't miss anything. If you have any questions or comments, please don't hesitate to reach out to Lee or Alison at lee@lawschooltoolbox.com or alison@lawschooltoolbox.com. Or you can always contact us via our website contact form at LawSchoolToolbox.com.

Thanks for listening, and we'll talk soon!
