
From Textbooks to AI-Driven Education with Jessica Tenuta of Packback

Sep 25, 2023 · 59 min · Season 7, Ep. 8

Episode description


Jessica Tenuta is the Chief Product Officer and Cofounder of Packback, where she works with engineering, design, and support teams to design a product experience that fuels student curiosity and builds more confident writers. Outside of her time at Packback Jessica is the cofounder and creative director of Play Together, a Chicago-based non-profit art-and-music label that produces limited edition collaborations between a visual artist and a musical artist, with proceeds benefiting a selected Chicago arts education program. Jess has been featured in Forbes, the Chicago Sun Times, Fast Company, the Techweek 100, TEDx, and Chicago Inno.

Packback is an award-winning learning platform for improving writing, critical thinking, and motivation skills. Unlike generative AI models, which power tools like ChatGPT and can be used to generate complete passages of text on behalf of a user, Packback uses AI to coach students to build their critical thinking and writing skills.

Recommended Resources
Drive: The Surprising Truth About What Motivates Us by Daniel Pink

Get ready to explore the future of education! Join Edtech Insiders for a virtual conference featuring 30+ of the top voices shaping the future of AI + Education. A full day of keynote speakers, panel discussions, and networking!

Register now here:
AI+EDU Virtual Conference

Transcript

Alexander Sarlin

Welcome to Edtech Insiders, where we speak with founders, operators, investors, and thought leaders in the education technology industry and report on cutting-edge news in this fast-evolving field from around the globe. From AI to XR to K-12 to L&D, you'll find everything you need here on Edtech Insiders. And if you like the podcast, please give us a rating and a review so others can find it more easily.

Jessica Tenuta is the Chief Product Officer and co-founder of Packback, where she works with engineering, design, and support teams to design a product experience that fuels student curiosity and builds more confident writers. Outside of her time at Packback, Jessica is the co-founder and creative director of Play Together, a Chicago-based nonprofit art and music label that produces limited edition collaborations between a visual artist and a musical artist, with proceeds benefiting a selected Chicago arts education program. Jess has been featured in Forbes, the Chicago Sun-Times, Fast Company, the Techweek 100, TEDx, and Chicago Inno.

Packback is an award-winning learning platform for improving writing, critical thinking, and motivation skills. Unlike generative AI models, which power tools like ChatGPT and can be used to generate complete passages of text on behalf of the user, Packback uses AI to coach students to build their own critical thinking and writing skills. Jessica Tenuta, welcome to Edtech Insiders.

Jessica Tenuta

Hey, Alex, I'm so excited to be here.

Alexander Sarlin

I'm really excited to have you. You're the Chief Product Officer and co-founder of Packback, and you've been using AI even before this giant AI boom that we're all going through right now. Tell us a little bit about what Packback is and what brought you into edtech.

Jessica Tenuta

Yeah, so Packback is a platform that supports students with writing and discussion, backed by AI feedback that they receive in real time, in the moment. Our whole focus is empowering students with not only the new AI technology that has come out, but what's been available to us over the last several years, to help put feedback in front of students when they need it. Because we believe that by the time a student is submitting, they should hopefully already be successful; the technology exists to do that. So being able to lead with feedback and help them submit their best possible work is not only exciting for the students, but helpful for the faculty, because they end up spending much more of their time focused on the student's content and ideas and engaging with the student as a human, as opposed to going through with a red pen and correcting.

Alexander Sarlin

I love that phrase, "lead with feedback." It leads right into that formative feedback idea: do it right, give people what they need to make their best work. I really love that. So give me a little bit of your background. You have worked in art and music and creative work of all kinds, and writing. Give us a little bit of your story and how you got to the Packback story.

Jessica Tenuta

Yeah, well, we founded Packback in college; we had four co-founders. So we've been at the edtech business game for roughly the last 12 years. And what we started out doing as a company is very different from what we do today. We actually started out, like the story of many edtech companies when they first start out, looking at how do we make textbooks more affordable.

Out of that experience we added, very organically at first, a platform for student discussion around the textbooks. So if a student was using their econ textbook, they could join a discussion with all the other students using that econ textbook around the country. We launched that initially really just with the goal of creating a space for students to chat, and we started to organically observe some very interesting things happening in there. This was much before the days that we provided AI-based feedback or any of the structures or scaffolds that we provide today. We just watched what was happening in these communities, and we started to notice that some students were posting these really fascinating, open-ended questions. One that always sticks out to me was a student talking about how much you can understand about the biome of the Antarctic area based on looking at what's captured in the ice. And he shared these amazing resources. What was exciting for us to observe was that he was not incentivized in any way, not by a grade, not by money, not by any external, extrinsic motivator, to spend the time and energy to post this. And we just looked at what happened on that thread: all the students that responded got so engaged, shared amazing resources back in response, and really delved into a deep conversation. And we started to ask ourselves, can we replicate this in a more system-wide way? Can we take what's happening here and look at the traits of this question? Can we inspire other students to post similar questions?

That led us to our first baby venture into algorithmic coaching. It was not machine learning based at that point; it was all hard-coded Python algorithms to start off with. And we used that as a way to say, what are the defining factors of an open-ended question? Can we start to coach students in the platform, and also recognize questions that are not open-ended, to send students feedback to help them edit and republish their posts, to open them up and make them better for discussion with their peers? And that just started to cascade. From there, we started to look at what leads to better content, what leads to more motivation in the community, what leads students to be excited to post better work and gets their faculty involved. And we shifted from this very peer-based discussion platform to a curriculum tool that faculty began to use as a way to implement inquiry-based discussion in their classes. Packback did not invent the concept of inquiry-based discussion or the concept of communities of inquiry. There's a wonderful research framework called the Community of Inquiry model by researcher Dr. Randy Garrison, which we've heavily based a lot of our later development on from that initial point. But what we did do was build a technology platform that made it actually sustainable and manageable and tenable to host this kind of discussion in any size class. I actually spoke to Dr. Garrison about two years ago, shared what we were doing with him, and delved into his research to make sure that I was interpreting everything correctly. He basically shared that when he was initially doing this research, he knew that what he was proposing with the Community of Inquiry model would be challenging to do in a large class, and hoped that in the future technology would emerge that would make it more sustainable. The reason it's challenging is, if you have 100 students in a class and they're each asking a few questions a week over the course of a semester, you end up with thousands of questions very quickly. And if a faculty member has to read and review and moderate and edit all of those posts, the most obvious answer is just to not do it at all, right? It's not because they don't want to; it's because it's often too much time and energy and effort to manage that kind of discussion. So that's the first foray we had into using AI as a means to an end of driving towards educational best practices that have actually existed and been known and been researched for many decades. That's sort of our approach: to use AI as a tool to help lead to an end that we want, and not as the end in and of itself.

Out of that initial product, we've then taken that same approach and applied it to long-form writing with our most recent product, called Packback Deep Dives. This is designed around helping students write essays and case studies and lab reports, long-form style writing, receiving very detailed feedback on the conventions and mechanics and structure and flow, and even the research quality of the sources they're citing, in real time, just in time before they submit their work, mapped to the instructor's rubric. So they know before they hit submit how well they're going to do on at least the conventional components of their assignment, so that the only variable is how are my content and my ideas going to be interpreted by my instructor. And then they get to have this really meaningful interaction with the faculty member focused around what they've written, and not how they've written it, which is really exciting. That new platform launched about six months ago, so it's very new to the world, and it's been used by about 1,600 faculty and about 100,000 students so far, but our overall platform has been used by about a million and a half students to date.
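
As a rough illustration of the kind of hard-coded heuristic Jessica describes from Packback's early days, a rule-based question coach might look something like the sketch below. The lead words, thresholds, and feedback messages are hypothetical examples, not Packback's actual algorithm.

```python
# A minimal, hypothetical sketch of a rule-based coach that flags questions
# which look closed-ended and suggests a revision. Illustrative only.
import re

OPEN_ENDED_LEADS = ("how", "why", "what if", "to what extent", "in what ways")
CLOSED_ENDED_LEADS = ("is", "are", "was", "were", "do", "does", "did", "can", "should")

def coach_question(question: str) -> list[str]:
    """Return coaching feedback for a student-posted discussion question."""
    feedback = []
    text = question.strip().lower()

    if not text.endswith("?"):
        feedback.append("Phrase your post as a question ending in '?'.")

    first_word = re.split(r"\s+", text, maxsplit=1)[0] if text else ""
    if first_word in CLOSED_ENDED_LEADS:
        feedback.append(
            "This reads as a yes/no question. Try reframing it with "
            "'How...', 'Why...', or 'To what extent...' to invite discussion."
        )
    elif not text.startswith(OPEN_ENDED_LEADS):
        feedback.append("Consider an open-ended lead like 'How' or 'Why'.")

    if len(text.split()) < 8:
        feedback.append("Add context, such as a source or example, so peers can respond in depth.")

    return feedback or ["Looks open-ended. Nice work; publish it!"]

print(coach_question("Does ice hold a record of past climates?"))
```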

Alexander Sarlin

That's incredibly exciting. There were so many different educational ideas in that; I want to unpack some of them, especially for people who may not know them. So the Community of Inquiry model, the Garrison model, if I remember my instructional design correctly, that's the one that's like student to student, student to teacher, student to content, right? These three interlocking circles, do I have that right?

Jessica Tenuta

It is. And it talks about teaching presence, and the presence of the student in the content. So when we delve into teaching presence, it's how much of a role can the student have in playing a moderating role, a teaching role, to their peers? That becomes really empowering, to have that control and that autonomy in the class. And that's a hard thing to do at scale, oftentimes.

Alexander Sarlin

Yeah, and using that peer-to-peer framework and giving students the autonomy to really dive in and create these incredible conversations. You mentioned intrinsic motivation as part of that as well, so unpack that a little bit. Tell us how self-determination and motivation come into the Packback story.

Jessica Tenuta

Yeah. So when we observed what was happening in these discussion communities, and saw that students were in some cases choosing to go far above and beyond what we would have expected, we looked at what was causing them to want to do that. And there's a really wonderful model that we were introduced to by some of the faculty members using the Packback platform at the time, called self-determination theory. This is a framework that tells us what the factors are, the environmental criteria, that lead to someone being intrinsically motivated as opposed to extrinsically motivated. In that framework, someone who is extrinsically motivated might be motivated by money or by a grade or by praise, but intrinsic motivation speaks to: is this coming from an internal desire to want to take this action? And I think that's sort of the holy grail in education: we hope that students will be showing up, doing their coursework, and being engaged and excited about their learning experience out of the desire to do a great job and genuine curiosity about what they're learning. But that's oftentimes challenging, because every student may have a different reason for being there, a different internal purpose.

The self-determination theory model helps to give us a guide for how to more systematically support motivation in students, and there are three components of the model. The first is relatedness: how much do I feel a connection between what I'm currently doing and my personal purpose or my personal goals? And there's so much to delve into around that, of how do you build relatedness or purpose for students. The second piece is autonomy, and that relates back to what we were just talking about. If we can give students the opportunity to have choice and control over their learning experience, it changes their whole relationship to what they're learning. The example I always like to use is, people run for fun. But when you're told to run a couple of miles in gym class, it's far less exciting. So even for the same activity, if you choose to do it yourself, you can love it; if you're told or required to do that activity, it changes your relationship to it immediately. So creating opportunities for students to choose their own journey in their education makes a huge difference in how they feel about the activities. And the final piece is competency, sometimes referred to as mastery. That piece is really powerful, and it's where I think the role of AI is most exciting in higher education and in K-12. Which is: do you, as the individual, feel a sense of capability or mastery or competency over the action that you're taking right now? And more than just "do you feel that," are you getting feedback that helps to confirm it along the way? Those reinforcing moments, especially if you're doing something new for the first time, to get indications that you are progressing on the right path and doing the right thing, are highly motivating. The example that's oftentimes used is video games: video games are super motivating and super engaging, and part of the reason is you're getting all this immediate, super quick feedback on how you're performing. You make the basket in the video game, or you hit the target, and you know immediately, I did that thing, and I succeeded at it. And I know gamification is such a buzzword, and has been a buzzword in education, but it's less about making it a game and more about creating those very short feedback loops that reinforce the actions being taken, which can be highly motivating.

So we've tried to take those three principles and incorporate them throughout our product design philosophy at Packback. As I mentioned, on our platform, instead of instructors asking questions, students are asking the discussion questions. As the students are receiving feedback, they're being given a choice; they're not mandated, it's not a gate to clear before they can submit. They're being given all this feedback, and it's ultimately their choice as to how they interpret and use it. So it's front-loading all the information, all the feedback, to them, and still leaving it up to their individual choice as to how they act on it. And that creates, I think, a very powerful sense of control and agency over the learning experience for the students.

And then for the faculty members, there's a piece of this whole puzzle that doesn't get talked about as much, which is faculty motivation and faculty engagement in their role. It can be very demotivating and very hard for a faculty member to stand in front of a class of 500 students and not get a lot of feedback. Faculty members also have the same need for relatedness, competency, and autonomy in their experience, right? So how can we create experiences where they're seeing the students get really excited about their learning? That's what we find really powerful about the conversations we have with the faculty that use Packback: when they get to see what their students are curious about and what questions they're asking, it's super motivating back to the faculty member to see, they were listening to me, they heard me, they're excited, I now know what they're excited about, and I can go deeper on that. So that ability to take these long-standing practices, and I mean, the self-determination theory model was created in the mid-1980s, this is not a new model, this is a best practice that has existed for a long time, our ability to look back to well-researched academic best practices and then build tools that make those a little bit easier, a little bit more accessible, a little bit more scalable, has been our approach. And hopefully the technology starts to take a backseat to the experience people are having, around just making the best practices they would have wanted to create possible and easy.

Alexander Sarlin

What excites me so much about hearing you talk about this is that you're taking research, academic research out of social psychology and out of education, and literally letting it inform the product development at Packback, and combining that with what you're organically seeing your users do over time. You mentioned it was a very organic evolution from textbooks to feedback to this inquiry-based model, and along the way you incorporated all of these really powerful theories. One thing that jumped out at me, and I'd love to double-click on it with you, is when you say that by the time the student submits their work, the parts of the rubric that are sort of the baseline, and I'm sure that's things like organized thoughts and grammar and spelling and making sure it's cohesive, the things that a computer can tell you, a smart computer, an AI, are covered. Then it allows the faculty to not spend time with that red pen, not spend time thinking about whether this sentence led to that one, but to think about the actual ideas the student is putting out there. That is really a thrilling vision for what AI could do to complement teachers. Tell us more about how you got to that model; that's really interesting.

Jessica Tenuta

Yeah, I think maybe the most exciting and interesting thing to share is that this was, again, driven by organic feedback from our faculty saying, I'm seeing really cool things happening in the discussion; can we do the same thing for long-form writing? When we heard that feedback, we delved into many, many hours of user research with our faculty to learn more about their experience trying to teach writing, or trying to accomplish writing across the curriculum in classes that might not be writing-centered. In that process, we spoke to history teachers, biology instructors, and even in some cases computer science instructors who were teaching courses around professionalism in computing, teaching business skills to computer science majors. I was so surprised to see the continuity of themes across all these interviews and all these disciplines. These were both high school and college instructors, and interestingly, I'll focus on the college instructors for a moment: they spoke, really across the board, to this experience of having a very wide range of writing abilities show up in their class. Depending on where a student came from and what their previous experience with writing was, they'd have someone who is extremely comfortable and confident writing in the same class as someone who is sort of at a remedial stage for grammar and mechanics, and you're trying to design assignments that speak to both of those students, and feedback rubrics that speak to both of those students.

It's an incredible challenge. The problem that many of the faculty we spoke to cited was, when a student submits something that has many grammatical or mechanical errors, in a history paper, for example, they have to make a choice: do I spend time correcting all these errors? And it's not just their time, it's also the student's mental energy to receive all that feedback. If you put a bunch of grammar corrections on a history paper, you're watering down the feedback around content and ideas related to that specific assignment. So you're left with this really terrible choice: do I just overlook it, and not use this as an opportunity to help students with their foundational writing skills, and focus only on content and ideas? Or do I overwhelm them with many, many pieces of feedback at a time when they've already submitted the work and can't immediately act on it?

So what I see as the biggest, most exciting potential for AI is that there is no reason feedback needs to be a two-week or three-week feedback loop anymore. It can be something that happens immediately for students, and is provided to them before they submit. That is obviously something Packback does, but it's not exclusive to our platform. This is a paradigm that could be applied across education: pulling feedback up earlier, letting students receive as much feedback as possible on the baseline of their work, and bringing every student up to a baseline that can then be evaluated equally by their faculty. And you're not having to make one rubric for one set of students and another rubric for another set of students. I'm going to lift a quote from a dear friend of ours, Dr. Bryan Arnold; he actually wrote this in a white paper we recently released together. He said, if the test used to be "bike five miles," with AI the test can now be "bike 25 miles," because with these tools we can expect students to go so much farther and faster, and achieve so much more in the same amount of time, because we're now scaffolded by these amazing supportive technologies.

It's been a really interesting set of learnings, and it's surprising to hear how much this is on faculty members' minds: how do I really speak to and support my students' writing? Because they know how important it is. And if they're not a writing instructor, they're wondering whether this is the right venue through which to spend their time providing writing feedback, or whether to just focus on what the student is trying to say and help them advance their ideas. So our hope is that with AI, they don't have to make that choice.

Alexander Sarlin

I'm sure this is resonating so much with listeners who have ever had to grade a history paper and look at mechanical errors or grammar errors; it really is demoralizing for both sides. The instructor is faced with this bizarre choice where you can either drown your student in red ink, and by the time they get the real response to the actual thesis of the paper they've already given up. And the students are having their ideas weighed down by the complexity of the language, or by the difficulty they have with the language, and especially with English language learners, we didn't even mention that, right? So you have this enormous, I love your phrase, "raise the baseline." And it feels like at this moment, this real inflection-point moment, with AI of multiple types entering our lives in a daily way, this feels to me like the right metaphor for what AI should be doing. It's not about cheating, it's not about danger, it's not about singularity. It's about making it so that all of us can have a shared baseline: nobody has to misspell, nobody has to worry about grammar, nobody has to worry about their ideas not making sense. They can get that for free, immediately, and then they can think about what they actually want to say, how they want to persuade, how deep they want to go, and what questions they want to ask. That should be very exciting to all of us as educators, but we just have to make it happen. What do you think about this new breed of AI, this sort of leap to generative AI? I'm sure you're thinking a lot about it, because you're already so deep in the AI space. How will this continue to play out? How can we make sure we actually realize that future of going from five to 25 miles, instead of having faculty force students to write by hand again, which is what we're seeing in some places?

Jessica Tenuta

Yeah, and I think that reaction is totally understandable, too. Even though I work in AI, I feel a range of feelings about what's happening right now: mostly just excitement, and a kind of gratitude that we're alive at this moment to witness what's happening and how fast things are compounding. But there's also a degree of bittersweetness in it, just being honest about my experience. I studied art and design in college, and there's something about it that both reflects and deepens the truest parts of humanity, and also makes us question what our role in the world is, when you can go to an image generation model like Midjourney and prompt it to illustrate in your style, and in eight seconds it does something better than I could have done in eight hours, right? I think we're all feeling that question of, what is our role? And not to get too philosophical, but it's an interesting question: if we weren't afraid of being displaced, if our livelihoods didn't all depend on these things, would we still have the same kind of nervous reaction to what's happening right now? Would it be pure excitement?

I think all of us who are working either in AI or in fields adjacent to AI have a responsibility now to do whatever we can to nudge the trajectory of where we're headed towards one that is human-centered and focuses on enriching human lives, and not in the direction of automating humans out of their roles, though I don't believe that's a realistic, full possibility. I think we all have a responsibility to do what we can to nudge this trajectory towards one that is focused on enriching human lives and freeing up capacity to help us go farther, as opposed to reaching the same end target with fewer resources involved, if that makes sense. And your question, just to bring me back, because I went off on a tangent here, was really how do we think about guiding faculty and students to make the most of these tools? Is that right, Alex?

Alexander Sarlin

To make sure that people use it in a way that asks more of their students and asks them to go further, because they have these scaffolds, as you say.

Jessica Tenuta

Yeah. I don't think we've ever witnessed the speed of compounding change that we're seeing right now, and I think there will be an urgent need to adapt and change our curriculum in the face of what's happening. Generally, I think there will need to be a greater shift away from executional skills and towards critical thinking. We've been moving in this direction for generations, right: towards critical thinking, towards critical inquiry, towards strong communication and rhetorical skills, and really towards generalizable problem-solving skills, of how do I speak to these tools in a way that allows me to achieve the results that I want, which is not guaranteed. And I think the one risk, it's both the benefit and the downside of the open-ended nature of chatting with large language models, is that even the people who have created these tools, like OpenAI, don't yet know the entire extent of what they're capable of, right? They have teams of people prompting and trying to figure out what the emergent capabilities of these models are as they reach a certain degree of scale. We've never dealt with technology like this; all of our prior programming has been fairly deterministic in nature. We're now almost speaking to a new entity, not saying it's generalized intelligence by any means, but we are speaking to an entity whose capabilities we don't fully understand, and we need to be able to provide educators and institutions with resources to help them move quickly in response to this change. Because I don't think we're talking on a five-year scale of massive change here in our society; I think we're talking on a one-year scale of massive change. So I really feel for educators right now. We just came off the heels of the pandemic response, where people had been forced to move online very quickly, and now we're saying, how do we move and respond very quickly to changing our curriculum in light of AI?

It is a tough question, but the sooner we begin to incorporate new methods of focusing on critical-thinking-oriented skills for students, the better. I don't think it's about adding prompt engineering modules to a history class. I think it's about deepening core communication skills, baseline writing skills, and critical thinking skills embedded into those courses, in a world where we might be able to assume that factual knowledge is fairly commoditized. What changes and deepens its value is human critical thinking skills, knowing what you want to achieve, goal-setting skills, and the ability to differentiate between a good result and a poor result, right? So it's much more of a focus on those higher-order thinking skills. They're not being introduced now as a new problem; we've been working towards this for many, many generations. But it's only more important today.

Alexander Sarlin

There are a few things I want to call out in there that I think are so interesting. When you talk about going from a fact-based model to critical thinking, I would even argue that it's more than that. Google took us past a fact-based model; you don't have to ask students to look up facts. Generative AI, or this kind of AI, does sort of baseline critical thinking, right? It can actually evaluate whether your argument makes sense. What that should do is push us to, as you say, really high-order critical thinking, push us up Bloom's Taxonomy. We should be creating new ideas and expecting our students to create new ideas, because if you're expecting them just to synthesize or analyze something, ChatGPT can do three quarters of it for them. So it pushes us up.

But I actually want to throw an even wackier metaphor at you, and I'm so curious how you respond to this, hearing you talk about art and that bittersweetness you feel about Midjourney potentially having the opportunity to replace more hands-on art skills. It makes me think of photography, and I've never thought about it this way in terms of the AI moment. When photography first came out, I think a lot of people saw it as a cheat code, right? They saw it as, wait, I've been learning to do this incredibly realistic shading, you think about Rembrandt capturing light, and suddenly you just point this thing and anybody can do it, and you have this perfect picture. Like, what the heck, why should I be learning to do art and realism? But then you look at it further, and what did people do in response to photography? They created Cubism and Futurism and Abstract Expressionism, all of these incredible movements, and you start to see this reaction to that kind of rigidity. And I feel like that's going to happen here, too. As everybody can use ChatGPT to make a story or make an essay, what we're really going to value is this deep, authentic human voice. I'd like you to unpack what you mean when you say human-centered AI, because it feels so important to think about that right now.

Jessica Tenuta

First of all, I totally agree. And when I speak to that bittersweetness, it's almost poetic to see that what these models have been trained on, and what they're producing, is essentially a reflection of everything that has been created in the past and an extrapolation of that. So when we get a beautiful result back out of Midjourney or out of ChatGPT, it's a reflection of the beautiful things that humanity has created up to this point. And I completely agree with you. In the generations when photography was reserved for a very selective few, the very rare people that had a camera or could afford one, and the very few people that could even afford to go to a photographer to get their portrait taken, right? Did photography lose its value when suddenly everybody had an iPhone? Absolutely not. It just opened up a method of creation that previously wasn't available to the general public.

I think what's interesting is, I was mentioning how even the folks who are the most experienced with these models are still going through a period of experimentation. There's a degree of experimentation that can be done with a camera: how do you set your aperture, how do you use flash, what lighting do you want to shoot in. And yet there's a set of rules you can learn about how a camera behaves. Similarly, there's a set of rules people are discovering today about how all these models behave, and there are some really fascinating folks documenting their journey of experimenting and working with these tools to try to get the best result out. You can see already that the people on the vanguard of testing the extent of these models are not purely getting something back out of them; they're taking it to a degree where they are creating something original that has their voice in it. It's just another medium. What ends up being valuable is: what are you trying to say? What are you trying to do? The models are just a means to an end, right? They're just a way to achieve that. So if you have a powerful message you're trying to get across through your art, whether it's executed with paint or with photography or with AI, the end goal is what you're trying to say with it.

And I think one of the challenges of these models, too, is that in their open-endedness they are not entirely forthcoming with what they're capable of. That's something we should be aware of for students today as well. I think these tools have the potential to be a tool for accessibility and for advancing equity, and they also run the risk of deepening equity divides and making tools less accessible. What I mean by that is, in product design there is a concept known as affordances, right? When you use a stove, you see that the stove has knobs on the front, and the knobs are round, and they turn, and they have little hash-mark indicators that tell you what setting you're turning the stove to. That tells you: this is a stove, I see a handle, I should turn the handle, and the handle will do something. That concept of an affordance is baked into all of how we design software currently, right? A button tells you something can be clicked; a dropdown tells you something can be dropped down and there are options available here. But when you're chatting with a chat interface, and I know these models are by no means only accessible through chat, but I think that's been the most common way they're being talked about in the media and the way folks are interacting with them today, the sheer open-endedness of these tools and the open, blank box of a chat window means it's really up to the individual to experiment to find the limitations of these tools. The people who are comfortable experimenting and comfortable spending a lot of time researching what these models are great at are going to have amazing gains in what they're capable of. And then I've also heard examples of people going to ChatGPT and saying, I asked it for some restaurant recommendations, and it gave me some restaurant recommendations; I don't know why everybody is so excited about this, it's not that much different than Google. Right? It just depends on what you're putting into it as to what you get back out of it.

So I imagine AI literacy is going to be an area where we see a large trend emerge in education over the coming months and years: how do we bake the concepts and approaches of being aware of how AI works, what its shortcomings are, what its capabilities are, and how you continue to learn and test and grow your capabilities with these models, into the experience of being a student today? I imagine that higher ed is going to look very different for the students who are in K-12 today than for the students who are in higher ed right now. Even on a one-to-two-year scale, I think we'll start to see dramatic changes in the type of assessments and assignments being done, hopefully driven largely by this concept of helping students become literate and comfortable working with these tools.

Alexander Sarlin

So when you talk about the affordances and the user experience and the user interface of this, I think we have not even begun to fight that battle. You think about some of the tools that are out there around code now: the no-code tools, or even before the no-code tools, you can lift up the hood and start coding, you can lift up the hood and start writing CSS, but you also have these drag-and-drop interfaces and these color pickers and all of these things. I actually think the same thing is going to happen with AI. You can pop the hood and start chatting with it and telling it exactly what you want, or you can close the hood and there will be, nobody's made them yet, but these beautiful sliders and knobs and different styles to pick from. You see the beginning of this in the AI apps on phones, where it says, oh, pick from these four styles, do you want realistic or artistic or whatever. But that could get very sophisticated. And I'm trying to think about it in the context of the writing tools you're making; you can imagine a slider to be more empirical versus more theoretical. There are such interesting things you could do here. And then of course you can just pop the hood and start asking it for what you want, and there are people who are going to go really deep with that.

You also mentioned equity, and I'd love to understand a little bit more, because I think the relationship between these generative AI tools and equity is really important. I mean, we just saw the country of Italy ban ChatGPT. What does that mean for the country of Italy? Does it mean they're going to be held back, that they are crippling their future workforce? Or does it mean they're protecting their people from the chaos that's going to ensue? How do we balance that idea of equity and, I don't know how else to put it, chaos, unpredictability?

Jessica Tenuta

I feel like I am not qualified to comment on the policy-level decisions there. I think it's a really challenging choice, and I don't know if there's a right answer today; that's why it's just fascinating to witness all the different reactions happening right now. But I'll take it into education specifically. I almost see an intersection of three things, right? First, what is the capability of these tools, and what can those capabilities allow a student to do? Second, are they accessible to the student, both in cost and in technical accessibility? Can everyone actually use these tools if they wanted to? And then the third ring of that Venn diagram, which I am not seeing a ton of conversation about right now, is user experience wrappers around these tools. It overlaps a little with accessibility and capability, but it's that final leg of the stool, if you will. These tools are incredibly powerful, and right now they're not terribly expensive, but we have no idea how these companies will continue to change and grow over time. What we need to do is think about scaffolding AI literacy.

So, just to make it very concrete, think about how this relates to the Packback platform, as an example, and I'm sure there are other examples we can connect it to. We're not dropping students into a blank document with a blank chatbot. We're thinking about giving them initial guidance that's scaffolded by what the instructor wants to see; they're started off with some guidance around how to get going. As they write, they're seeing feedback come in that's contextualized to what they're writing. And as they continue to write, additional content and context can be passed into the models to give better feedback as they go, right? So I think the real opportunity to advance equity is building tools that help bring every student to a baseline starting point, to discover the capabilities of these tools and guide them through. Again, back to what we're aiming towards, which is hopefully a student who is self-motivated at the end, who has a strong baseline of critical thinking and writing skills, and who knows they are performing at a level that is going to allow them to succeed towards their goals and their purpose in their education. How are we using these tools to build specific applications that lead to those end goals?

So I think it's taking it out of the realm of the esoteric and into the realm of the concrete, which maybe sounds uninspiring, because what's kind of magical about these models right now is how open they are. But that's also their limiting factor, right? We need to bring it into a world in which the faculty and the student know how the system is going to behave, can rely on a degree of consistency in those systems, and have all the connections to the other tools they need to actually have this solve a problem and not just be a cool tool. And I think that's where we're going to see a lot of new companies come up, and a lot of companies that already exist continue to expand: in building those very specific industry-level and problem-level applications that pull in GPT. I think right now the chatter is all about GPT as a cool new tool; I think it's almost going to become a ubiquitous resource, an infrastructure-level tool baked into most products at some level. And I think the real differentiator between the companies that will win out of this and those that might be a flash in the pan is: what is the problem you're solving? The funny thing is, it all cycles back to, do you have a really strong user need for what you're trying to do, and are you building the right solution for that user need, as opposed to trying to jury-rig a specific tool that's just cool at that time, right? So I know that you share similar thoughts, but that's where my head's at at the moment.
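
As a rough sketch of what a "user experience wrapper" like the one Jessica describes might assemble behind the scenes, the example below packages instructor rubric criteria plus the student's draft into a structured feedback request rather than a blank chat box. The rubric items, prompt wording, and the stubbed model call are hypothetical assumptions, not Packback's implementation or any specific vendor's API.

```python
# Hypothetical sketch of a UX wrapper around a language model: the application,
# not the student, assembles the instructor's rubric and the student's draft
# into a structured feedback prompt. Everything here is illustrative only.

RUBRIC = [
    "Thesis is stated clearly in the first paragraph",
    "Claims are supported by cited, credible sources",
    "Writing is free of grammatical and mechanical errors",
]

def build_feedback_prompt(draft: str, rubric: list[str]) -> str:
    """Combine rubric criteria and the current draft into one feedback request."""
    criteria = "\n".join(f"- {item}" for item in rubric)
    return (
        "You are a writing coach. Give specific, encouraging feedback on the "
        "draft below against each rubric item. Do not rewrite the draft.\n\n"
        f"Rubric:\n{criteria}\n\nStudent draft:\n{draft}"
    )

def call_model(prompt: str) -> str:
    # Placeholder for whatever model endpoint the application actually uses.
    return f"[model feedback for a {len(prompt)}-character prompt]"

draft_text = "Ice cores is a record of past climate..."
print(call_model(build_feedback_prompt(draft_text, RUBRIC)))
```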

Alexander Sarlin

I am right there with you. And I want to resurface your really great analogy from before, about what happens when you put cameras in everybody's hands. Think about when the first iPhones and camera phones came out. Yes, it was pretty cool that you had a camera. At first people didn't know what to do with it; they take pictures of their lunch, they take pictures of the sunset, then they realize they can turn it around and take a selfie. Oh, interesting, that's something we weren't able to do, or weren't usually doing, in life. Then Hipstamatic and Instagram come out and say, you know what, we know you want to take selfies and take pictures to make your life look incredibly glamorous and interesting and nostalgic and whimsical and elegant, and we're going to give you these preset filters that you don't have to know anything about cameras or lighting to use. And they literally changed the world with a product like that. All they're really doing is taking an existing tool that people already had, the camera, and adding a layer that lets them organize the information and get to what they actually want to do, which is put words on their pictures and make them black and white or certain colors.

I think there's a really interesting metaphor there, and I agree that ChatGPT and these sorts of models will just be inside the tools, accessible from everything; they almost already are. You can get to them from Notion, you can get to them from Google Docs. It's pretty crazy, like, in months. So then the question is: what is the interface? How do you get people caught up? And how do you make sure that digital divide doesn't divide? Let me ask you this. You work with faculty and students. One thing I've learned over time in technology is to never underestimate young people and how creative they can be with new technology, especially when they're social and can share ideas rapidly with each other. I think that while we all talk about this here, there are some Reddit channels somewhere, or some places on the internet, where teenagers are coming up with things we can't even begin to imagine. I think it's already happening and we will just see it come out soon. But I'm curious how you see that idea. When you talk about AI literacy, are we helping people by giving them these preset filters or preset notions, or are we holding them back? How can we make sure that in 10 years some people don't have incredibly sophisticated AI skills and can do anything with almost no work and time, while others are still slogging along and doing it the old-fashioned way?

Jessica Tenuta

Yeah, as simple as this answer might seem, what I've observed in direct interaction with these models is that what allows them to perform well is being able to formulate the goal that you have in very concrete language, very similar, honestly, to how you would interact with a human. And I think that's what's different about these tools: it's almost like learning how to talk to a human about whom you can't make any assumptions about what they know. It's like having to formulate a request to a very smart and capable two-year-old, right? You can't assume they've seen everything in the world yet; you can't assume they're going to be able to read between the lines of your prompt. People who have strong general communication skills and a strong vision for what they want, and not to bring it back to that model again, but self-determination theory, right, if they know what they want, they have a strong sense of purpose, and they know what their goals are, I think that's going to be the key differentiator between the people who get the most out of these models and those who don't, mixed with how effective and competent they are at communicating with the models directly.

I do think there is a potential risk. We saw it with the WYSIWYG or GUI-based editors that popped up, like Squarespace. At the same time that general web development, the ability to have a website, became easier and more accessible, the technical complexity of the whole ecosystem of code behind the scenes, where people were working in computer science, became more complex. So I think we will continue to have that divide grow: those who are directly working on the guts of these models will be different from the people consuming them. That said, the people consuming them are now going to be capable of doing, with one person or a handful of people, what a company might previously have needed 20 or 100 people to achieve. So I have really high hopes and excitement for what this new medium is going to allow us to create in terms of new genres of content, and in terms of new genres of companies that were not previously thought about, in the same way that the motion picture camera created the genre of the movie, right? We couldn't have had that until we had that new technology. I think we're just at the very cusp of seeing where this takes human creativity. If it follows every past experience we've had with new emergent technologies, it is only going to create an explosion of new capabilities and not a contraction. So I've been seeing opportunities to challenge any mindset around scarcity that we might have: are these opportunities fixed, and going to be reduced by these tools, or does it actually free up capacity to create new concepts and new ideas?

Alexander Sarlin

Two thoughts come to mind in response to what you're saying. I think you're really onto something incredibly important. One is, when you talk about intrinsic motivation and how to build it in students: one advantage that I think some of these tools have, especially if we can make them accessible and relatively easy to use, is that even a young student can make incredible results very quickly with these AI tools. I've seen 10-year-olds go into Midjourney and make things that would just blow your mind. And as soon as they see that they can do that, there's this sort of flywheel that starts spinning. There's a big difference between having that learning curve of learning to stipple, learning to draw lines and shapes and negative space, and, you know, it takes a while to really get good at classic art, and sometimes it's painful. But if you can immediately see amazing results, there's a motivating factor that comes into it that I think we're going to see really start to happen. I'd love to hear your thoughts on that.

But the other thought I have, that I really want to ask you about as well, is that AI has this incredible capacity to create new genres, as you're saying. It really does. One of my favorite quotes of all time about media is McLuhan basically saying that whenever a new medium comes out, the content of it begins as the old medium. So when TV came out, they put radio shows on TV, or concerts on TV. And I think we're seeing that happen right now: people are going into AI and saying, oh, I'll make an essay, I'll make a really cool picture, I'll make a video. But if you really step past that, I mean, what can I do? I can make people, I can make characters, I can make characters that are intelligent, that can interact with each other and do emergent things. There is so much there that's very meta that I don't think we've even begun to explore.

Jessica Tenuta

It reminds me of looking at a 10-year-old discovering that with a bit of prompting they can create this beautiful image; they can start from the end goal they were aiming towards, and then, if they have interest, they can back up into, okay, how might I have achieved that through different media? If they care to learn the traditional media, they still can, but they've started now from, I've been able to realize the picture I had in my head, and I can back into that. It reminds me of the concept of bottom-up learning versus top-down learning, right? Are we taking a bunch of tiny modules, learning about cell division and then learning about biomes in biology, stacking all of these individual tiny concepts up to some bigger picture that might take two or three years of subsequent classes to reach with full understanding? Or are we starting with the top-down model of, here's a concrete application in your real life; now let's break that down into the phenomena and concepts that lead us to a strong understanding of it? So are we starting with the end goal, or are we starting with the constituent pieces and hoping that people eventually build enough understanding to scaffold up to the end goal?

I feel like we're at this moment of discovery right now. It's almost like being able to see, and I was there, but I was young at the time, the real emergence and widespread usage of the internet, right? There was so much initial discovery and creation that happened. And interestingly, a lot of the really interesting and exciting stuff came from people in their bedrooms as preteens or teenagers, experimenting with what was possible on a MySpace page and getting a basic website set up. Those people became the developers and computer scientists of today. So I have a lot of hope and excitement for seeing how much play is coming back into the experience for folks, and excitement and discovery. There's a sense of newness happening right now, and I think the ability to start by seeing the end goal so quickly, and then figuring out how to back into it, is very much a motivating experience. I'd love to hear your thoughts, from an education perspective, on starting with the end versus starting with the small pieces and trying to get to the end eventually.

Alexander Sarlin

I mean, that's problem-based learning, that's project-based learning. Some of the most sophisticated educators in the world teach top down. There's that famous Antoine de Saint-Exupéry quote about showing people the vast ocean: show people where they can go, and they'll find the means to get there. I think top-down is such a more motivating model for almost everybody. But that's just my take. What do you think?

Jessica Tenuta

I totally agree. And I think it's just such a short line between the first experiment of playing with these things and seeing that they can do something wonderful, to then digging in more deeply and continuing to experiment. Driving from the student, or the learner, or the individual going through the process of experimentation to get to their end goal is always going to be a more autonomous and integrated experience, right, where you see yourself as a part of the process of learning.

Alexander Sarlin

The second idea was about these new genres, and I really love that concept you're describing of how AI will create new forms of media that we're not even envisioning yet. And that might be what the future really looks like: we leave video behind and everything's interactive, or, it's hard even to imagine.

Jessica Tenuta

There was a really fascinating study released recently; I don't know if you saw the white paper for it. Researchers set up a bunch of little instances of GPT as individual characters and gave each one a memory, so they remembered what was happening. They put them into a little simulation and asked them to act like a little village of people and plan a party together. It was fascinating to see the humanness of their interactions, and the fact that once these bots had memory attached, they were able to perform fairly sophisticated levels of planning. It's similar to what's become the talk of the town right now around AutoGPT: the ability to use these kinds of sub-bots and subroutines to automate and loop through multiple tasks. My point in bringing that up is just the speed with which things are changing.

If I could only hope for one thing happening in the education world today, it's that we're all watching what's happening, staying curious about it, getting into these tools, seeing what they're capable of, and trying them. I think we have no idea what it's going to look like at the back end. But I feel very confident that staying aware of what's happening and experimenting with it is not only good for staying abreast of emerging technology and thinking about how it relates to the classroom; it's also just a wonderful opportunity to be curious as a human being about where technology is headed, to see what it inspires in you, what you're excited about, and what new ideas it might bring up, whether in or out of the classroom, for both educators and students.
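
The setup Jessica describes here (LLM instances given a running memory, dropped into a shared simulation, and asked to coordinate) can be sketched in a few lines of Python. The code below is an illustrative toy under those assumptions, not the study's actual implementation or anything Packback ships; call_llm is a hypothetical placeholder standing in for whatever chat-completion API you would actually use.

```python
from dataclasses import dataclass, field


def call_llm(prompt: str) -> str:
    """Hypothetical placeholder for a real chat-completion call."""
    return f"(model reply to: {prompt[:40]}...)"


@dataclass
class Agent:
    name: str
    memory: list[str] = field(default_factory=list)

    def observe(self, event: str) -> None:
        # Store each observation so later prompts can condition on it.
        self.memory.append(event)

    def act(self, goal: str) -> str:
        # Fold the most recent memories into the prompt, then record the
        # reply as a new memory of its own.
        recent = "\n".join(self.memory[-5:])
        prompt = (
            f"You are {self.name}. Goal: {goal}\n"
            f"Recent memories:\n{recent}\n"
            "What do you do next?"
        )
        action = call_llm(prompt)
        self.memory.append(f"I decided: {action}")
        return action


# A toy two-agent version of the "plan a party" simulation: each turn, one
# agent acts and the other records what it heard, so plans can compound.
alice, bob = Agent("Alice"), Agent("Bob")
for _ in range(3):
    bob.observe(f"Alice said: {alice.act('Plan a neighborhood party with Bob')}")
    alice.observe(f"Bob said: {bob.act('Plan a neighborhood party with Alice')}")
```

The detail doing the work is the memory list: because each agent folds recent observations back into its next prompt, its behavior can build on what the other agent said, which is what makes the multi-step planning Jessica mentions possible.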

Alexander Sarlin

That feels like an amazing note to end on, even though I think we have so much more to talk about. Let me ask one very quick question before we go to our final two. You've been working with AI at Packback for a number of years, and now this new wave is coming. I know I asked this a little bit at the beginning, but let's get a little futurist. Two years from now, and we know this is going to happen fast, what does your product sense say Packback will evolve into? What will you continue to add? How will it make these deep dives richer and richer? I'm curious how you're seeing this right now.

Jessica Tenuta

Yeah, well, I've said it several times throughout the conversation: I don't think the need for writing skills is going anywhere, and I don't think the need for critical thinking skills is going anywhere. What I do think is happening is that some new companies have come into the space really focused on using GPT, sort of, to cheat, right?

And I think what's going to happen in education generally is that the notion of what counts as cheating and what counts as plagiarism is going to continue to evolve. I anticipate we're going to see a shift away from a focus on plagiarism detection holistically, not just AI detection but plagiarism detection in general, toward a world in which we're thinking about how you prevent academic dishonesty, and also what academic dishonesty means in a world in which none of what we create is ever entirely created by ourselves. To be honest, even today, are we entirely creating anything ourselves? Or are we standing on the shoulders of the people who came before us? Honestly, humans don't learn terribly differently than large language models do: we take in a wide degree of input and stimulus, we take that into our knowledge base and our memory, we use it as our training data, and we extrapolate from there to make new ideas. So how much of what we produce as new ideas is really new, and how much is influenced by the entirety of the community and the broader world we're a part of?

All that to say, the way we're thinking about using our technology is to keep deepening access to support for the writing and critical thinking skills that students need in the classroom, and that I genuinely believe they want outside of the classroom too. I think there's a desire right now to use these tools, and use them well, but also to use them in a way that's above board.

So the way we're thinking about this is not to go in the direction, obviously, of the tools that use GPT just to write on behalf of students. Our space, and the area where we want to keep developing, is: how do we use AI to scaffold, support, and coach students in a way that accelerates their writing development and their ideas, but never takes the place of them writing in their own words, or of faculty writing their own feedback? How do we become this sort of invisible tool that provides support and scaffolding? We're currently experimenting with, and are on the verge of releasing, some new tools that bring in more of what's possible with GPT in terms of broader structural feedback, especially given the context we're able to pass in. If we know the whole context of a student's essay, and the whole context of all their previous writing, we can do really powerful things with GPT around helping students develop their unique voice: knowing the whole trajectory of their writing experience on the platform, we can help them push their voice further, push for greater consistency, push for greater uniqueness.

So the direction we're looking at right now is baking a lot of these interactive writing features into our platform. We're not doing anything wildly different; we're dramatically leveling up what the platform is capable of. How do we take this platform for writing and push it to its next logical step, the next degree, in which students are actually learning how to use these GPT-based or LLM-based tools as part of their writing process, and learning how to properly attribute them as well?

That's been a big topic. I don't know if you saw this, but several large, very well-respected academic journals are now allowing folks to submit papers partially written by GPT, so long as it's properly attributed in the bibliography, and their guidelines for publication keep the submitter as the author of all that work. You are responsible for anything you submit that might have come out of GPT, meaning you can't just trust it at face value; you're responsible for editing and fact-checking everything that's part of it before you submit it to the journal. The one I'm referencing here is one of the leading astrophysics journals putting that into its publication standards, accepting the use of GPT-written text. So at the highest level of academic respect, we have journals acknowledging and allowing the use of these tools within their systems. I think that's going to be a big factor in driving the tone of where we head from here.

So our goal at Packback is to help students learn how to use these tools within the writing experience, learn how to properly attribute them, and be able to accelerate and do more with their writing, as opposed to the tool taking the place of the writer in the driver's seat. Our hope is to always be their writing assistant, and never be the writer.
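
The product direction Jessica outlines (passing a student's prior writing in as context and constraining the model to coach rather than draft) might look roughly like the sketch below. This is a minimal illustration under those assumptions, not Packback's actual implementation; call_llm and the prompt wording are hypothetical placeholders.

```python
def call_llm(system: str, user: str) -> str:
    """Hypothetical placeholder for a real chat-completion call."""
    return "(coaching feedback would appear here)"


def coaching_feedback(prior_submissions: list[str], current_draft: str) -> str:
    # Prior writing gives the model the student's voice and trajectory;
    # the system prompt constrains it to coaching, never drafting.
    history = "\n---\n".join(prior_submissions[-3:])
    system = (
        "You are a writing coach. Give structural feedback and questions that "
        "push the student's own voice forward. Never rewrite or draft text on "
        "the student's behalf."
    )
    user = (
        f"Earlier writing by this student:\n{history}\n\n"
        f"Current draft:\n{current_draft}\n\n"
        "What feedback would help this student improve the draft themselves?"
    )
    return call_llm(system, user)


print(coaching_feedback(
    ["A first post on supply and demand...", "A short essay on price floors..."],
    "Rent control is an example of a price ceiling because...",
))
```

The design choice carrying the "assistant, never the writer" idea is the system prompt: the model is told explicitly to coach and question, never to draft, so the student's own words stay in the driver's seat.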

Alexander Sarlin

Absolutely. And I'm really excited for the future of Packback, and the future of AI-enhanced education tools generally, especially when they're led by leaders as thoughtful and informed about educational theory, psychology, and product as you are. This has been a really exciting conversation; it just blows open what's possible and how we can raise the bar for everyone. So, we end every conversation with two questions. The first is: what is a trend that you see coming around the bend into our ed tech world that you think listeners should be aware of? And I'm going to go ahead and set AI aside as an answer; it can be something about AI, but it has to be something we have not heard about before.

Jessica Tenuta

I just touched on it in my last answer, but I'll underscore it a little bit. I think the whole subset of the education market focused on plagiarism is going to go through a great degree of upheaval. We're hearing it in our conversations with our partners and our faculty: well, shoot, if students can just take a chunk of text from an AI, why would they ever really be plagiarizing from a living human, unless they simply don't know that they're not supposed to do that? And that's a whole different problem, right? Intentional plagiarism versus maybe just not knowing the proper way to attribute sources. In our experience at Packback, we've observed that too: a lot of the time when students make a misstep around plagiarism, it's really about whether they knew the proper way to attribute the citation. So I think that's going to be a big shift. And I think schools are starting to question where their investment is best spent. Is it best spent in detecting, finding, and policing academic dishonesty? Or is it best spent investing in these new applications that pull feedback forward, supporting student success through AI, and supporting AI literacy through technology? So there's a calculus being done right now: do we invest in minimizing the risk, or do we invest in trying to maximize the reward for as many students as possible?

Alexander Sarlin

Yeah, fantastic answer. I love that phrase, pulling feedback forward. It feels like that underscores the entire approach of Packback: instead of it being summative at the end, where you hand in your paper and hope and pray and cross your fingers, you're actually getting real-time feedback at the point of need. That's an incredibly exciting vision. And agreed: I actually had the privilege of being on a panel just this last week with the Chief Product Officer of Turnitin, and they had just launched their big AI plagiarism detector. But even so, they are very bullish about AI as a writing tool. They are absolutely not ringing the alarm bell saying, watch out for plagiarism. They're saying something similar to what you're saying: this is all about to change, and we want to help the world through it.

And the last question is: what is a resource that you would recommend for our listeners to go deeper into anything we talked about today? We'll do our best to have show notes for everything we referenced throughout the show, but what would you recommend specifically?

Jessica Tenuta

It's a very quick read and a lovely book: a book called Drive, which I believe is by Daniel Pink (I can double-check that). Drive basically takes the same principles of self-determination theory and relates them back to business and learning, kind of as a self-help application: how do you take this academic principle and relate it to every aspect of your life to build motivating experiences for yourself and for the people around you? It's a very quick read and super applicable to, I would imagine, most listeners in the audience, whether you work in business or in education. Very useful, and it makes things very concrete with some great examples that can be shared, depending on who you're talking to.

Alexander Sarlin

Great recommendation. So that's Drive by Daniel Pink; we'll put a link to that, or of course you can just find it on Amazon or Google or wherever you find your books or audiobooks. This has been a really inspiring conversation. I've been talking to a lot of people about AI recently, and even so, I think we've gotten into territory I had not thought about at all. So I'm feeling a buzz, and I'm sure our listeners are as well. This is such an exciting time for educators, especially if we embrace it, if we lean into it, and really take advantage of the things it's doing to enhance our collective knowledge. It's really amazing. Jessica Tenuta, it's been a blast to talk to you today. Jessica Tenuta, Chief Product Officer and co-founder of Packback. Thanks for being here.

Jessica Tenuta

Thanks so much, Alex. It was a pleasure.

Alexander Sarlin

Thanks for listening to this episode of EdTech Insiders. If you liked the podcast, remember to rate it and share it with others in the ed tech community. For those who want even more EdTech Insiders, subscribe to the free EdTech Insiders newsletter on Substack.

Transcript source: Provided by creator in RSS feed.