This episode of EdTech Insiders is sponsored by Magic EdTech. Magic EdTech has helped the world's top educational publishers and edtech companies build learning products and platforms that millions of learners and teachers use every day. Chances are that you're using a learning product that they've helped design or build. Companies like Pearson, McGraw Hill, Imagine Learning, and the American Museum of Natural History have used their help to design or build some of their learning products. Now Magic wants to bring its pedagogical and engineering expertise to make your key learning products accessible, sticky, and well adopted. Check them out at magicedtech.com, that's M-A-G-I-C-E-D-T-E-C-H dot com, and when you get in touch, tell them EdTech Insiders sent you.

Welcome to EdTech Insiders, where we speak with founders, operators, investors, and thought leaders in the education technology industry and report on cutting-edge news in this fast-evolving field from around the globe. From AI to XR to K-12 to L&D, you'll find everything you need here on EdTech Insiders. And if you like the podcast, please give us a rating and a review so others can find it more easily.

Amanda Bickerstaff is the founder and CEO of AI for Education, a former high school biology teacher, and an edtech executive with over 20 years of experience in the education sector. She has a deep understanding of the challenges and opportunities that AI can offer. She's a frequent consultant, speaker, and writer on the topic of AI in education, leading workshops and professional learning across both K-12 and higher ed. Amanda is committed to helping schools and teachers maximize their potential through the ethical and equitable adoption of AI.

Amanda Bickerstaff, welcome to EdTech Insiders.
Hi, Alex, really happy to be here. And I love what you guys do, and just really appreciate you having me on.
I'm really, really excited to talk to you today; I really want to hear your story. You've been doing such interesting work. You're the founder and CEO of AI for Education, you do all of these AI trainings, you make materials, and you have this unique perspective about how AI and education could play together if certain things were true. Tell us a little bit about your journey. What got you to this point of focusing on the intersection of AI and education?
Well, you know, if I had a TED talk, it'd be called "Do It the Hardest Possible Way." I've never taken a straight path to anything. In this case, I have a background as a teacher; I taught in a public school in the Bronx right out of college. Then I got most of a PhD but didn't finish it. Sorry, Grant. And then I went on to building edtech businesses. I really had no business background, but I naturally got stuck into strategy, and I deeply understood what it meant to be a teacher, because I'd done it in a very specific way. Through that journey, I ended up in Australia. I am not Australian, clearly, but I became the CEO of an edtech company that was focused on survey data analytics for schools. And it's very hard to run a bootstrapped business right before COVID, in a new country mind you, and then be in the most locked-down place in the whole world, because Melbourne had the most lockdowns of anywhere.

So we had this really unique challenge, where we had to figure out what we were going to do to keep helping, so to speak, in this really traumatic and difficult time. We started doing research, and we did one of the biggest studies on the impact of COVID in the world. Through that, we started to build this real deep understanding of how to think about evidence, how to think about unique times, and we created a whole bunch of resources. But in the process, you know, I was really far from home, it was a really difficult experience, and I managed to get myself pretty burnt out. So I left Australia. And it was really interesting, because becoming a CEO as a female, as an educator, was kind of a peak, and then to leave it? How crazy is it to leave that opportunity? But it was right for me, for my own mental health and well-being. So I did what people do: I went around Europe and then Japan for five weeks, saw the world, and deliberately didn't work.

I came back to the US, and you know, we go through these reps, right? We're professionals; what are we good at? So what am I good at? I was like, I'm good at building other people's companies. So I did that for a bit. I went through a very long interview process for an edtech CEO role, I tried joining a startup, and I even started thinking about my own startup. But what happened was I had this moment, this interesting time in the world, where I used ChatGPT for the first time. And I realized two things, a moment of crystallization. One is that this is the truly transformative technology we've been talking about forever: personalization of learning was possible in different ways, we could take down administrative burden; it was just so clear how much opportunity there was to really make schools better for kids and teachers. But at the same moment, I realized that the adoption of this technology was going to be almost impossible given where we live and how we work in education.

So I did what any not-normal person would do: I built a website over a weekend. I named it AI for Education, because it couldn't be more on the nose. I put up a prompt library and some resources. And I had not posted on LinkedIn in nine months, Alex, nine months, and I was like, okay, I'm going to start posting stuff. And through that, it became really clear that it was starting to resonate, and it became a company. And that's where I am today.

That's amazing.
That crystallization moment that you had there, of "this is a transformative technology, yet our education system is really not going to be ready for it," especially in November and December of last year, when it first arrived. I think people are now coming around to your realization; you saw it immediately. The US Department of Ed and all these professors, all these people, are starting to say, oh, I see what's going on here; this really could change a lot of things, if we can get on the right side of it. You know, many educators are super excited about the possibilities of AI, you do see them, but a lot also have real concerns: concerns about ethics, bias, trust, hallucination. You've been talking to educators for a while now. What are the big concerns that educators have? Just lay them out for everybody here. And then how do you address them? How do you talk to people about these, some more legitimate than others, but let's say legitimate, concerns about this brand-new technology and what it can do in a classroom?
Well, first of all, we have been hearing the rhetoric of "technology is going to replace teachers," and it's been like that for 20 years, with every new technology: MOOCs, OER, one-to-one devices. So there is this deeply seated rhetoric that teachers are on the bubble and have always been on the bubble. It is fascinating to me how little we trust and how little we support teachers as professionals; we don't even treat it as a profession. Even though there are three things that unite us as people: one is that we eat, one is that we need healthcare, and the other is that we learn. So it's fascinating to me how little we actually privilege educators.

So when you think about this, I'm going to split it up a bit. We start one of our conversations by allowing educators to talk about their concerns and fears in a productive manner. If you've met a teacher, you know there's that kind of complaint session that can happen when we talk about what's going on, and that's not particularly productive. So what we do is we structure it as a myths-and-facts space, where we actually go through common myths and facts, including that ending space of "it will replace me." What that does is allow educators not to feel like we're just telling them this is the best thing in the world, or this is the thing they have to do, or these are the risks, but instead saying: hey, we're listening to you, we understand this is deeply, deeply going to change your lives as educators, let's talk about it first, and then let's figure out how to work together.

Some common fears and concerns: one is that it will be misused, that students will use it in ways in which they cheat, or have it do their work for them. Many teachers suddenly had a student whose writing changed overnight. I joke that it's like the writing suddenly has a British accent; it put on a monocle and became someone different. So that became very present. And a lot of organizations weren't ready; I spoke to a college where they didn't put anything in their academic integrity policy, and they got to finals and kids were using it, but there was no way to really talk about it, so they kind of wrung their hands for the entire semester.

So that's one. And hand in hand with that, I think, is a deeper one, which is almost less about the cheating itself, because kids who cheat, cheat, right? I think "about 50% of students cheat" is kind of the accepted statistic. It's the cognitive offload piece: are students going to be offloading the important cognitive components of what they need to build their neural networks, to build their reps so to speak, and be able to do the things necessary to be successful in the world in which we live?

The other deep concern is: is this just another technology? I just went through COVID, what is happening, why do I have to use this too? You see statistics like 2,200 edtech tools in a district, and when we did our research, we saw people using 15 tools sometimes just to do online learning. So there's a deep fear of, why is this yet another thing? I barely want to see Google Classroom anymore; I don't want to see this anymore. So there's that kind of concern, the tech fatigue.

And then the last one is, of course: what does this mean for me, and what will I do? It's fascinating to see people's reactions. There are a couple of people in a room who start to realize that this is not a step change, it's a magnitude change, a paradigmatic shift. That means the things they're teaching their kids right now possibly don't matter. And those people have a moment of literal existential dread; it's like reading Camus in the middle of a PD session. But they're very, very few and far between. So I think those are the buckets that I see.
So I'm hearing three big buckets. The first is the knee-jerk reaction around plagiarism and cheating, and I love that phrase "cognitive offload": are our students not thinking, not even making sense of the material, just throwing the teacher's questions into a generative AI tool and sending the answer back, acting like a conduit? That's big, but it has all sorts of stuff around it. Then there's this existential dread, this idea of, is this going to replace teachers? And then one I think we don't talk about that much, but it's obviously core to all of edtech: tech fatigue. We're in a crazy period; the LearnPlatform-style studies about thousands of tools per district and hundreds of tools per teacher are real. That is what your life is like as a teacher right now; it's not just a number. So yes, a whole new technology coming in, yet another "transformative," quote-unquote, technology, might be met with skepticism. But it sounds like people in your sessions have these aha moments too, where they realize, like you said, this is potentially a paradigmatic change. This might not be another Kahoot that you can use for fun, it might not be another Duolingo or another LMS or any of the many edtech tools; this is a change to everything: to their jobs, to their kids' lives, to their kids' jobs, to their learners, to the parents. So what do they do when they have that realization, when they wake up to how big this is?
So I want to do, like, a Midjourney photo essay of all the reactions I get, because I'm very big on show, not tell. I was at a conference a couple of weeks ago, and if a school leader walked by, I'd say, "Would you like some candy, or to talk about AI?" And if they came over, I'd ask, "Have you used ChatGPT?" And if they hadn't, I'd just pull out my laptop and we'd do it together. And I'd see everything from stoic, with the brain clearly going a thousand miles a minute, to literal hands up, mouth open. One woman actually took a photo of her principal in that moment. What I'd ask them is: what is the thing you do not like doing, or your teachers do not like doing? Because I started with rubrics. I hate rubrics. In fact, the origin story of AI for Education is "rubrics suck, but ChatGPT is really good at them."

So that's what I call the lightbulb moment. And if you're listening to this and you're thinking about building generative AI, or you're an educator: I keep joking that it's not better Google. I think people imagine what this could be as better Google. In fact, it is worse Google: it doesn't even connect to the internet, and it makes stuff up, okay? So it's not better Google. But what happens is people will use it as if it is; even when they do use it, they're using it incorrectly. We have never before been able to democratize access to building, creating, and understanding through technology the way we can at this moment in time, and all it requires is natural language. But natural language means a conversation, and we're not used to having conversations with technology. Even with Siri, you ask it a question, it gives you an answer; you ask it a question, it gives you an answer. With ChatGPT, it actually is a conversation, and it does seem like the better you are at asking questions, at prompt engineering, the better the outputs are.

So the way we try to shorten that lightbulb moment is we created this prompt library. It has about 50 prompts right now, covering everything from lesson planning to emails to newsletters to student-facing activities. Each one shows you an example that is fully worked, so you can just cut and paste it, and an example that you can adapt for yourself. I was thinking about high school biology, because that's what I taught, but you're going to reframe it and make it work for you; this is how you remix it, so to speak. And then you literally cut and paste that bad boy in. And now we're going to start differentiating between the prompts that work best on Claude versus ChatGPT, because now we actually have an alternative to ChatGPT that is actually better at some things.
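To make the remixing idea concrete, here is a minimal sketch of how a prompt-library entry can work as a fill-in-the-blanks template. The template text and placeholder names are illustrative assumptions, not entries from the actual AI for Education library:

```python
# A minimal sketch of a remixable "prompt library" entry.
# The template and placeholders are illustrative, not from the
# actual AI for Education prompt library.

LESSON_HOOK_PROMPT = (
    "You are an experienced {subject} teacher. Write a five-minute lesson "
    "hook for a {grade_level} class on {topic}. Include one real-world "
    "connection and one open-ended discussion question."
)

# The fully worked example you could cut and paste as-is...
example = LESSON_HOOK_PROMPT.format(
    subject="high school biology",
    grade_level="9th grade",
    topic="photosynthesis",
)

# ...and the remix: swap in your own subject and topic.
remixed = LESSON_HOOK_PROMPT.format(
    subject="middle school history",
    grade_level="7th grade",
    topic="the printing press",
)

print(example)
print(remixed)
```

Either string can then be pasted into ChatGPT or Claude as-is; the fill-in-the-blanks structure is what lets a teacher skip most of the prompt-engineering learning curve.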
And if you get to that moment of crystallization, all of a sudden it's like a superpower or magic. And it's not, but it is something where, if I can save teachers two hours a week, or five hours a week, on the stuff that, I'm sorry to say this to whoever loves lesson planning, doesn't matter that much realistically. The best lesson is about the moment your students are in; the context matters, the quality of the lesson matters, but it's so much more complex than that. So I just think at this point we can save people time, we can lower administrative burden, and we can lower cognitive load, again, but in a positive way.
So much to unpack in there. I totally agree, and that is so interesting: the idea of natural language and conversation. People sort it into a schema they already know, which is search, especially because Bing is using ChatGPT. So it's like, oh, I ask it something, and it searches and gives me the answer. And I think it's so important to get people over the hump of understanding that it's not that. It's conversational: you can have it interview you, you can have it make a tool for you. It is like having a super-powered assistant, let's put it that way. Maybe not a superpower for you, but like having an assistant that can do virtually anything, virtually instantaneously, especially the things you don't want to do, like rubrics and lesson plans. I've seen a few of those moments, but you've seen a lot of those aha moments. Do you feel that the people who have those moments then go back to their classrooms and use it, or does the magic stay sort of in the air, and not in their normal lives?
Well, I have two anecdotes. One: I was doing a professional development in Queens, a hundred teachers, end of the year, so no one wanted to be there, on ChatGPT and an introduction to AI. One of the first things we do is "AI in your pocket": if you can open your phone with your face, then you're already using what we'd consider classical AI, predictive text, and so on. And there's one gentleman, a physics teacher, who says, "I can't do this, I only have a flip phone." And immediately I'm like, this is my favorite guy. I love to play to, you know, the person in the back with arms crossed and feet up, like, "No, this is going to be terrible." But we did the whole thing, you roll with it, and I was showing them the prompt library, and all of a sudden he shouts at me, "Ask it how it can use dendrochronology to help with a crime scene!" And I was like, okay, first of all, I hope you're a forensics teacher, because otherwise this is getting very dark very quickly. But he was so engaged he could have taken over the whole session, just because he could finally do this. So I do have strong hopes that that carries into how he's thinking about his classroom.

But then the second one: I met a principal who then sent me an email like, "You have to show me that website again, it was so great, it was the best thing I ever saw." And you're like, oh man, this is what we are going to contend with, which is people who really, deeply do not understand technology running schools. I think she was actually not just a principal, she was a district leader. It made me smile, but it also gave me that little bit of fear of, okay, this is what we have to really think about when we plan.

We can also use Australia as an example of people using it or not using it, because they're in the middle of their school year right now; they're in term three. Australia and South Africa run January to December, so they have an opposite calendar, which means Australian schools are in the mix right now. Unfortunately, public schools have banned it, and in some states they're not going to unblock ChatGPT until next year; some states are more open to it. What we're seeing is that implementation is happening, but it's happening significantly more in independent schools that have not blocked ChatGPT or other generative AI tools. So, moving a little bit away from the question of whether people go back to the classroom: I'm very hopeful that they will. But I do think there's a real question about equity, about who has opened it up and who hasn't, and where that opportunity is going to lie. We're seeing it already in Australia: the kids who probably need it the least are getting the most access to it, not only potentially at home, but also definitely in school.
There was a really interesting article this week from the Chancellor of the New York City schools that basically said: hey, we know we banned ChatGPT, we put it on the list of blocked websites that you had to request access to, as a reaction halfway through last year, and it made a lot of headlines. And they said: but we also know that this is a transformative technology, we know it's an equity issue, just like you've just said, and denying low-income schools access to a tool that very well might define the next 20 years is not doing them any favors. It was a really thoughtful article, and it gave me a little bit of hope that there's, I don't know if it's quite a hype cycle, but a set of thought processes people can go through. Your first reaction is, "oh, another tech thing." Then the second is, "I keep hearing about this thing, so maybe there's something here." Then there's, "oh my God, what can this actually do?" Then there's, "oh my God, my kids are going to cheat with this all the time." And hopefully, down the line, there's, "this is amazing, and I think this could make me a happier and better educator." You've seen people reach that end state, I hope?
Yeah. Well, I think this gave me an idea for a piece, like the five stages of AI adoption: it goes through fear, denial, acceptance. But I'll just say, in New York City you still can't use it on the desktops. You can use it on your phone over Wi-Fi, but you can't use it in the classroom. So we say these things, but they're not yet in practice. And it was quite funny, and I'll date myself here: I did a session where I was teaching about generative AI in a school, and we couldn't get on; I couldn't get to ChatGPT, we had to kind of hijack the system. But the best part is that we were using a smartboard, and I taught so long ago that I'd never used a smartboard. So I'm teaching about generative AI, but I can't figure out how to make the images bigger or the text bigger on a smartboard. That's an indicative anecdote, not only of how old I am, but of how technology gets adopted and when it gets adopted. Smartboards have been around for something like ten years, and now we're going to see, again, this massive step change; you see things like Merlyn Mind thinking about integrating generative AI into the smartboards themselves. It is funny to see: we have these specific moments in time where things do transform, but this is way different than a smartboard.
Yes, but there's still that shock when something is new and it's not what you're used to. I think we saw that with Common Core, when kids brought home math homework and their parents were like, "I literally don't know how to do this; obviously it's crazy and wrong." We all have it; I'm not trying to put people down. But there's always this cold-water-on-your-face moment when you realize that the world has moved on in a way you might not be used to, and we all have to accept it. One of the things I really love about your work is that it's so humane. Like you said, you like talking to the skeptics. You're not trying to evangelize in a way that feels like a big tech person explaining why they're changing the world; it's about getting people on the same page about what this thing is, so that they can use it in a positive way and not be so afraid of it. It's really exciting. You mentioned personalized learning in passing there, and I think you said you realized immediately, upon using ChatGPT, that this is the tool that could actually allow personalized learning to happen. I totally agree with you. I'm curious how you've seen AI start to deliver that kind of personalized learning experience and make an impact on student outcomes, or even just get students more engaged in their learning. Can you share some success stories about students' reactions to AI in the classroom?
So what I'd say, and this is what I say to everyone, is that we're not really there yet. I don't think there are many fit-for-purpose tools built on the foundational models, OpenAI's or, you know, Claude or Llama. A lot of what I've seen that kind of works is experimental, or it's students doing it themselves.

I'll use an example. There's an organization in the UK that built a marking tool for high-stakes tests, put it into the world, and thought, oh, teachers will love this. But what actually happened is that at eleven o'clock at night, when students were preparing for those high-stakes tests, they used it to grade their own essays and understand what they had to do to improve. So it was actually used significantly more by students than by teachers. I think that's a good example of how we create something, release it to the world, and it does the exact opposite of what we thought it would. But you know, teachers aren't that available; we get burnt out and tired and grumpy. And we sometimes have biases: because you ask a lot of questions, or you look a certain way, or you've been sent to the dean's office, or you're Black or brown, or you're a girl in a math class. There are all these things that happen. And so there's this opportunity with ChatGPT: it essentially never gets tired. You can ask the same question a thousand times; it will never get frustrated, it will never tell you it's a dumb question, it will never stop answering. And that is something I think has been really helpful.

We've talked to students who have used it for spaced repetition, or who got $20,000 worth of scholarships because they knew how to use it to apply quickly, or who are using it where it really supports them, or they're building with it. I'll give another example and call out Christian: he is 15 years old and has built something called aceflow.org, a tutoring app that's in a very closed beta. He was able to build this tool at 15. We just had a conversation about high-growth companies; at 15 you can't really do high school and a high-growth company at the same time, no matter how much we want to. It's hard enough to do one of them. But he's a great example of how personalization extends past the classroom and into building, because again, this democratizes our ability to build. And it will continue: as enterprise tools get better, and we have an end-to-end suite of foundational models that have worked on hallucinations, bias, and so on, we'll be able to do better training, better fine-tuning, better reinforcement learning. Those tools, I know, are being built. I work with a couple of people, like Reading Rocket and Oko Labs and others, that are doing either early reading or early math. And I really believe that personalization technology is going to hit the K-6 space in the next year. There's a question about what happens in the 7-12 space, but for those foundational skill sets, early reading and early math, I think it's going to be transformative really soon.
Yeah, I really hope so; that's a very positive outlook, and I hope so too. I think people are softening on the really gut negative reaction to it being used, especially if it's being used inside a tool that's already considered safe or already common in the classroom. One thing that strikes me as I listen to you, and I've been trying to go as deep as I can to learn what's going on, is something you touched on with fine-tuning. One of the things I'll admit I didn't fully grasp when ChatGPT launched is this: you think of a ChatGPT or a Claude from Anthropic as these all-in-one, do-everything engines. But actually, we have an incredible ability to build very purpose-built LLMs that do one thing, or a few things, incredibly well. Those are much smaller LLMs, so they're cheaper to make, cheaper and faster to train, and you can swap out use cases on the same base model. There's a lot you can do to make a tool that does something specific incredibly well. That's part of why we see all these lesson-plan-creating startups: some of them are just using prompt engineering, but I think others are going deeper and saying, I'll train a model to do nothing but that. And if you train a model to do nothing but that, it's incredibly good at that. I'm curious how you think about it, because it's something I don't think we've really embraced in the education community. We're all building UX on top of big models, and there's definitely a lot there, but you can do a lot with an LLM for a specific use case. What do you think?
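To make Alex's "train a model to do nothing but that" idea concrete, here is a minimal sketch of preparing a narrow, task-specific fine-tuning dataset. The chat-style JSONL format below matches what several fine-tuning services accept, but the file name, example content, and exact schema are illustrative assumptions; check your provider's documentation:

```python
# A sketch of building a narrow, task-specific fine-tuning dataset
# (here: lesson-plan generation and nothing else). The JSONL chat
# format is common across fine-tuning services, but treat the exact
# schema as an assumption and check your provider's docs.
import json

examples = [
    {
        "messages": [
            {"role": "system", "content": "You write concise lesson plans."},
            {"role": "user", "content": "Topic: photosynthesis. Grade: 9."},
            {"role": "assistant", "content": "Objective: ...\nWarm-up: ...\n"
                                             "Main activity: ...\nExit ticket: ..."},
        ]
    },
    # ...in practice, hundreds more examples of exactly this one task...
]

with open("lesson_plans.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")

# A small base model fine-tuned on nothing but this one format tends to
# beat a general-purpose model on this narrow task, which is the
# cost-and-speed win described above.
```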
So there's a ton in there. What I would say is that a lot of what we're seeing is a land grab, and the land grab is what we'd call a ChatGPT wrapper: an interface with prompt engineering on top of OpenAI's API. It can be pretty cheap to build, but it has real costs, which is why you're being asked for $10 a month whether it's pretty bad or only kind of good. It's not like a lot of other technologies that you can run for almost no money; because of the compute costs, especially if you're using GPT-4 for the complex prompting, it can get quite expensive quite quickly.
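For readers who haven't seen one, this is roughly what such a wrapper amounts to. A minimal sketch, using the pre-1.0 openai-python interface that was current at the time (newer versions expose the same call as client.chat.completions.create), with a placeholder API key and an illustrative system prompt:

```python
# A minimal sketch of a "ChatGPT wrapper": a thin interface plus a
# fixed, engineered system prompt on top of OpenAI's API. Uses the
# pre-1.0 openai-python interface; newer versions use
# client.chat.completions.create instead.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

SYSTEM_PROMPT = (
    "You are a helpful teaching assistant. Use classroom-appropriate "
    "language and state the intended grade level in every answer."
)

def generate(user_request: str) -> str:
    # Every call pays per-token compute costs, which is why even a thin
    # wrapper usually ends up charging a monthly subscription.
    response = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": user_request},
        ],
        temperature=0.7,
    )
    return response["choices"][0]["message"]["content"]

print(generate("Write a rubric for a 9th-grade lab report."))
```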
Those wrappers are also really interesting because the foundational models underneath have significant issues around hallucination, and there are also questions of privacy. I struggle when I talk to an edtech or another technologist who's building, and they get really cagey, like, "oh, we have our own model," and it's seven different models, and "we're using this for that," which realistically means your data is going to seven different places. So you need to be really careful. We're actually publishing a set of six questions to ask edtechs about generative AI, covering hallucination, bias, privacy, transparency, effectiveness, and the human layer.

But then what you're talking about, there are a couple of really cool things there. There are these smaller models trained on just your own corpus of data: a walled garden, retrieval augmentation. It's trained on just that. You don't even have to use a ChatGPT wrapper; you can use something like a vector DB, where you're querying your own database. And the cool thing is that we're able to query unstructured data with language. So it can get really cool.
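As a concrete illustration of that retrieval pattern, here is a minimal sketch: embed your own documents, store the vectors, and pull back the closest passages to ground the model's answer. The sentence-transformers model name is a real, commonly used one, but the documents and the in-memory "vector DB" are illustrative stand-ins for a production store:

```python
# A minimal sketch of "walled garden" retrieval augmentation: embed a
# private corpus, then retrieve the closest passages for a query so an
# LLM can answer from your data. An in-memory list stands in for a
# real vector DB; the documents are made up for illustration.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

documents = [
    "Our district's homework policy limits nightly homework to 30 minutes.",
    "Ninth-grade biology covers cells, genetics, and ecosystems.",
    "Parent-teacher conferences are held twice per year.",
]
# Normalized embeddings make the dot product equal cosine similarity.
doc_vectors = model.encode(documents, normalize_embeddings=True)

def retrieve(query: str, k: int = 2) -> list:
    q = model.encode([query], normalize_embeddings=True)[0]
    scores = doc_vectors @ q
    top = np.argsort(scores)[::-1][:k]
    return [documents[i] for i in top]

context = retrieve("What does ninth-grade biology include?")
# The retrieved passages get pasted into the LLM prompt so the model
# answers from the corpus; if the answer isn't in the corpus, the
# model can still make something up, as discussed next.
prompt = "Answer using only this context:\n" + "\n".join(context)
print(prompt)
```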
Now, even with these walled-garden AI chatbots, you still have hallucinations, because you'll ask a question about a piece of data that doesn't exist, and the model could just make it up. Those kinds of problems are very hard to train out. But there are models being built that handle this better, and Claude is a great early example: if you ask it for a URL, it will actually tell you that the URL it gives is most likely incorrect. So these degrees of confidence, or the ability to say "I don't know," are starting to be built into these models, and that's super cool.

And as a researcher, I'm very nerdy, I get excited about accessing unstructured data. You don't have to go to Deloitte and spend $150,000 for a paper on your qualitative analysis; you could actually do this, even now, with Code Interpreter and GPT-4, to pretty good fidelity. So if you're an education institution with a whole bunch of content, you can create your own ability to query that content or data.

But then the other end, and we talked about equity: these smaller systems can be run locally. You don't have to have high bandwidth; you can be on a mobile device, or on your computer, in a low-bandwidth area. And that doesn't just happen in the places we usually think about; it happens in Appalachia, and Australia has terrible Wi-Fi in certain areas. You can have these tools running locally, which means you don't need internet access. How amazing is that in terms of democratization? So it's not only about getting really, really good at providing content suggestions, or student practice problems, or practice tests, or whatever you might be doing; on the other end, you no longer have to be in a high-bandwidth area to get access to these corpuses of knowledge.
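A minimal sketch of that local, offline setup, using the llama-cpp-python library to run a small quantized open model entirely on-device; the model file path is a placeholder for whatever model you've downloaded:

```python
# A sketch of running a small open model fully offline with
# llama-cpp-python: after a one-time model download, no internet or
# high bandwidth is needed. The model path is a placeholder.
from llama_cpp import Llama

llm = Llama(model_path="./models/small-model.gguf")

output = llm(
    "Q: Give me three practice problems on adding fractions.\nA:",
    max_tokens=256,
    stop=["Q:"],  # stop before the model invents a follow-up question
)
print(output["choices"][0]["text"])
```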
Fantastic, a really comprehensive answer; I've learned a lot from that. Just building on your last point: some of the answers we all need as an ecosystem are around how we maintain privacy, and how we maintain access. If you're in a New York City school, how do we make sure you can still learn from these kinds of tools, so that it's not just something you first encounter when you go to college or start your first job? That's not good. It feels like this weird convergence, where the techno-optimists of this moment are also, at least as they like to see themselves, and I agree with this, the ones trying to fight for equity. The same way that teenagers created their first web pages when the internet started, and those teenagers went on to push it forward in the world, this 15-year-old you're talking about, who's already built a company based on gen AI, has an enormous advantage in life; he's learning tools that will help him forever, that augment almost anything he could do. So I guess the question is: how might we use things like the six questions you just mentioned? How might we build trust in the ecosystem, so that edtech companies trust educators, and realize they have to be entirely trustworthy to actually break into schools, that they can't color outside the lines? How can we get educators and administrators, in higher ed or in K-12, to see this? How do we all start rowing in the same direction, and not let this thing just bounce around until the media blows it up?
Okay, so this is where I might get in trouble. If you think that OpenAI, or Microsoft, or the others are doing this for the good of the world, it is not true. They are doing it to make money, and that is what it is. And that matters, because I've seen some pretty irresponsible things happen. There is no ability in OpenAI's tools to track hallucinations or report them. There is no ability to report bias. Every once in a while it'll change and say, "hey, here's a new thing you should read," but realistically that's not the same. You see that Meta said Llama was open source; it's not really open source, because they don't want it open for people who could make a lot of money off of it. You see that Microsoft has rolled out its Copilot in Australia without really talking to schools about what the model can and cannot do. So there's that component: these tools are not made for schools or education. One of my favorite things is that Sam Altman went on a world tour, talked to people, and came back and said, you know, education is important. And you're like, okay, yes, but realistically you had to have known in your boardroom that education was going to be one of the main use cases for this. And yet it's treated as a surprise.

The second thing to say is that I think we have an enormous problem in edtech, and I'll put it bluntly: we do not trust educators, we do not listen to educators, we do not listen to teachers, and we do not listen to students. What we do is find a solution and work backwards to a problem. Whether you're a technologist who thinks, "I'm going to make an Instagram for kids," or whatever you think will be the next big thing because the TAM, the total addressable market, is huge. And so we see this really fragmented market. People don't play nice that often, either; you just don't see a lot of deep collaboration. Even when roll-ups happen, mergers and acquisitions, which are kind of considered the only way to really "make money," and sorry, you can't see my air quotes because we're on a podcast, you don't necessarily create systems that are true ecosystems. Which is why we have thousands of tools.
So that's all to say where I think there are some massive issues. What's fascinating to me is that what you just described, safety, should be table stakes, and it's often not. That's why I say to schools, and I'm sorry if you're an edtech here that's trying to make money: do not pay for anything for the next six to eight months. Because at this stage, these products are built on foundational models that are flawed, that are not safe, that have bias and hallucinations. There's no way to fully get rid of that; it still happens, maybe only a small percentage of the time, but these are not fit-for-purpose tools. And while they can be great for certain use cases, they're not safe yet, and if they're not safe, you should not be buying them. That should just be the underlying principle. And then one of our six questions is about proven effectiveness. Where has this worked? And not just where it's worked: how do you ensure that you're getting feedback, and reviewing and actioning teacher and student feedback, as part of your process? I think this is a huge, huge problem we have. And I know how hard it is. Anybody out there who's an edtech person: I was an edtech CEO, and I could write a whole book about how to mess up a tech rollout for schools; I'm your woman. So I understand how difficult it is to build, because of the fragmentation, the data interoperability issues, the districts and schools with different processes. But at the same time, we have got to prioritize building tools, generative AI or not, that actually are safe, that actually are fair, that actually are responsible, and that work. And if you can't do that, then please, you know what, you can make a lot of money in enterprise software; B2B enterprise is a lot easier than edtech, sorry. I don't mean that people shouldn't be doing it, but we have so much out there that doesn't actually help.
Yeah, there are two big gaping holes in the story you just named. The first is safety, the table stakes of privacy and safety, the things you just need in order to be in a school and not be liable, or not be, immorally, doing something that's going to hurt students in some way. That whole thing is huge. And then there's not having that many positive examples, which you also named: where have we seen learning outcomes go up? Where have we seen a whole school of 15-year-olds creating their own companies or solving societal problems? Those feel really connected to me, personally, because I think what we're seeing is that this is such a new technology, and people are trying to figure out how to use it. And this wrapper thing is by far the easiest thing to do as an edtech: you make a wrapper. We just put out a newsletter piece about how a lot of small companies can move really fast now, because they can use the whole power of these tools to make content or to give responses in chatbots. That's amazing, but it has a big problem, because it is not yet safe. And if the kids don't have enough access to do the amazing things, and the educators don't have enough access either, or enough training, then I think it has to start with the safety. I've said this for a long time on the podcast, and I'm ranting here a little bit, but we are vulnerable right now. Because the first time something really bad happens because of AI, and it could be literally any moment now, all those fears that the educators and district leaders and principals have are totally validated. And there's nothing on the other side: no matter how exciting Midjourney is to look at, or how amazing ChatGPT is, right now there's very little on the "but here's why this is so amazing, we can't ban it" side. How do you think about that? It's a chicken-and-egg problem, right? How do we make it safe enough to be used in classrooms, and how do we make educators excited enough about the possibility to get the ball rolling and make something big that's safe?
So I feel like we're unfortunately in this timing difficulty. One of the pieces that sits with me is that there's this human creativity test that theorists believed would be passed by an AI in 2050, and now the estimate is 2024. One of the things I say a lot is that we've been caught flat-footed, and not only in the sense of schools and leaders and educators, but in terms of the technologists who are building. There was no gradual release of this technology that could have led to better, safer, more thoughtful approaches to implementation. What ended up happening is it went live in November, and then there was a huge step change between November and January. And there's an additive part here: generative AI is not a fad, and it helps us code, so by helping us code and build, we're doing these things faster, which means we're seeing an acceleration of technology, not just in generative AI.

So what's happening, and this is where I can't offer a lot: when I go and talk to schools, they ask, "Do you have tools that you suggest?" And I kind of hedge and say, right now, you do capacity building, you use generative AI in positive ways; if you're teaching kids under 13, you demonstrate or show the possibilities.
We've got our curriculum around digital literacy, all those things, but we have this gap, and it's very tenuous. For example, I was talking to an entrepreneur who's building an AI grading tool, and they have so many good thoughts. It's so thoughtful: it asks people to think about their bias, it tells them the output is a first draft. But I'm like, what happens when someone doesn't use it as a first draft, when it is their final draft or their only draft? What happens when it's right 90% of the time, and people start to recognize that it's right and stop caring as much? Because if you grade someone incorrectly through these tools, you can significantly degrade the relationships in the classroom. You could also really anger a parent. And if this kid now fails something, that could mean they don't get into college, or they miss a scholarship, or what have you; that could be enormously damaging.

So my question, and what we're thinking about a lot, is how do we help schools think through adoption strategies that hedge. If you're listening and you're a school leader, please put something in your academic integrity policy that covers not just ChatGPT but generative AI usage in general, and do it before the start of the school year, even if it is a placeholder. Because I'm talking to people who say, "We'll do it in October," and October is going to be too late. I think this is the crux: we're in this really difficult dance, where the people who understand realize that the technology is not fit for purpose yet, but we can't ignore it, and we can't say, "yes, do all of this," either. So we're in this back and forth of, how do we do this? All we're trying to do is put as much into the world about safe, responsible adoption as we can. We're trying to partner with schools and districts, but a lot of what we do is just free, Alex; we're just putting out free stuff. Apparently, as an edtech CEO, I was really good at making money for other people, but when it's my own business, everything is free. I become Oprah, and instead of cars it's a student AI policy and a free course, those types of things. But that's part of the reason, because critical use of this is the only way, I think, that we get to a point where people know it's not ready yet, but that it could still be really great in certain use cases.
Absolutely, I couldn't agree more with any of that. This tenuous place, I find it very discordant, because I see these projects where you can ask ChatGPT to create a character description, feed that description to Midjourney, put the output into Runway, and basically create your own character that can perform your own plays, and the whole thing takes an hour. I'm like, that should happen in schools. But it can't, and I get why it can't, and we're just in this funny place. And I always feel like the big missing piece is the safe educational LLM: getting away from OpenAI, and, I don't know as much about Claude, but really starting to say, we need a purpose-built, built-for-education system that knows about academic integrity, that knows about privacy, that knows that if you start to upload an IEP, it should tell you, "That is a legal document, get that out of here." It doesn't save your work. We need this stuff. But to your point earlier, education is not the big companies' primary use case, even though you still see these amazing things, like OpenAI giving Khan Academy and Duolingo early access, Khan making the Khanmigo tool, and then, as of this week, Khan making a deal with Instructure to put Khanmigo inside Canvas, which is in something like 40% of all US classrooms. Okay, wow: suddenly you have a hopefully vetted, hopefully safe tool, very widespread. I don't think people even saw that coming. So the work you do is so important to set the stage for how to be optimistic and skeptical at the same time. Does that make sense?
Yeah, we talk about responsible optimism. We were saying "cautiously optimistic," but it didn't seem strong enough: we are truly optimistic, but we also believe that "responsible" is the number one most important thing. And there are things like Khanmigo, which is actually kind of expensive, still built on GPT-4, and you can hijack it pretty fast. But Merlyn Mind, you know, they're building a classroom-focused LLM that lowers hallucinations, lowers bias, and uses a walled-garden approach. And we see Google come out and say, "hey, we're building this LLM specifically for medicine." You just wish there were that same care for education, especially from these big, open-access foundational models. And you can make money in this; if you did it right, you could make money in this. I wish that were something that happened.

For us, it's actually quite funny: I joke that I'm going to get a soapbox and put "AI for Education" on it, because I really enjoy getting on my soapbox about this. But at the same time, this idea of responsible optimism lets me feel confident that I'm not part of the echo chamber of "this is the worst thing," "this is the best thing," "this is going to change everything," "this is going to change nothing." What we're saying is, in some cases: oh my God, yes, lesson planning, go do it right now, it'll save you so much time. And at the same time: do not rely on it for facts, research, URLs, or grading. We can go back and forth. And I think this is what's so fascinating about rhetoric and gatekeepers: sometimes you feel like you have to say the strongest thing, and whatever that strongest thing is, you go and die on that hill, and we don't have nuanced conversations. I want to have nuanced conversations, because I actually care about this. Not that other people don't, but I have the pleasure and privilege of thinking about it every day and keeping my skills up to date, an ability I understand other people don't have. And if I can't be nuanced, then all I'm doing is adding to this echo chamber, this deluge of information that is extremely hard to navigate and to sort for what's actually worth listening to. That's not to say we do it perfectly, but that's our goal.
We are at this point where a lot of the tools are designed for educators, and that is smart, because any tool designed for an educator by definition has a human in the loop, right? It has a trained professional educator making sure the output is not hallucinatory, ideally not biased, and not inappropriate. You may not agree with this, but I feel like that is as far as we can go at this moment. And I'm a big fan of Merlyn Mind; we've interviewed Satya Nitta from Merlyn a few times, and we have an episode coming out with him quite soon. What I love about their approach so far is that they're looking at it from the actual problem side backwards, rather than from the possibility side forwards. They're saying: this needs to not save student data, this needs to not provide inappropriate responses; it has to do that, or everybody's liable. There are just big problems on the horizon otherwise. I've heard that students are using Google Docs comments right now to bully each other; kids can do anything, good and bad. But you can't put a chaotic system inside another chaotic system, which is, I think, where we're at now.
Oh man, chaos plus chaos: that sounds like my experience exactly. I mean, I think you're right. I get a bit annoyed about some of these surveys of very specific samples, the ones that say teachers use it more than students, or students use it more than teachers. There has not been a representative-sample study of educators or students; the studies have been either edtech-focused or run by certain types of foundations, and they've had extremely low sample sizes. I'm sorry, for three million teachers, a sample of a thousand? I think we have some flawed survey protocols, and as a researcher it's something I weirdly care about. But the problem is, one survey says 62% of students will use it for homework, and another says more teachers are using it than students. What that can do is create these false senses of security, that teachers know what they're doing, or these false areas of concern. And when you create a survey that asks students, "Will you use it for your homework?", you're telling them they can use it for their homework, and then they're like, "Yes, I would love to use it for my homework." You're actually teaching them, through your survey question, what's possible. Did that student have the intent before that? That's a question that's up in the air. And the reason I say all this is that we just don't have a firm understanding of what's on the horizon, because we don't know who is using it, how they're using it, and what they think about it. It has to be a priority, as an education stakeholder group, to get to that as quickly as possible, and then work backwards from that into what tools actually get created.

You know, I don't want to see an enormous uptick in Amazon deliveries of blue books, and suddenly kids are doing a thousand oral exams. But that's a possibility, and maybe it's going to be important for some things, right? Maybe you need that kind of retraction of focus so that you can then create better processes going forward. But what happens if that becomes the next five years, and our kids are back in the industrial model, standing up and doing recitations like it's the 1700s, in an 18th-century schoolhouse? So anyway, I just think it's so important for us to come together, accurately and deeply understand what this means, and then work backwards and solution from there, instead of assuming everyone's going to use it for X. We can show the possibility. One of my favorite things: Matt from Oko Labs had this great way of framing a question. He went out and asked teachers, "If I could give you an extra set of hands, what would you do?" And I thought that was such a beautiful way of reframing the problem statement. What he heard was that if they had an extra set of hands, they would be doing small-group instruction. So he went and built something focused on small-group instruction that uses voice, natural language processing, sentiment analysis, computer vision. We need to do more of that. Yes, it's a really positive direction.
Yeah, a very positive note to end our conversation on. I could go on a lot longer; there's a lot to talk about here, so maybe we can do a part two. But Amanda Bickerstaff, of AI for Education, obviously thinking so deeply about this: let's wrap up with the two questions we end every podcast with. First is: what is the most exciting trend, and we know you can't just say "AI," that you see in the edtech landscape right now? Something maybe around the bend, from your perspective, talking to all of these educators and knowing so much about AI. What should people keep an eye on going forward?
I think we've talked about it a couple of times already, but this idea of deep personalization. It goes twofold. One is that it focuses on the student's zone of proximal development, creating pathways to gaining competency in challenging but achievable ways. Plus there's the ability to create engagement, not in the sense of seat time or eyes on task, but: I love Pokemon, or unicorns, or food, and now that's going to be deeply part of what I'm doing. I can be the lead character in my book; I'm no longer the best friend, I'm the unicorn, Amanda Bickerstaff the unicorn, doing our thing, and I'm reading. Or I'm deeply having conversations about math because I want to be an entrepreneur, and me, I would have loved to understand financial modeling over trig. So I think that's where we're going, and we're going to see it. I'm excited about that. I think we're not quite there, but I'm excited to start having those great conversations and showing people those tools in, you know, six to eight months to a year.

Yeah, absolutely.
I just talked to an AI specialist who worked with his kids, and they made digital twin avatars of themselves: they look like them, they're trained on their voice, they're trained on their data. So they can literally ask themselves questions, have conversations, and learn from themselves. It's one of the craziest things I've ever seen. What is one resource that you would recommend for somebody who wants to dive deeper into any of the many, many topics we discussed today? We will definitely put links to all of the resources from AI for Education; the prompt libraries are so helpful, and there are so many other great resources. But what is one external resource that you would recommend for somebody who wants to understand this world better?
I really like Ethan Mollick's work and his One Useful Thing Substack. I think he is really thinking deeply about AI in education, and he also loves experimentation and does some wild things. It might be slightly too advanced in some ways for, say, a teacher; it needs to be brought down a magnitude for a less technical person. But for the edtech people out there thinking about AI and education, Ethan does some great stuff, and he does it with a balanced approach. He definitely is one of these responsibly optimistic people. A goal of mine is to get to a point where Ethan would talk to me. But if you're going to follow someone, he's great on LinkedIn, and he has his One Useful Thing Substack as well.
Fantastic. We will put links to the Substack and his work, he's published some papers on this as well, in the show notes for this episode, along with all sorts of links to everything Amanda is doing at AI for Education. And you said "Amanda Bickerstaff, unicorn," and I was like, Amanda Bickerstaff, unicorn: interesting title. Thank you so much for being here. It's been a fascinating conversation, and I'm so glad to be able to talk to you. I have a feeling we're going to talk again a couple of times, because this is a field that has so much potential if we can be responsibly optimistic. Thanks for being here with me on EdTech Insiders.
Thanks, Alex. It was a lot of fun.
Thanks for listening to this episode of EdTech Insiders. If you liked the podcast, remember to rate it and share it with others in the edtech community. For those who want even more EdTech Insiders, subscribe to the free EdTech Insiders newsletter on Substack.

This episode of EdTech Insiders is sponsored by Magic EdTech. Magic EdTech has helped the world's top educational publishers and edtech companies build learning products and platforms that millions of learners and teachers use every day. Chances are that you're using a learning product that they've helped design or build. Companies like Pearson, McGraw Hill, Imagine Learning, and the American Museum of Natural History have used their help to design or build some of their learning products. Now Magic wants to bring its pedagogical and engineering expertise to make your key learning products accessible, sticky, and well adopted. Check them out at magicedtech.com, that's M-A-G-I-C-E-D-T-E-C-H dot com, and when you get in touch, tell them EdTech Insiders sent you.