
Week in Edtech 10/30/23: Biden's Upcoming AI Executive Order, Edtech Events, State of AI Report plus Special Guest Erin Mote of InnovateEDU

Nov 04, 2023 • 1 hr 18 min


Transcript

Alexander Sarlin

Welcome to Season Seven of Edtech Insiders, the show where we cover the education technology industry in depth every week and speak to thought leaders, founders, investors, and operators in the edtech field. I'm Alex Sarlin.

Ben Kornell

And I'm Ben Kornell. And we're both edtech leaders with experience ranging from startups all the way to big tech. We're passionate about connecting you with what's happening in edtech around the globe.

Alexander Sarlin

Thanks for listening. And if you liked the podcast, please subscribe and leave

Ben Kornell

us a review. For our newsletter, events, and resources, go to edtechinsiders.org. Here's the show. Hi everyone, it's another edition of the Week in Edtech with Edtech Insiders. Alongside host Alex Sarlin, I'm Ben Kornell. Great to see you again, Alex. What a week it was. The conference was great last week; so much exciting stuff to talk about. Good to see you here today.

Alexander Sarlin

It's great to be back on here. We have been off for a little while, and I'm so pumped to get back into the mix and catch up on all of these amazing things that have been happening in the world of edtech, in everything: big tech, higher ed, K-12, and certainly AI. So I'm looking forward to jumping right in.

Ben Kornell

Before we jump into the bigger world, let's dive into the world of Edtech Insiders. What's coming up on the pod? And we should probably give our listeners a recap of what we've been up to the last two weeks.

Alexander Sarlin

Sure. So we have a bunch of really cool episodes coming up. Next week we're talking to Honorlock; they do AI-based proctoring. We talked to them briefly a couple of years ago, and now they're really evolving and trying to think about what the world looks like in this totally changed AI world. So that's AI and integrity, but through computer vision. It's really interesting.

And then next week we've also talked to Anurupa Ganguly from Prisms VR, who does just amazing stuff with VR and AR for science and math. And then Kenzie Butera Davis, who won the pitch competition at last ASU GSV for her really interesting mental health company Maro. So a bunch of great interviews, and we're continuing to find fascinating people throughout the entire field.

Ben Kornell

Yeah, we've got so much great stuff coming up on the pod. And then we had 50 of our favorite founder friends, and another 700 or so favorite Edtech Insiders community members, join us last week. Tell our audience a little bit about the conference.

Alexander Sarlin

So you've got to go through the Founders Forum, because I was not there; that sounds so cool. But we did our first Edtech Insiders virtual conference last week, and we really bit off a bit more than we might have been able to chew, and then I think we pulled it off. It was an 11-hour conference in one day, where we had 44 different speakers, 40 of whom were CEOs of edtech AI companies. It was all about AI and education, and we hit it from every angle we could possibly think of: we had a superintendent, we had professors, we had CEOs, we had an investor panel with Owl Ventures and Reach Capital and, you know, Katelyn Donnelly, one of our favorite investor guests. We just went all day, we had hundreds of people coming in throughout the day, and got great feedback. So I hope this is something we can do on a regular basis. It was so amazing to bring everybody together. What did you think of the conference?

Ben Kornell

It was really amazing to hear the debate, the dialogue. You would think it would be a hype session and attract people who are all in on AI, but we just had a diversity of views. We had people dialing in from all over the world talking about their experiences, speaking to issues that are hyperlocal as well as issues that span not just education but AI, tech, and ethics in general. So from every single layer, it really was stimulating. I've been to a couple of virtual seminars before, and it's easy to kind of tune out; this one just kept pulling you in, with so many great insights from people. So I'm going to be digesting this one for a while.

Meanwhile, the week before, we actually had a more intimate event with 30 edtech and AI founders. It was called the Edtech AI Founders Forum. Catchy name. We really tried to dive deep into the main issues, challenges, and opportunities for founders. It was workshop-style, it was founder-led, it was really great, and we had some great guests too. One of my big takeaways was that AI edtech problems are often just edtech problems. There are many of the same challenges we see with every other product; AI is no different. You've got to figure out what problem you're going to solve. Is it an aspirin or is it a vitamin? How do you scale it? How do you find the buyers? How do you ensure that learners are getting the adaptive tools they need? So many great themes. But whether you were at that intimate event, the happy hour that followed, or our virtual conference, it's just so great to have the Edtech Insiders community connected and collaborating on everything.

Alexander Sarlin

It's so cool to have these amazing local events. I know you had over 100, maybe 150 people in San Francisco; people love to connect and see each other. And at the virtual event, we had people from 35 different countries. So what's been so cool is this idea that if you're local, and you're in a tech hub, and you want to connect with other people who really think about this, we want to help you. And I think people really are hungry for that. But there's also a chance to go global, because this is happening all over the world at the same time. It's just an amazing moment for education and edtech, and everybody's struggling to figure it out. So it's just so important to bring everybody together, so we're not all figuring it out in tiny silos and reinventing the wheel over and over. There's a lot happening, and I think that's a great segue to our first headline.

So by the time you're hearing this, the executive order will have already come out; we may do a little bit of a live episode to cover it. But there's a humongous AI executive order from President Biden in the US coming Monday, October 30, and it is going to be a biggie. A draft order was put out last week, and it covers a huge swath of what could happen in the AI landscape, and specifically has some really interesting pieces about education, as well as healthcare and immigration and agencies. I know we haven't actually seen the final version yet, but from the draft, what stands out to you from this big executive order that's coming down the pike?

Ben Kornell

Well, I think what we're finding is two things. One, there are already many laws relevant to AI in place, and the Biden administration is going to lean into enforcing those laws, laws like data privacy. And while data privacy might not seem like a big deal, when you actually look at the mechanics of how these models work, and you look at child data and how it might be used in products, and the data flows and the agreements and all of these things, it could be really, really challenging for big tech players like OpenAI, Google, etc., to allow users under 18, or even under 13, to access their LLMs, even through third parties. So that will have a profound impact on edtech in general.

And then the second is, we anticipate they're creating a commission that's going to be looking at new AI regulations. Those cover everything from copyright violations and content, derivative content, all the way to what kind of review systems are required and what kind of transparency is required. I think the industry writ large is looking more at that one. I will say, for edtech folks, there's probably less danger in that second group, and maybe even an opportunity for more defensibility, in terms of people kind of ring-fencing what you can and can't do. But you know, we've been in this gray zone where we can no longer ban it, but we also can't say with certainty, okay, this is good for the learner, for kids, for data privacy. I think the Biden administration is hoping to take a meaningful step in that direction.

And just so you know who's been involved: they've had CEOs from the largest tech companies, but they've also been running this AI insights working group that I've seen some of the reports out of. So I feel a little more confident about this one than some of the other kind of blanket executive orders we've seen out of the Department of Ed, because you can tell people in industry are really engaging and thinking, okay, practically, how can you make this work? I don't know, what are you anticipating? What are you looking forward to?

Alexander Sarlin

So we've been talking on this show for a while about how AI is this incredible Wild West, and in some ways that's great, because it allows a lot of freedom for people to experiment. I think that's been really cool, and we're obviously very bullish on it. At the same time, it's been a little surprising to me that Europe and China and other regions have regulated and started to get ahead of this much more than we have in the US, and I think this is a major catch-up move. We've heard these recommendations from the Biden administration about how to use AI in the classroom, for example. But this order is expected to mandate the creation of a toolkit from the Department of Ed about how to implement those recommendations. And the how is kind of the whole thing in AI, right? It's one thing to have these high-level principles, and they certainly matter; we're going to be talking to Erin Mote about the SIIA principles that are just coming out now from the Software and Information Industry Association. But the actual implementation of these complicated things, like, as you say, privacy and security: How can kids' data be used? Can it be used to train a model? Can you remember something about a kid to personalize for them? What does it mean to have a non-biased system in a classroom? What does that actually look like? If you know that the underlying models contain bias, and they're open about it, how do you possibly have something built on them that is not biased? The how is so complicated that I think the focus here on actually finding where the rubber meets the road is healthy, but also scary, because there's no guarantee these things are going to fall on the right side.

One of the things that stood out to me is the mandate for big tech companies to submit reports to the federal government about how they train and test their foundation models. Those are the foundation models we've all heard of: that's GPT, that's PaLM 2, that's Llama, that's Anthropic's Claude, the big ones made by the big tech companies. That's a big deal, because they're all in competition with each other, and if they're telling the federal government the details of what they're doing, that's a little bit like asking Coke and Pepsi to give their secret formulas to the government. I think that might be the thing that gets the most headlines. And it's not about education, but it is indirectly about education, because so many of our education products are built on top of these models. I think some of these companies are going to throw a little bit of a fit about that one, maybe a bigger fit. That said, I agree with you; I'm sure they have been talking to everybody in the field. We've seen the Google and DeepMind CEOs and all of these folks coming to the White House to talk. So it's going to be a negotiation. I'm excited that regulation is coming; I don't think it's actually good for the field to be totally unregulated, because when things happen, and they will, there could be a backlash. At the same time, this is going to get a little messy, because this draft is going to tell the Department of Ed to develop resources that address safe, responsible, and non-discriminatory uses of AI in education within a year, meaning by the election in 2024. Safe, responsible, and non-discriminatory: none of those have clear definitions; they're very, very broad terms. It's just going to be fascinating. And I hope it goes in a way that does not slow down, or create huge speed bumps for, all of these amazing AI education companies that are trying to get into the classroom.

Ben Kornell

Yeah, I feel like this is one of those where we're going to be digesting some of this for another year or two. But the age of total lack of regulation is over. With the SIIA principles, and, you know, we have Erin talking about those, we have a sense of what it could look like. And then you also hear about new regulation of things like crypto and others really taking hold. I think what it helps you understand is that the US government is now taking a stance. With Facebook and social media and the internet, we kind of missed our opportunity to pass real regulation that protects consumers but also keeps that business here. Crypto is basically moving overseas because it's being regulated like terrorism financing. So that's also been a lesson. And so with this one, it's the Goldilocks thing: they're trying to get it just right, so that there's enough freedom and innovation but also enough safeguards. And that's where, in edtech, we mostly end up getting confined, or protected, by those safeguards.

Alexander Sarlin

Yeah. And I think I said it wrong before: the Software and Information Industry Association principles that we'll be talking about later include a steering committee with, you know, McGraw Hill, Instructure, Pearson, GoGuardian. They're obviously working very closely with big industry players to try to find that balance: principles that we can all get behind, that protect kids, that protect against bias, that protect against false information, you know, hallucinations and privacy breaches, but also, ideally, don't just put this huge wet blanket on this red-hot field. We will all have to see if they find that right balance, and ideally the Edtech Insiders audience will too. I mean, we've been talking to so many people in this exact AI and education space, including some of these very companies; we have an interview with the head of Instructure's AI marketplace coming out really soon on the podcast. We're right in the middle of it, and it's gonna be wild. I'm excited, I'm optimistic, I'm not anti-regulation; I don't think they're going to completely destroy everything. At the same time, yeah, you're right, the age of the Wild West might come to a pretty rapid close pretty soon.

Ben Kornell

Anything else catch your eye on the AI beat that you want to highlight before we move on?

Alexander Sarlin

Sure. Let's just do a quick lightning round here. The EDUCAUSE conference was over the last couple of weeks, and AI dominated it; it was absolutely everywhere, on almost every panel. ASU GSV just announced the AIR Show, the AI Revolution show, a whole other section of their ASU GSV conference in April, all about the AI revolution. That'll be really interesting to watch, and I'm sure we will be there to cover it one way or another. You sent me that State of AI Report, which was unbelievable. It's like a 160-slide report covering everything in the field; we will put a link to it in our show notes. If you haven't seen it, it's about the most comprehensive thing I've seen out there.

And then some smaller things. One thing that jumped out to me: the AFT partnering with GPTZero. This is interesting, and I don't know if it's going to go anywhere, but basically the American Federation of Teachers, the teachers' union, partnered with GPTZero, which was one of the first GPT-detection, you know, generative AI integrity programs. This was the one created by that Princeton student at first, and then it became a company. And it's starting to be pretty well accepted, at least among everybody I've been following, that nobody actually knows how to tell whether something is GPT-generated or not; it's really not something you can reliably detect. So there's a little bit of skepticism about this. And, you know, I don't even know what to make of the idea of a union partnering with an integrity company. Do you understand what that headline is even about? Why would a union want to partner with a specific integrity company?

Ben Kornell

Well, I think the story is really that the unions are starting to think they need to go on offense, and this is one of a series of partnerships that are likely to come. You know, the politics of the moment are that unions are getting in the way of innovation, new features, new developments, and so on. But actually, teachers' unions love some of the developments they're seeing in technology. They worry about the safety, but they're also very excited about teacher efficiency and making the job more sustainable. So I think this is a move where unions realize they need to be part of the solution, and they're starting to do partnerships in that regard. This particular one I'm not so sure about; it seems a little random, you know, AI identification, okay. But I am hearing a lot more activity, in California in particular, around the teachers' union trying to find ways to be proactive in both AI safety and AI capabilities for teachers.

Alexander Sarlin

Yeah, that makes tons of sense. We talked to dozens of companies, literally; there were two separate panels at our conference last week about tools for teachers, and every single CEO said their tools are designed to save people time, that nothing is trying to replace a teacher. It's all about giving the teacher time and ability and structure to do more of what they actually want to be doing, and less of the grunt work and busy work and painful logistical work. And, I mean, "15 times faster," in some cases "40 times faster," were some of the quotes we got. So the efficiency stuff makes a lot of sense.

Ben Kornell

I just worry that if teachers' unions go too far down the AI-detector, integrity-of-work path, that's like so eight months ago, right? Exactly, yes. But I do think educators are wary that if they accuse somebody of cheating, it might blow up in their face, and yet if they give assignments out, everybody might be cheating. What we hear from our community is that the answer is: stop making assignments that are easy to cheat on with GPT.

I want to rewind just a little bit. My highlight on the rest of the AI news: as I look at the number of things coming out, it's just amazing, and two main things stand out for me. One, multimodal is here. It's basically the convergence of vision, of image generation, of text generation, and of speech; you've basically got the full suite, with ChatGPT continuing to jump ahead. I thought Gemini would be here by now, so we really see OpenAI with a strong lead, but Google fast approaching.

And then the State of AI Report had some really fascinating insights. One is about this race between open models and closed models. What it shows, first, is that the closed models, which would be ChatGPT or GPT-4, outperform the average human on many tasks, period. They have achieved some level of outperformance of the average human being. Now, is it AGI? No, but we've reached a level of writing and task performance where we've kind of created a new paradigm for what computing can do. And two, the open models are right behind these leading models, which leads you to wonder where the industry is going to head, and the insights here give some great cues on where we might be headed. One is that enterprise prefers closed models, so enterprise players are far more likely to build on OpenAI or Google, and that's where the money is. The open-source ones are actually preferred by startups and early-stage companies that are trying to experiment and keep their costs low. So what we might end up having is a universe where people transition over time, depending on their customer base and on the quality of the outputs and the needs of their business, from open models that get them up and running to closed models, and vice versa. So there's a lot of diversification.

They also show that small, directly targeted models, with a training set 50 times smaller than any of the big models, can deliver the same results, so long as the specific tasks they're being asked about are in that field or domain. So that also gives us a sense that there's going to be a third category, which will be specialized LLMs, and I think that's a really important insight and development for edtech. Imagine an edtech entrepreneur's journey: it could include open LLMs, closed LLMs, and then specialized LLMs as they put together the concoction that makes great learning tools for educators and university data analysis. There's so much interesting work in that report, but it really helps us understand that we can't paint all LLMs with the same brush. So if you get a chance to read it, it's great; there's also a summary that we can post. But I think these two pieces coming together, multimodal and multiple types of models, give us a sense that generative AI and its capabilities are only going to keep expanding.
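To make the open-to-closed journey Ben describes concrete, here is a minimal sketch, in Python, of one way a product might start on a cheap open model and escalate to a closed frontier model when an answer looks weak. Both "ask" helpers, the model tiers, and the confidence heuristic are illustrative assumptions, not any vendor's real API.

```python
# A minimal sketch, assuming a hypothetical two-tier setup: a cheap
# self-hosted open model tried first, with escalation to a paid closed
# API when the first answer looks weak. Both helpers are stand-ins.

def ask_open_model(prompt: str) -> tuple[str, float]:
    """Pretend call to a small open-source model.

    Returns (answer, confidence). A real system might derive confidence
    from token log-probabilities or a separate grading model.
    """
    return "draft answer to: " + prompt, 0.55

def ask_closed_model(prompt: str) -> str:
    """Pretend call to a closed frontier-model API."""
    return "higher-quality answer to: " + prompt

def answer(prompt: str, threshold: float = 0.8) -> str:
    draft, confidence = ask_open_model(prompt)  # cheap first pass
    if confidence >= threshold:
        return draft                            # good enough; stay cheap
    return ask_closed_model(prompt)             # escalate when unsure

print(answer("Explain photosynthesis to a 9th grader"))
```

The design point is the one Ben raises: the open model keeps costs low while you experiment, and the closed model is reserved for the requests where quality actually pays.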

Alexander Sarlin

Amazing insights. Two super quick points, just around multimodal. DALL-E has now been incorporated into ChatGPT with GPT-4, which means you can generate images, or have it analyze images or PDFs, all these different things. So some of the different functionalities are coming together in ChatGPT, and it is really exciting and interesting. I saw this crazy social media post that was video-to-video AI. It was a dance video; maybe people have seen this. Basically, it was a person dancing, and then it would analyze that video and create a new video with a character. It's animated, it's totally AI-looking, on purpose, but it's doing the exact same dance to the millimeter. And I was like, oh, this is really what we mean by multimodal AI. It's any format to any format, and it can truly keep what you want to keep and adapt what you want to adapt and change. The term "meta-creativity" came up in our conference last week; I had never heard that phrase, and I'm not sure if that person made it up, but it's really the idea of being creative about what creativity even looks like. I really think we're entering that phase, and multimodal is really creating it. It's wild.

And then, to your really excellent point about open versus closed versus specialized models, here's a super rapid thought experiment.

The first experience most of us had with anything generative was ChatGPT, for anybody who wasn't already in the field, or maybe Google Bard or Bing, using GPT. And I think that got us to think of these generalized models as what this is. But I think there's a very good argument that the way this is actually going to go is people will make thousands of different hyper-specialized models. And when you need to do something, it's not that you have to go find that specialized model; it's that you go to a central place and say, this is what I'm trying to accomplish, and it finds you the best model for it, while you're asking. To me, that is arguably the actual future of AI. It sounds weird that way, but it actually makes a lot of sense. Because rather than asking the world-brain about something, and having to pay for the API and have it be open in all these ways, there's a lot to be said for finding the specialized version, which can be trained with much less data, which can be much cheaper, and maybe is fully open-sourced in as many cases as possible.

I know I say this a lot on this show, but I'm still confused as to why I'm not hearing this model more often. People think it's going to be an all-in-one application; I don't think it is. I think it's going to be one traffic-cop application that takes any kind of request you want, figures out, through AI of course, where it should be asking, and then gets the cheapest, fastest, most efficacious version. And to your point about enterprise preferring closed models, I think even in enterprise that's going to be true, because you'll have closed and open versions of these. I don't know, it's an amazing moment, and it's easy to predict and be wrong, but I'm just really curious about this decentralization of the generalizable models. I don't think the future is one thing that answers everything anymore. Maybe I thought that when this first came out, but it doesn't really make sense to me that that's how it's going to work. Why should one model rule everything, when you can specialize models on incredibly specific tasks, and that's the one you actually want doing it? We saw Blueprint Prep this month introduce the first AI-powered MCAT tutor. When you're trying to prep for the MCAT, yeah, you can go to ChatGPT and say, teach me how to ace the MCAT medical exam. But don't you want to go to a specialized model that knows exactly that, is trained exactly on all the previous MCATs, and does nothing but MCAT tutoring? I think you do, and that's really interesting. I know that was supposed to be two short things, but that's where I'm at with this stuff.
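To make the "traffic cop" idea concrete, here is a minimal sketch of a model router, assuming a hypothetical registry of specialized and general models; the model names, domains, and per-token costs are all invented for illustration.

```python
# Hypothetical sketch of the routing idea Alex describes: classify the
# request's domain, then send it to the cheapest model that covers it.
# Nothing here is a real model or real pricing.

from dataclasses import dataclass

@dataclass
class Model:
    name: str
    domains: set[str]          # tasks this model is specialized for ("*" = general)
    cost_per_1k_tokens: float  # illustrative pricing

REGISTRY = [
    Model("mcat-tutor-7b", {"mcat"}, 0.0002),          # hypothetical specialized model
    Model("k12-math-13b", {"algebra", "geometry"}, 0.0004),
    Model("general-frontier", {"*"}, 0.03),            # stands in for a closed frontier model
]

def classify_domain(prompt: str) -> str:
    # A real router would use a small, cheap classifier model here;
    # keyword matching stands in for it in this sketch.
    keywords = {"mcat": "mcat", "algebra": "algebra", "proof": "geometry"}
    for word, domain in keywords.items():
        if word in prompt.lower():
            return domain
    return "*"

def route(prompt: str) -> Model:
    domain = classify_domain(prompt)
    candidates = [m for m in REGISTRY if domain in m.domains or "*" in m.domains]
    # Prefer the cheapest model that can handle the request.
    return min(candidates, key=lambda m: m.cost_per_1k_tokens)

print(route("Teach me how to ace the MCAT").name)  # -> mcat-tutor-7b
```

The economics are the point: the general model is always available as a fallback, but any request that matches a specialized domain gets a model that is both cheaper and, in its niche, often better.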

Ben Kornell

Yeah, I think it's fascinating. And it does speak to some of the assumptions we had at the beginning of this journey with generative AI, which was that the largest models are going to be the winners, and it's a race to have the largest model. Now we're realizing that those models have their purpose, but there's so much diversity in what you can do at both the LLM layer and the application layer. So much of this now is being driven by where the defensibility is at the application layer, and at the application layer there's this real opportunity to diversify your sources.

It feels like we're actually going back in time a little bit with our headlines, because we started with the AI revolution, and next we're going to talk about K-12 education and go back to web 2.0. For those of you following along at home: web 1.0 was the internet, web 2.0 was social, web 3.0 was crypto and AR/VR, and not really sure how that's going.

We're probably in web 4.0 at this point. But back to web 2.0. The launch of all the social media titans, most notably Facebook, really ushered in a boom for tech. But now many, many states, in fact 33 states, have filed a lawsuit against Meta for knowingly targeting teens with addictive social media content that both causes internet addiction and has had some really, really negative outcomes in terms of mental health. This hubbub really started to intensify in 2021, when Facebook whistleblower Frances Haugen testified in the Senate, basically saying that Instagram and Facebook had data around teen mental health showing not only that the algorithms picked up when somebody was depressed, but that they also served them more content that would accentuate their depression. This included things like body-shaming images, and even sometimes serving kids content on how to kill themselves. So it was a remarkable and dramatic testimony, and now it's going to the court system. A number of school districts in California sued not only Facebook but also Google, because of YouTube's addictiveness, and TikTok because of its addictiveness; that's a different lawsuit. But the California lawsuit, particularly about Facebook and its kind of addictive algorithms, has now been picked up by 33 states.

Part of why we wanted to bring this up is that it affects kids and teens, and we care about that at Edtech Insiders. But part of it is also that we're seeing big tech being really negatively impacted by some of their work with people under the age of 18. I think it raises real questions about the appetite of a place like Google, the largest edtech company in the world: will they continue to invest in things like Google Classroom? Will they continue to invest in things like Chromebooks, Gmail for students, YouTube Kids, all the ways in which Google, as a titan of the industry, has been reaching into the under-18 segment? And then you have Facebook, where the majority of users are now older people; but if you look at Instagram, and at some of the newer features they're always trying to roll out, it's about getting new, younger users. How will that affect those companies' appetite for educational content, and so on? This is playing out in the court system. In my school district, you know, I'm on the school board; we met with the attorneys, and they're like, do you want to join the lawsuit? So it is really happening, and it's possible there could be a mega-lawsuit here, because the data shows that many executives in these tech companies were aware of the kind of addictive viewing habits and engagement by under-18s. What's your take as you hear some of this?

Alexander Sarlin

So I have a super specific take. If you combine this headline we're talking about here, 33 attorneys general, as you mentioned, Ben, plus eight additional attorneys general with a different but very similar case, and the state of Florida suing in a separate case, we're talking about virtually the entire country coming down on this one company. And then you combine it with what we were talking about earlier, the Biden administration about to pull the trigger on AI regulations that are very likely to get up in the business of OpenAI and Anthropic and Google and Microsoft. I'm just going to zoom all the way out here: we have seen decades of government being almost completely impotent in the face of regulating business. They have failed over and over to break up monopolies when it comes to any kind of big tech; they tried many times with Google and Microsoft and were just absolutely steamrolled by industry, especially the tech industry, leading to the FAANG companies, right, the biggest companies in the States and in the world all being tech companies. Government has had no chance to do anything to them, starting with the 1996 regulation, Section 230, that basically said social media platforms were not responsible for any of the content on them. If this suit against Meta actually has anywhere near the kind of impact you're saying it might, and if the federal government successfully keeps the AI companies from lobbying their way out, or whatever they're going to do to try to stop things coming down, I think we may be entering a new age of government versus industry.

That's my big take on this. Obviously we're edtech people, and the technology industry is amazing, but I remember working in Silicon Valley in the mid-2010s and just feeling like people thought that Elon Musk and Zuckerberg and Jeff Bezos and Reid Hoffman were these gods, these Masters of the Universe, the smartest people, who made everybody else look foolish and could just make anything happen. And I never liked that. Even as somebody who was right in the middle of it, it seemed very strange that the US government just kept stepping on a rake over and over again whenever it even began to try to do anything about these companies. It just felt ridiculous. Look, this might be another step in the same direction; it might be that Meta gets 10,000 amazing lawyers with their infinite funds and shuts this whole thing down. But I don't think it's a good thing if that happens. I think it's time for the government to protect people against some of these tech companies. And if these two things stand, I think we may be entering the end of the tech-giant era, where this handful of dudes, for the most part, define what our world looks like. It's never been a very good arrangement. I'm kind of excited about it. Is that weird?

Ben Kornell

Well, I think the tide has definitely turned on that hero narrative, exactly. The other thing I wonder about, though, is that in many of these companies there are a bunch of people working with really positive ambitions for the technology. And I think what we learned from web 2.0 was that great people could create things that had negative externalities. And I see us reliving that with AI right now. And so I don't

Alexander Sarlin

think it's even begun with AI. I mean, oh my God.

Ben Kornell

I think we're preparing for the storm that's coming. But, you know, this is one of those where the theory before was: if you bring together a bunch of entrepreneurial people who have good culture, good values, good intentions, good things will happen. And that was just not the way it worked. And some of that has to do with the fact that machines, or algorithms, are making decisions for human beings. That's where things got out of control, in relation to this lawsuit.

Alexander Sarlin

Well, they're also for-profit companies; I think that's an important part of this. With Meta, I don't have any doubt that Zuckerberg himself never wanted a single teenager to be depressed; these companies are not founded on that idea. But when you get a company that big, and you have huge teams doing algorithms, huge teams doing advertising, huge teams doing news, it gets out of control. You were just talking about the scale of these things. I've talked to people who have been in the world of trying to regulate Facebook's content, like the content moderators for Facebook, and that shit is dark. You might have the best of intentions, but when you're talking about systems that are billions of people big, nobody can control them. And I think that's what governments are for, right? Governments are the right size to make sense of systems that big.

Ben Kornell

I think this is one of those where I'm a little skeptical of our government's ability to execute. I mean, the fact that we just got a Speaker of the House after a couple of weeks of non-governance, and basically the world is in shambles; it's hard to imagine that they're going to figure out how to effectively get social media, web 2.0, back on board, and then furthermore AI. And when you're sitting in the chair at a school board meeting, you're thinking worst-case scenario, what could happen, and there are a lot of worst-case scenarios with social media and with AI. So I think this lawsuit is going to really define schools and their relationships with technology going forward. We've talked about how a full-stop ban of tech is not a tenable place for schools, but totally open is not tenable either. Where on that spectrum will they land? I think this lawsuit will be an important part of developing not just the fine, or whatever the punitive part is, but also a series of principles that schools and districts could follow. So, more to come on that. Let's take it over to higher ed. What do you got for us on the higher ed beat?

Alexander Sarlin

Yeah, the story that jumped out at me this week from higher ed was that, for the first time since basically the beginning of the pandemic, undergraduate enrollment in the US is up. We've been following this trend going down and talking about how college presidents are worried, and now we've seen enrollment go up. It's kind of confusing news, though. This is a preliminary report from the National Student Clearinghouse, and even though total enrollment is up, freshman enrollment is down by a good amount. So it's not that there are more people coming into higher ed in the first year. There's a big increase in community college enrollment, and there's a very big increase in enrollment at historically Black colleges and universities, a 6% increase. And there's this crazy racial divide, where white students are declining across the board, especially among freshmen, whereas Black, Latino, and Asian students account for most of both the undergraduate and graduate growth. So this is wacky news. I didn't expect almost any of this. I certainly didn't expect undergraduate enrollment to grow, and if it did grow, why would it not be freshmen? I guess people are just coming back in; there's some reason for it. And this racial divide is fascinating. What do you make of it, Ben?

Ben Kornell

I think the key stat for me is that nearly 60% of this growth is occurring in community colleges. So I will say the overall interest in higher education is still very strong. But let's not mistake this growth in enrollment for a vindication of the ROI of the four-year college model. I don't think that tide has turned. The overall impression right now is that at most four-year colleges, especially private colleges, tuition is not worth the price of admission, because of high debt, high interest rates, and questionable job prospects. But what you do see is these large-scale players, which might be ASU or Western Governors or the California community colleges, consistently enrolling lots and lots of students. And these are students coming from less-represented backgrounds who understand that access to their next-level jobs is really going to require ongoing higher ed coursework. So, in a weird way, we may be rewinding the clock to the way it actually was in the 50s and 60s, when a lot of students went to a local community college for two years to get an associate's degree, and then a portion of those students leveled up to get a bachelor's at their local university, rather than doing the whole four-year country-club thing. So I think this is great, great news overall.

As for the decline in white students, I wonder how that corresponds with overall population dynamics. But it also suggests that we have a large portion of white people, and maybe this is a reach, but I imagine the kind of Trump demographic, engaging less and less with higher ed. As we've reported previously, there's a view that higher ed is indoctrination into liberalism, and in red states there's way more skepticism of higher ed than in blue states. So it's interesting to put some of those trends together. It might be a little bit of a leap, but I think it's something for us to watch.

Alexander Sarlin

I bet it's a contributing factor, because we know there's just a major difference between the political parties' views of whether college is worth it, what it's for, and what it does. And to bolster your point even further: four-year institutions with lower acceptance rates, meaning selective colleges, including elite but really selective colleges of any kind, had the most pronounced declines. So that is right in line with your analysis: yes, people may be turning, to some extent, back to higher education as a means of mobility, but they're not necessarily turning back to the super selective schools, which are also the ones that have continued to have spiraling tuition and just haven't been able to get out of that spiral. That includes selective public schools in that same group. But I agree with you; I think it's a really good analysis. Undergraduate enrollment may be going back up a little compared to where it was, but it's not going back up in the direction it was headed before the pandemic. For the last few years, there really does seem to be a reversal of that college-for-all mentality. It's fascinating.

Ben Kornell

Well, with that, we're gonna go to our potpourri category, a new category we're introducing to the show. I'll take potpourri for 300, Alex. There's been a bunch of insightful and incredible news popping up all over the board; I'll take a couple that really popped out for me. First, in the edtech area: GoStudent. There's an exposé about, basically, a company valued at $3 billion partying hard, laying off hundreds of staffers, the kind of classic rise and fall. And it's really disconcerting. With the article, it's hard to know exactly how much is true today versus true six months or even a year ago, and it's from Business Insider, so there's some clickbaity stuff here. But I think we can all be pretty sure in saying that GoStudent is no longer valued at $3 billion, and their moves to acquire physical centers, to pull back from the United States, and some of their work in AR/VR really suggest they're trying hard to find their next vehicle of growth. It kind of goes with the Paper story that we had a month ago, and the big story we've been following over the course of the year. And speaking of Byju's: former edtech star Byju Raveendran has dropped off the list of India's 100 richest people. I think this is one of those where we're going to have to watch and see how Byju's sells off pieces of the company, see what's left, and see how much those assets are really worth. That's what I'm watching in the edtech space.

And then, more generally, you sent this to me, Alex: Slack is taking a week off for upskilling. Slack was acquired by Salesforce, and basically the entire company is not doing any work; they're all focusing on upskilling. I thought it was a really entertaining article, in part because it looks like a bunch of people had waited until the last minute to do their compliance training, and they're like, oh crap, we're not going to finish it, and we just got bought by this company, we should really do their training. But when you actually look at the courses they're being trained on through Salesforce's proprietary learning platform, it's pretty cool. It's really about learning the history of technology, learning interpersonal skills. There's a way in which I think this is a conscious effort by Salesforce to build a cohesive upskilling culture across the entire Salesforce enterprise, and now they're bringing Slack into it. It just felt like one of those moments where you're like, hey, here's a headline that shows how darn important upskilling can be. I'm sure there are some edtech players out there helping Salesforce build their learning platform. But that was a really fun headline. What was on your radar in the potpourri category?

Alexander Sarlin

I also loved that Salesforce-Slack article. Salesforce is a very serious company about corporate training and L&D; it's interesting, they've built a lot of stuff around that. And I hope it becomes a trend, this idea that companies, instead of trying to do learning in the flow of work, which has been the catchphrase for years in L&D, will actually say, you know what, that doesn't really work. Let's actually give people time to train and learn. Things are moving so fast; let's give people a week, like they did at the beginning of October, to step away and actually learn the things they need to do their jobs better. I hope that becomes a trend.

One thing that stood out to me: we've talked about Multiverse multiple times on this show. It's a really interesting company started by Euan Blair, the son of Tony Blair, in the UK; they don't love to talk about that, but it makes all the headlines because it's really interesting. Multiverse has had a very explicit approach of basically saying, we are doing apprenticeships in lieu of higher ed. We've talked to Ryan Craig, author of Apprentice Nation, about this idea that giving people direct work experience can actually be a meaningful substitute for higher ed; that's been their explicit stance. This week there was an announcement that they cut a few dozen jobs, and part of why they did that is that they're shifting models, at least that's what it seems like from the outside: from this apprenticeship model trying to replace higher ed, or really be an alternative pathway, to more of a core upskilling business, a sort of enterprise business.

Ben Kornell

It's a huge pivot.

Alexander Sarlin

It's a huge pivot, and it's a pivot into a very crowded space, but away from something that felt very bold,

Ben Kornell

and they were valued at well over a billion. So that's another unicorn in edtech pivoting.

Alexander Sarlin

Exactly, and pivoting in an interesting way. I mean, you compare that to the earlier headline, where freshman enrollment continues to go down and people are still doubting the ROI of college; you'd think Multiverse would be in the catbird seat. But obviously that's not how it's been playing out entirely. So, interesting news, and maybe we'll get a chance to talk to somebody from Multiverse about this pivot, because it's super interesting.

A couple of weeks ago was the Duolingo conference. I don't know if there's a ton of big news from it, but it is interesting. Duolingo continues to be one of the very biggest pacesetters and trendsetters; it's just an incredibly successful edtech company any way you slice it. They're not only moving into math, which we've reported on the show; they're also doing a whole music section of the app, which is interesting. And they're doing immersive English for advanced learners, as well as trying to build more and more AI-based features. They're adding more games in addition to their activities, and this sort of radio feature, which is kind of neat, where you listen to these podcast-like shows with Duolingo characters and then answer questions; sort of immersive listening practice. I'm not sure any of these are groundbreaking, but they're certainly interesting, and we always have to follow what Duolingo is doing, because everybody wants to be the Duolingo of X.

And then the last thing that jumped out to me is a little bit of a wacky one, but speaking of big tech: Snap, which is a company we very rarely talk about. It's a social media company out of LA that has gotten a little bit lapped in the last few years, and they're doing something kind of interesting in edtech. They're partnering with an edtech company called Inspirit to create augmented reality STEM lessons and bring augmented reality curriculum into schools. They're starting relatively small; over the next year it'll be used in at least 50 schools. This may just be a play to pick up where Oculus left off, or to look for a new market for their AR technology, which is considered not super successful. But it's also just kind of interesting, and I want to keep an eye on it, because as this AR/VR landscape continues to restructure, post Meta trying to run the gamut on it, maybe there is an interesting opportunity for companies like Snap to jump in and make some interesting experiences. Talking to the Prisms VR CEO, Anurupa Ganguly, she's a real believer, and it's very inspiring to hear her talk about the power of AR and VR in classrooms; as well as Kai Frazier, who we talked to, who does amazing stuff in that space with her company Kai XR. It's neat to see a big tech company stepping into edtech in a way that is very explicit. It's not the sort of quiet Google or Microsoft strategy, where it happens on the side and then you suddenly realize they've changed everything. They're saying, hey, we're going to try something in education, and we'll see what happens.

Ben Kornell

One of the things that's been happening in the last few weeks is just taking stock of where edtech is today and where it's going. A number of people have now charted the investment and growth trends connecting pre-COVID to where we are today, and they show COVID as this blip in valuations, a blip because of free money and all of that. But essentially, the growth of the edtech sector, and the growth of valuations and investments and so on, is at pace with what the trendline would have been had you drawn it from 2018-19 on to now.

The other big thing people are talking about is the number of exits and the number of unicorns. If we were to look at the list of companies that most recently raised at over a billion-dollar valuation, many of those, maybe even half, would no longer be valued at a billion. And then you also look at the publicly traded companies, whose stocks continue to be down, even though last month we reported there was some rebounding, for sure. And there's this thing called the IPO window. Most investors right now are saying, okay, the IPO window is coming open, new companies are able to go public now, and we're starting to see the first companies test that market, with mixed results, but not horribly bad, more like what we would have seen in 2018-2019. But the overarching feeling among edtech VCs is that that window is not really open for edtech at this moment. That's partly a result of what's going on with Byju's, which was really the next one up. But it's also a result of an overall cooling around defensibility, long-term TAM, and long-term opportunities in edtech for scale players.

So even though there's been a flurry of AI investments in edtech, and the rest of the space has been relatively quiet, we're at a point where the next quarter could be a quiet one even for the AI companies, while the edtech investment sector sees how things play out. If you're sitting at an edtech investment firm, you've got to think: okay, I've raised this much money, I have this much in my war chest; how much do I want to deploy now versus six months or a year from now? And sure, on any one opportunity I might place a bet. But writ large, I think there's a tendency right now to take a step back. It's been almost a year of generative AI; what have we seen, what have we learned, and where is our space going?

Alexander Sarlin

I agree; I think that's really good analysis. This executive order, what it says for AI, and how it's interpreted, whether it's read as regulation that will slow down foundational model development or something else, will probably also have some effect on the AI space; I would imagine a slowdown effect on AI investment as well. I just wanted to throw that in, because the space might be hit from two different angles.

Ben Kornell

Yeah. And by the way, some things may not play out the way you would think. What might be bad for the LLM providers might be good for edtech, or they might be aligned; it really depends on the specific area. But if you're looking for more guidance, my guess is that by Wednesday or Thursday we're going to have more questions than answers as a result of the Biden executive order and its reports and guidance. But man, is it going to be an interesting week in edtech. And as it unfolds, you're going to hear about it here at Edtech Insiders. Thanks so much, everyone, for joining us. We're going to take you to our interview; sit back and enjoy.

Alright everyone, we have our guest for the pod today, and it is the one, the only Erin Mote. Some people know her from Brooklyn LAB school, some people know her from InnovateEDU, others from Project Unicorn. She is a true mover and shaker, but also a unifier, uniter, and advocate for all things edtech. It's such an honor to have you on the pod today, Erin Mote.

Erin Mote

Hi, thanks. I'm so excited to be here. Really excited to talk about hopefully all things AI, all things tech. Let's go.

Ben Kornell

Let's do it. Before we go too far into the future, can you just tell our listeners a little bit about how you got started, and how you built InnovateEDU and your constellation of projects and initiatives around interoperability, data privacy, and student success? Yeah, so

Erin Mote

I'm an enterprise architect by training, which I think is really important, because it means I can only think in systems and that's something that dominates the ethos of innovate edu is how to think in ecosystems and how to move education to be a learning

system. And so when I started innovate edu, we were an organization attached to a charter school I founded in Brooklyn, New York with my husband, Brooklyn lab, and we got some really good advice in a windowless conference room in San Francisco, which was given the current regulatory environment in New York, don't try to do the innovation work at the charter school, create a

separate nonprofit. And so at the beginning, innovate edu was really focused on how do we scale ideas, technology, models of innovation from Brooklyn lab, whether that was our urban fellowship, or one of our tech products, to the wider universe to other charters to public schools. And to really do that through not for profit and not try to do it with the charter school. Fast forward, innovate. Edu is going to be 10 years old

next year, which is crazy. And the evolution of the organization really is from that sort of scaling agent, to now and to an organization that's deeply focused on building together uncommon alliances across the sector. And so I still think scale is really, really important. And I think now the way to do scale is to bring folks together to work on a common mission across different topics. And I think we think radical disruption in our

space is really important. And so at innovating view, I always talk about the work we're doing as a house of brands, not a branded house, because not everybody agrees on everything that we work on. Some folks can engage with us on students with disabilities, some folks can engage with us on data interoperability, some folks can engage with us on our new AI alliances. And so but a common factor is that we build trust, and we have folks row together in common mission and common

purpose. And if folks want to read more about that, I actually wrote a manifesto, a good manifesto, like a happy manifesto, about finding common ground, and it's on our website.

Alexander Sarlin

Manifestos just get a bad name these days. Right, so you mentioned in there this AI alliance, and that is the cutting-edge news that we're bringing to all our listeners today. Just this week, you and a whole group of other big edtech companies, a whole alliance of different groups, put together a set of principles for the future of AI in education in collaboration with the Software and Information Industry

Association. Tell us a little bit about how these came together and what the principles are.

Erin Mote

Yeah. So I'll talk a little bit about the wider alliance, and then I'll talk about the collaboration with SIIA. So a couple of years ago, we helped be part of the organizations, alongside DXtera and a number of other companies, including Renaissance Learning and Carnegie, that founded the EdSAFE

AI Alliance. When we did that, three years ago, ChatGPT was just a twinkle in our eye, but we knew that AI in education was going to come barreling towards us, and the sector really needed to be ready to take on a whole set of questions about how we think about AI as a transformative technology. I talk to regulators, policymakers, entrepreneurs, superintendents, state chiefs: AI is like

the internet. Just like 30 years ago, you couldn't imagine that you would be able to order groceries, order a car, do your banking, and write something on a phone in your pocket, and that we'd all be carrying the internet around with us. That's what AI is going to do to our space, and we can't miss the opportunity. I don't think we did a really great job thinking about digital use, digital access, and closing

the digital divide. We have an opportunity in AI to say: let's put our most marginalized learners at the center. Let's learn some lessons from some of the mistakes we made in the edtech and privacy space, where we now have this patchwork of laws that's difficult to navigate. Let's learn some lessons about how we make sure there are not haves and have-nots around AI and AI use in education, and ask what the things are that we can actually do

together. And that's what EdSAFE is going to be really focused on and continue to be focused on, all oriented around the SAFE framework. That might sound familiar, because it's also what Senator Schumer is architecting his innovation framework around: safety, accountability, fairness and transparency, and

efficacy. And that same framework is really a cornerstone of what's happening in Europe, of what's happening here in the US from a policy perspective, and of what's happening in many other countries. So I'm really excited about our ability to carry that framework forward at the global level, at the federal level, at the state level, and at the district level. And we announced some policy labs, starting up our first one with New York City just a couple of weeks ago, doing a policy lab with an open

science approach. As part of that work, we're bringing together a steering committee of organizations; SIIA is one of those, and we've been working with them at InnovateEDU and EdSAFE on these principles for developers and for industry. I think this is the lesson we learned in Project Unicorn, our large data interoperability initiative: you have to work on both the supply

side and the demand side. So how do we think about guidelines, guardrails, rules of the road, table stakes, whatever you want to call them, for districts and for states and for users? But also, how do you give some guidance to developers? That's what the principles are really about.

Alexander Sarlin

And this steering committee includes ClassDojo, Cengage, D2L (Desire2Learn), GoGuardian, edWeb.net, Instructure, Pearson, McGraw Hill, a lot of big, big names in the education publishing and edtech industry, within SIIA and the steering committee. So you've been working on these principles, and they're just now

becoming public. And I know you're thinking about how they can be used to guide policy in the future. Give us a little bit of an overview of what the principles are and what you hope they're going to do for the field.

Erin Mote

Yeah, I think one of the things that I would say about the work is: these are seven principles, right? These are things that, as you're sitting down and thinking about AI use in your tools, you should be asking yourself: do I have an answer for, and am I thinking about, these seven principles?

I'm really excited that in the work with SIIA, if you look at the principles, the number one principle is about putting learners, educators, families, and communities at the center. It leads off. You might not think that's what a technology industry association would put as the number one principle, but the fact that the user is at the center of the questions around the use of AI is, I think, a really

important part of that work. So I would just urge folks, as they read the seven principles, and I'll just say them right now: AI technologies in education should address the needs of learners, educators, and families. AI technologies in education should account for educational equity, inclusion, and civil rights as key elements of successful learning environments. AI technologies must protect student privacy and

data. AI technologies used in education should strive for transparency to enable the school community to effectively understand and engage with AI tools. Companies building AI tools for education should engage with education institutions and stakeholders to explain and demystify the opportunities and risks of AI technologies. And the education technology industry should work with the greater education community to identify ways to support AI literacy for students and

educators. What that means is: when you're developing and using AI, you need to be thinking about privacy, you need to be thinking about transparency, you need to be thinking about how you're communicating this to your users, and you need to be thinking about the work you're doing as being done in partnership with the schools, districts, states, and end users that you support, not as a sticker you put on a product that says "AI inside." You really have to help

me understand how the technology is being used and what the technology is being used for. I think that's where you're going to see policy going: explainability and transparency are going to be the cornerstones of where regulation is headed, along with student data privacy and

security. And so, you know, I think if I was a developer, I would be paying a lot of attention to how I explain the use of my technology, how I'm transparent about when I'm using AI, and how I'm staying in compliance with existing federal, state, and even sometimes local privacy laws. And then, of course,

cybersecurity. We want to make sure that folks who are using these tools are doing it in a secure environment and not increasing the threat of, and opportunity for, a cyberattack in their district.

Ben Kornell

Erin, can you talk a little bit about the ethical components that you considered as part of this statement, especially given that many of the entrepreneurs building AI for education are building on top of large language models that are constantly evolving, prone to bias and misinformation, and have issues with hallucinations? What is the responsibility of an edtech entrepreneur for the end product when a core component of it, the LLM, is not under their control? How do you think about that?

Erin Mote

Yeah. So I think we've got to first name that we know the existing LLMs out there have bias. I mean, there are some great folks out there who are leading the charge and showing us that, from explainability to visualizations: whether you're using Midjourney or whether you're using ChatGPT, all of those tools have bias in them. So we need to be really attentive; what's coming out of those tools shouldn't just be

taken at face value. And we should be keeping a human in the loop to really interrogate the output. If I was a developer developing a tool that was using AI, I would go back to my good old tooltips, right? Build some tooltips in there, folks, that help people understand. I know that's very in the weeds, but, you know, help people understand: here's the source of this; this

could be a hallucination. Put that forward. As you're taking someone into the screen for the first time, disclose that you're using an LLM. Which LLM are you using? Where can they find out more? What are the accessibility, inclusivity, and privacy policies of that host system? And help folks think about even

some tooltips around how to do prompt engineering. And then create a space: if you're a developer and you're putting a product out there, create some spaces for educators to experiment and play with your

tools in a safe environment. And then, most of all, and this is where we need to be in such partnership together on the supply side and the demand side, we need to be making sure that our educators and our students understand that if they put something into a large language model, it is not private and it is not secure. And we cannot be putting personally identifiable information into those tools.

And we're not yet there. There are small language models and things you can do to actually create more private environments, and I'm excited to see where education goes with that. But we just have to help people understand the table stakes of the now. And right now, we're not at a place where folks should be putting personally identifiable student information, or any sort of personally identifiable information, into any large language model or ChatGPT or

anything like that. It's just not safe. And I think it's our job, all together, to communicate that so educators and young people understand it.
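Below is a minimal sketch of the guardrail Erin describes here: scrubbing obvious personally identifiable information from text before it is sent to any hosted large language model. The regex patterns and function names are illustrative assumptions, not part of any tool discussed on the show, and a real deployment would lean on a vetted PII-detection service rather than a hand-rolled pattern list.

```python
import re

# Illustrative patterns only; real PII detection needs a vetted service,
# not a short regex list (names, addresses, and student IDs vary too much).
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact_pii(text: str) -> str:
    """Replace anything matching a known PII pattern with a labeled placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label}]", text)
    return text

# Example: the email address and phone number never reach the model.
prompt = "Summarize this note from jane.doe@school.org, reachable at 555-123-4567."
print(redact_pii(prompt))
# -> Summarize this note from [REDACTED EMAIL], reachable at [REDACTED PHONE].
```

The same scrub step can sit in front of any model call, whichever vendor hosts the LLM.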

Alexander Sarlin

We're just coming off our AI and education conference, and a couple of these principles really resonate with something we've been hearing from a lot of edtech companies and AI developers. One is addressing the actual needs of learners, educators, families, and communities. And the reason I say that is that when generative AI first came out, everybody was just amazed at its capabilities and saying, what can we do with

this? Now it feels like it's coming to a point where it's really about what people need. What actual needs can we address with this, rather than what needs could we imagine people might have? And the other is this idea of engaging with educational institutions directly and not building in a

vacuum. I'd love to hear you talk a little bit more about that part of it, companies working with educators and educational institutions in a safe environment. Help us understand a little bit about what that might look like. What is a safe environment when it comes to AI? Does it mean without students in the room? Does it mean a test environment with students who are warned and very carefully vetted? Does it mean avoiding PII or mental health issues?

It's just a very complex thing to calibrate.

Erin Mote

I say yes to all those things. I mean, any good technologist starts with: what's the use case? And I actually don't think that's an AI-specific thing. Anybody who's building good tech that's going to survive knows what their core use case is and understands who their core user is; that's not any different. So when you're thinking about integrating AI into your tools: does this meet a core use case? What is the core use case you're trying to address? Who is the

core user? And I think that is not just a good principle for using AI in education; it's a good principle for any sort of tech build, ever. So I don't think it's super different there. But efficacious use of AI is really important, which is why efficaciousness is the last principle of the SAFE framework from the EdSAFE AI Alliance, because, again, safety is first: the S is for safety, the A is for

accountability, the F is for fairness and transparency, and then we get to efficaciousness. You'd better have S, A, and F before you get to E, right? No matter what you're doing, you've got to do the table stakes first, and then you've got to think: is this the right application of AI? So I'm super excited about the productivity applications of AI in education. Listen, education is always going to be a human enterprise; no tech is ever going to replace a great teacher.

But we can actually make the profession more sustainable. And I've watched this happen, you know, when I watch a superintendent entering their bus routes into a code-gen data visualization tool and then being able to quickly consolidate and re-plan a bus route in, like, seconds. There are some really cool productivity applications that

are the now. And then there are some that are the next. One of the flagship investments of the Institute of Education Sciences and NSF right now, in education and AI, is a universal dyslexia screener. As a mom of a kid who is dyslexic, it would have been awesome if Robert had been able to have a dyslexia screener in first grade. It took very engaged parents fighting for him to get screened for dyslexia for

more than a year, and it's because screening for dyslexia right now takes a lot of teacher and educator resources. Imagine if we could use AI to screen all kids for dyslexia before they're in first or second grade, because we know those are key early intervention years. So imagine we could do

that. But for the 20% or 25% of kids the AI flags, right, where it says, ooh, there might be something here, there's a human who goes back and does a traditional dyslexia screening. It takes 80% of that workload off the teacher. That work is being developed right now by the Institute of Education Sciences and NSF. For me, that's a super compelling

and exciting use case. And then, on safe spaces: with the Global Edtech Testbed Network, which is a global association of those of us working at the intersection of edtech and inclusive innovation, we've released a set of principles (more principles, I know) around what a good trialing environment looks like. And a good trialing environment really does depend on your use case.

Sometimes that means you have to have students in the room working with the technology, because your primary user is students. When you do that, make sure you're complying with the acceptable use policies of your governing LLM. I just want to put it out there for our developers right now: kids under 13 cannot use large language models and be in compliance with federal law. From 13 to 18, you need parental

consent. I know that's not always happening in our classrooms right now, not always happening in our districts, and not always happening in our tools. But as developers, we have a special responsibility to make sure that our school districts are aware of those table stakes and those rules of the road. So, just naming that: navigating it means affirmative consent if students are using these tools and are

using large language models. So those are just some of the things I would do if I was sitting in the seat of a developer right now. I'm always going to feel like I'm sitting in the seat of a developer, because it's very hard for me not to think as a developer.
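Those age and consent rules of the road translate almost directly into code. Here is a minimal sketch of an age-and-consent gate for LLM-backed features, following the thresholds Erin cites (no hosted LLM access under 13, documented parental consent from 13 to 18); the class and function names are hypothetical, not drawn from any product mentioned in this episode.

```python
from dataclasses import dataclass

@dataclass
class Student:
    age: int
    has_parental_consent: bool = False  # affirmative, documented consent on file

def may_use_llm_features(student: Student) -> bool:
    """Gate LLM-backed features on age and affirmative parental consent."""
    if student.age < 13:
        return False  # under 13: no hosted LLM access
    if student.age < 18:
        return student.has_parental_consent  # 13 to 18: consent required
    return True  # 18 and over: no extra gate

# A 12-year-old is always blocked; a 15-year-old needs recorded consent.
assert not may_use_llm_features(Student(age=12, has_parental_consent=True))
assert not may_use_llm_features(Student(age=15))
assert may_use_llm_features(Student(age=15, has_parental_consent=True))
```

A check like this belongs server-side, in front of every LLM call, so a client can never bypass it.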

Ben Kornell

That's what makes you such a special advocate for the edtech sector: you not only advocate for kids and families, especially those who often don't have an advocate in the room, you also understand from the developer community what the needs are, the practices and principles, so that we can actually build the products that meet our intention, which is to transform outcomes for all kids. It's

so inspiring to talk to you. You are truly a bridge builder, and you inspire us at Edtech Insiders. If people want to find out more about this particular initiative, as well as the broader work of InnovateEDU, what are the best resources or places for them to go?

Erin Mote

Yeah, so if you want to find out about the SIIA principles, you can just go to the SIIA website; they are there, and you can explore what those seven principles are. And I know there are also some great use cases from edtech companies right now, like AllHere and others who are doing this work.

And you can really learn from models in the field. Our friends at Renaissance and Pearson are doing some incredible work; GoGuardian is another one that's really thinking about this in a proactive way. So learn from your peers, and go to the SIIA website to navigate to those

principles. If you want to learn about the EdSAFE AI Alliance and the EdSAFE AI framework, which, again, I think is going to be the dominant policy framework in our space, you can go to edsafeai.org. You can also go to innovateedunyc.org and click through to get there. If you go to edsafeai.org, you'll also be able to learn about and see some of the resources that some of our steering committee partners are developing, folks

like CoSN, and folks who are really trying to meet the needs, and sprinting to meet the needs, of the field right now, including our friends at ISTE. And then the last thing I'll say is the policy lab work, where we're working directly with 12 districts over the course of the next year to build what I call a policy stack, a little reference to a tech

stack there: a policy stack, built in an open science way, so that districts across the country can learn from each other and have some starter dough as they're working on developing their policies around AI literacy, or around acceptable use, or, frankly, around how to navigate some of the state-level privacy laws that are going to intersect with

development. Here, we've deliberately picked 12 districts that are geographically diverse, that are politically diverse, and that have a diverse set of student populations and a diverse set of use cases. Again, look at me just going back to my use cases and the development principles at the heart of the work we do at InnovateEDU. So follow that work, because I think it's going to be like v1, v2, v3, v4. I mean, I

think it'll be like v50 over the next year, because the environment is changing so fast, and districts are going to need to respond to what's now, what's next, and what we can't even imagine.

Alexander Sarlin

Districts and edtech companies, as well as the policy plans: it's going to be really interesting. Thanks so much, Erin Mote from InnovateEDU, part of SIIA's steering committee and the group of amazing edtech companies who just put out the principles for the future of AI in education. Thanks so much for being here with us on Edtech Insiders. Thanks. Yeah.

Ben Kornell

Well, that wraps our show. Thanks so much to Erin Mote for joining us today. Thank you for listening. Remember to check us out on the podcast, the newsletter, and our events; you can find links to all of that at edtechinsiders.org. If it happens in edtech, you'll hear it from me, from Alex, and from our entire team at Edtech Insiders. So great to have you with us this episode. Bye, everyone.

Alexander Sarlin

Thanks for listening to this episode of Edtech Insiders. If you liked the podcast, remember to rate it and share it with others in the edtech community. For those who want even more Edtech Insiders, subscribe to the free Edtech Insiders newsletter on Substack.
