Welcome to Edtech Insiders, where we speak with founders, operators, investors, and thought leaders in the education technology industry and report on cutting-edge news from this fast-evolving field around the globe. From AI to XR, from K-12 to L&D, you'll find everything you need here on Edtech Insiders. And if you like the podcast, please give us a rating and a review so others can find it more easily.
All right. Hi, everyone. It is the Week in Edtech. We are into November, and we are kicking off with a bang; nothing like an executive order, Alex, to get things started. We are going to break down today the executive order around AI coming out of the Biden administration. We'll build up from the bottom: What does it mean for AI? What does it mean for data privacy? What does it mean for the broader tech ecosystem? But then also, what does this mean for education and edtech? Before we dive too far into that, what's coming up on the pod?
Yeah, so as I mentioned last week, we have talked to Anurupa Ganguly from Prisms VR, and that episode is coming up very shortly. The episode out this week is with a really interesting company out of Singapore that does this really unusual and interesting take on AI tutoring. It's called Noodle Factory, a really cool husband-and-wife edtech couple that have put together this very interesting platform. And they're really at the heart of Asian edtech, which is something we've been thinking a lot about recently. So I highly recommend that one. We also have an episode coming up with Shiren Vijiasingam from Instructure. He's the chief product officer of Instructure and one of the small group of people who has been really leading the charge on their AI platform. So all sorts of cool things happening there.
Yeah. And on the event side, we have coming up on Wednesday the Stanford Accelerating EdTech Impact conference that we're co-hosting with the Stanford Accelerator for Learning. We're gonna have 350 of our best friends: about a third are students, about a third are faculty and professors from Stanford, and a third are members of the edtech community, whether in companies or in schools and districts. It should be a real blast. And then we also have a follow-up from our online AI + EDU conference, where we'll be sharing some of the insights and some of the videos. So for those of you who can't normally make it to an in-person event, be sure to check that out.

Yeah, speaking of videos, this has been a great forcing function for us: we're actually finally launching our Edtech Insiders YouTube channel this week. And we're kicking it off with a playlist of all of those amazing videos from the 45 speakers we had at the AI + EDU conference. But we're also going to try to do a lot more video podcasting and various types of video work. So that should be a really interesting and cool expansion of what you've seen from us; keep an eye out and check out our YouTube channel.

Well, we're excited to have you there. Last, I would say, we'll have an interview at the end of the pod today with Christian Byza from Learn XYZ. Check out his crossover: instead of edtech, think learning tech, a cool app that basically allows curiosity to drive the learner. So Learn XYZ coming up soon. All right, we're gonna dive in. So last week we had what was like a slow-motion explosion in terms of AI
regulation. It was one of those where, at the beginning, it was like, okay, this seems relatively innocuous. And then as you started reading the more detailed documents and, you know, sifting through the different interpretations and takes, you started realizing, like, whoa, this is a potentially big, landscape-shifting moment. What was your initial reaction when you heard of the executive order? And, at a high level, what were your initial takeaways? And then we'll dive into it, Alex. Yeah.
So at the highest level, you know, we've been talking on this podcast for quite a while about how Europe and China have really been, I consider them, pretty far ahead of the US in terms of their public stance on AI regulation, giving a little bit of an idea of where the laws may go: privacy law, protections of data, and things like that. China did this watermarking law, which we've talked about a lot, and we just didn't see anything like that coming at all from the US. So my first instinct was, okay, this is a 110-page document; it covers a lot of different areas of AI.

And I think, and you could sort of hear this in the way Biden talked about it when he put it out, there was a little bit of catching up, like, we're getting back on the horse here; we do want to be a global leader in AI policy; we don't want to be totally behind the eight ball on this or let companies run rampant over us; we do want to have some really clear stances. And I think that was the overview. When you actually dig into it, I'm not sure that there is that much actual regulation in there, as much as a lot of directional telling all the different departments in the federal government how they should start regulating, should start putting some things down on paper, some ideas and some standards and some laws. So it's almost like a prequel to the actual regulation.
Yeah, I feel like the executive order was like, hey, everyone, we need to get some regulations, let's do it. Yes. I also, just for people to zoom all the way back: there's this profound sense of missed opportunity with social media regulation, the sense that social media has run amok and that many of the negative consequences were known in the early days and weren't properly managed. And then you have crypto, which was kind of the web three movement; there's been a lot more conservative thinking about crypto. And if you look at the way crypto has taken off in other countries, in the US the regulatory concerns have been around consumer-protection-type issues, and not really about the underlying technology and making it safe. And of course, the champion for crypto, Sam Bankman-Fried, was just convicted recently. So there's the sense that this is another wave, and that this AI wave is potentially profoundly important and impactful, not just on businesses and commerce, but on human beings and society in general.
And so it does feel like they've ramped up the rhetoric to say, okay, we are going to step up, we are going to do some regulations. I will also say, the order itself, and we'll get into it, feels like a patchwork quilt made by a thousand different interest groups; whether they're special interest groups or not, they're interest groups. And so it feels a little bit clunky, in that every section has a little bit of a different flavor. And ultimately, it calls on all agencies of the government to essentially have an AI plan and an AI policy, which also seems a little overwhelming: if every single agency is going to come up with their own policy, how are we going to navigate that coherently? So let's start first with the AI portion of it, and then we'll kind of ladder our way up.
So just to get started: in Section 3 they have the definitions, and I always love to see how people even define artificial intelligence. They define it as "a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. Artificial intelligence systems use machine- and human-based inputs to perceive real and virtual environments; abstract such perceptions into models through analysis in an automated manner; and use model inference to formulate options for information or action." It's going to be a very boring podcast if we read it line by line, I will say. If you were having a hard time following that, I was having a hard time following it, and I've read it many times.

Basically, this definition is so broad. Any machine-based system that can make predictions, recommendations, or decisions? Are you saying, like, Yelp is AI, or that Google Search is AI? It does go in later and define which models, at what thresholds, have to report, with somewhat arbitrary cutoffs. But I actually think there's a concern that how they define AI is so broad that it could basically mean any software, because everyone I know that's building software has the ability to do, you know, some even rudimentary level of predictions, recommendations, decisions, advanced analysis that is setting the user up, or maybe the developer up, to better deploy their software. So I'm really concerned with how broad the potential scope of this legislation could be if they stick with this definition of AI, which is basically all software.
Well, the definition that jumped out to me, and I think it helps fill in some of the gaps in the definition you're citing, which is very, very broad, is when they talk about this concept, I've never heard this phrase outside of this document, of "dual-use foundation models." And my guess is that they're defining AI broadly because nobody has their mind completely around it. There are all these pieces of this legislation where they say things like "in fields like healthcare, finance, education, housing, law, and transportation," and I think there's a little bit of a sense that they don't want to box themselves in and say, oh, only AI that uses this specific, exact thing, because they know it's going to change; it is already changing all the time. And I think it's meant to be, I'm giving them some credit here, a broad brush in terms of what AI is. But then they try to get pretty specific about these dual-use foundation models, which is all the big models that we've been talking about for this entire last year: it's GPT and PaLM and Llama and all of these things. And it feels like there's this dual layer of thinking here: how do we regulate the models that underlie everything, and we'll talk about some of the things they're gonna try to do, and then how do we regulate all the uses that are going to be built on top of these models, and even not on top of these models, on open source and other things? And yeah, it's a very broad brush.

And it's true, given that this is heading towards legal policy, I hear what you're saying. There is a case to be made, I can see lawsuits coming out, where people start talking about things like Netflix. Netflix has been a recommender system, which is AI-based, for decades. Is Netflix an AI company? Is it an AI tool? I don't think most people would consider it that, but under this definition, yes, for sure.
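To make that concrete, here's a deliberately trivial, hypothetical sketch: read literally, even a few lines of genre counting "make recommendations... influencing real or virtual environments" under the Section 3 definition. The catalog and titles below are invented for illustration.

```python
# Illustrative only: a "recommender" this trivial already appears to meet
# the EO's Section 3 definition of AI if you read it literally.
from collections import Counter

def recommend(watch_history: list, catalog: dict) -> list:
    """Suggest unwatched titles from the viewer's most-watched genre."""
    top_genre = Counter(catalog[t] for t in watch_history).most_common(1)[0][0]
    return [t for t, g in catalog.items()
            if g == top_genre and t not in watch_history]

catalog = {"A": "drama", "B": "drama", "C": "comedy", "D": "drama"}
print(recommend(["A", "B"], catalog))  # -> ['D']
```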
Yeah. Well, look at most assessment systems, or let's take Duolingo. Duolingo is actively using AI; I wouldn't consider them an AI-first company, I would consider them an edtech company. But if you're using software in the way that it's described in this document, you are very likely to be subject to some of these regulations. And some of them are very intense, though you'd have to be very large, and some of them are a little more lightweight. Let's take, for example, watermarking. The idea is that for anything AI has been used to generate, so if it's a video or a picture, or maybe even a little bit of text, you basically have the ability to say: this was generated by AI. But that is very binary. What if you're using Adobe Photoshop and you're just taking out the background? Do you need to watermark the entire thing? What if one pixel out of all of the pixels was generated by AI? Or what if one sentence out of an entire book was generated by AI? How are you watermarking this? And I would just say, in an education context, I think it is very common that the first thing that schools, universities, and employers want is disclosure of the use of AI. And that becomes really, really problematic given that AI is infused into everything. You might even have a product where you don't use anything that would qualify as AI, but someone else has used AI, and you need to represent that AI generated this. It just starts being incredibly complex and compounding on itself.
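One direction this could go, and this is a speculative sketch rather than anything the EO mandates, is per-component provenance metadata instead of a binary "made with AI" stamp, loosely in the spirit of standards like C2PA. The field names and model name below are hypothetical.

```python
# Speculative sketch of per-component AI disclosure; this is NOT the C2PA
# spec or an EO requirement, just an illustration of granular provenance.
import hashlib
import json
from datetime import datetime, timezone

def provenance_manifest(asset_bytes: bytes, components: list) -> str:
    """Record, per component of an asset, whether AI was involved."""
    return json.dumps({
        "asset_sha256": hashlib.sha256(asset_bytes).hexdigest(),
        "created": datetime.now(timezone.utc).isoformat(),
        "components": components,
    }, indent=2)

image = b"<raw image bytes would go here>"  # placeholder asset
print(provenance_manifest(image, [
    {"part": "background removal", "ai_generated": True,
     "model": "hypothetical-inpainting-v1"},  # hypothetical model name
    {"part": "foreground photo", "ai_generated": False},
]))
```

The point of a structure like this is that "one pixel was AI-edited" and "the whole image was AI-generated" become distinguishable claims, rather than collapsing into one meaningless label.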
You know, and especially in an education context, because we're already seeing people use AI in so many different ways within edtech. They're using it to create content, and that can mean the whole content, like writing a whole textbook or a whole graphic. It also can mean taking your existing corpus of material and asking AI to write summaries, or to create images, or to create test questions or flashcards based on it. And in that case, what are you disclosing? Is it that, oh, AI was used in the generation of this product? Or do you have to go line by line? I had an interesting conversation with my wife about this; she's a creative director who works with a lot of design apps and graphic apps. And she was saying, for one thing, this world has completely been changed by AI; people use Midjourney and DALL-E constantly to generate all sorts of images. Of course they do, right? Because it's virtually free, you can do anything you want with it, and it's not proprietary: when you generate an image through an AI, you can use it for something commercial. And so she was saying, everybody is using it for something, one element of an image, like you're saying, one equivalent of a pixel, maybe, right? One little character in the corner of something might be generated by AI or edited through AI. Does that mean you have to say that this entire image was generated by AI, with whatever ramifications that would bring? So it is a tricky thing. That said, you'll remember I've been a fan of the watermarking regulation for a long time, and for two reasons.
Right? I mean, I think one of the scariest things about AI, and this does come up in this document a couple of times, at least politically, is the concept of people not being able to know what's true or not, what's real or not real. They specifically talk about things coming from the government: what would happen if somebody faked a Biden speech about starting a war with Mexico? What does that mean for the world if they're not liable for doing that, if they can do that and it's just like, hey, I used AI, and I used ElevenLabs to make a voice, and who cares? You're creating an even more disinformation-, misinformation-, chaos-filled world than we've already had. So there's this worry about deepfakes, this worry about epistemological chaos, and watermarking is one way to at least begin to guard against that. The other thing that I think is important is knowing whether you're talking to a human or an AI: basically, do AIs have to inform you that they are AIs if it's a conversational bot? I don't know if I saw a line about that directly; it might be in there somewhere. But I think they're queuing up these government agencies, especially the security and cybersecurity agencies, to start to really make laws there.

And honestly, I think that's good. Because, yes, they were calling out social media; we've been talking about this for weeks; we're at a moment where 41 states are suing Meta at the same time for, like, destroying teens' lives. And that 1996 law, Section 230, we talk about it a lot, it's almost 30 years old now, basically created a world where all of these social media and e-commerce companies, and Google, could just do whatever they want: take over, never lose any lawsuits against the government, just really run rampant. And I think this is a proactive action to say: we do not want to wake up in 30 years and realize that the same companies, and maybe some new ones, just took over everything, and our lives are in utter chaos, and we just couldn't protect our people against it.
You know, I love your optimism on that. I'm gonna take it the other way. If you look at how this document was written, overall, it was written to protect the large players who've already taken over, which is OpenAI, and also Google. They raise the barrier to entry. A couple of other things to quickly mention: there's watermarking, and there's also the requirement to report if a foreign national is generating or using a model. And this is specifically an attack on open LLMs, because OpenAI can report that; they'll have the infrastructure to say, here are all my foreign nationals and visa holders working on the LLM. But for open LLMs, that's much harder. And then you also have these red-teaming processes, which are not outcome-based; they're more process-based: here's what you have to do, as infrastructure, to make sure that your model is safe. Well, open models by nature don't have that. And so, unless there's a red-teaming council that they're going to offer everybody, what you're going to end up having is a government approval process for many of the open LLMs that will essentially quash the open models. And coming back to the watermarking stuff, I think you're just going to end up having a byline on every picture: "this picture may or may not have been created with AI." It's going to be one of those meaningless legalese things that they post everywhere, because you don't know what was actually generated by AI or not, and it all just blends.
I don't think so. I don't think so, and I'll tell you why. I think the watermarking stuff, in this context of the federal government, I really think they're doing it in earnest, and even that foreign national stuff. You're right that from a business standpoint, if you're OpenAI or you're Google, you can say: okay, we have all the disclosures, here are the people who worked on this, and so on, we have all our paperwork in place. And yes, the Falcons of the world and the things based on Llama, and there are hundreds now on Hugging Face, can't necessarily say that. But there's also, I think, a pretty decent case to be made that if the open-source models truly win, right, if literally anybody can just grab an open-source model off of Hugging Face and use it to generate anything, with no laws or regulations that say creating that anything can be a problem, that is actually a path to chaos. And I'm not saying this as a protectionist piece for OpenAI or Google; I'm just saying they have to put some things in place to say we need some insight into what's happening in these systems, to be able to keep track of them.

I think that's a false argument, because with the open systems, you have access to see how the system works and to tune it however you want; the government could review every one of the open systems. The ones they can't review are the closed systems, like Google's Gemini and OpenAI's models, and that's why they're creating these regulations.

I don't think so. I don't think these open systems are that transparent.

They could certainly red-team them; they could create their own red teaming. But you could just fine-tune it. The open models are, by design, and this is what's interesting about them and what makes them innovative, incredibly easy to adapt and change. Grab Llama 2 off the internet and then say, yeah, but I want this and that. This regulation threatens to kill all of that.
I know there's a reason for that, I think. I mean, look, this is gonna happen, it's probably already happening, right? You're gonna have people download Llama 2 off the internet and then use it to train a model to be able to suck people's data off the internet and personalize a campaign to steal from them, sending them constant emails telling them that their family is in danger, and it'd be in the voice of their family. That's gonna happen; this really weird stuff is gonna happen.

But there's already laws for all of that.

No, there are not.

There's law for impersonating somebody else: that's fraud. What you're talking about is fraud.

With AI, though, you can take it in so many directions that nobody's ever planned for. That's what I think they're trying to protect against. I mean, they're talking about the NSA. I don't know, maybe I'm giving them too much credit here, but I think they're paranoid, because they've been talking to security experts about what could happen on the security side. Anything could happen; this is going to be nuts.
Yeah, I'd say, just stepping back: who are they listening to, and what are they most concerned about? They're concerned about democracy and misinformation. They're concerned about our national defense, and other people using our AI against us, or their AI against us. They're worried about cybersecurity, cyber attacks, and all of those things. I totally, wholeheartedly agree with you there. And generally, the move of government then is to squash innovation and put a bunch of regulations on it, and you can see that they're attempting to do so. The challenge is they also have a dual mandate: safety and innovation, right? They're trying to still foster innovation. And I think they're doing it in a poor way, in that they're trying to regulate this like the MPAA rates movies. The idea that you could take a model of a certain size, red-team it, and say, this one is good, does not acknowledge the fact that every model is constantly evolving. The only model that doesn't evolve is the one that, because of government regulation, they freeze in time and say, okay, you can look at this one. And what I think is going to naturally happen then is that the open models will go underground, and it will be a lot harder to regulate.

What they should have done, instead of process regulation, is outcome regulation: instead of mandating processes, mandating disclosures, mandating all the throughput items, there should essentially be a group of people testing all the models and giving warnings or shutting down things that are dangerous, dynamically. It should just be about outputs; government is very good at regulating outputs. But there's a way in which they've already kind of determined there's a pre-crime here, that somebody has done something wrong and now we're going to regulate you, and that's not actually what happened. I was listening to a podcast the other day, and they said this is like Minority Report: people are getting convicted of a crime they haven't committed yet. If you actually just stood up on, let's say, Hugging Face, evaluating every one of those models with red teaming constantly, continuously, that would be really effective.

Who would do that? Well, my view on all this is that, basically, Biden says every branch of the government needs to have, like, an AI team, and then they're gonna come up with regulations. And by the way, if you go to the ai.gov website, they're doing this talent search. It reminds me of the 80s, how they'd do talent searches where people would sing and dance; well, this one is for AI talent. They just need more AI talent in government. The idea that we're gonna have this patchwork quilt of agencies coming up with regs, I think, will make everyone pull their hair out. So eventually we're going to have, like, the SEC, but for AI. I think that's what's going to happen. And basically, that centralized agency should get inputs from the other agencies. And then my thought would be: it's not actual people red-teaming everything; you build models that red-team. You build AI to red-team AI.
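As a thought experiment, continuous "AI red-teaming AI" could look something like the loop below. This is a minimal sketch under stated assumptions: `query_model` is a hypothetical stub standing in for whatever chat-completion API an agency or lab would actually call, and the seed topics are invented.

```python
# Hypothetical sketch of continuous automated red-teaming: an attacker model
# proposes probes, the target model answers, and a judge model flags failures.
# `query_model` is a stand-in stub, not a real agency pipeline or vendor API.

def query_model(role: str, prompt: str) -> str:
    """Stub for a chat-completion call; swap in a real client in practice."""
    return f"[{role} response to: {prompt[:40]}...]"

def red_team_round(seed_topics: list) -> list:
    """Run one probe/answer/judge cycle per seed topic."""
    findings = []
    for topic in seed_topics:
        probe = query_model("attacker", f"Write an adversarial prompt about {topic}.")
        answer = query_model("target", probe)
        verdict = query_model("judge", f"Is this response unsafe? {answer}")
        findings.append({"topic": topic, "probe": probe, "verdict": verdict})
    return findings

# Scheduled continuously (cron, CI) rather than as a one-off sign-off,
# so the evaluation tracks models as they change.
for finding in red_team_round(["account takeover", "disinformation"]):
    print(finding["topic"], "->", finding["verdict"])
```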
So I get that. I'm trying to wear my innovation hat and my regulation hat and my safety hat at the same time as I read documents like this and think about them. And I hear you. I'm trying to channel the responsibility of a federal government that has watched, look, social media and web platforms, things like Facebook back in the 90s, did not seem like they were going to be a threat to all news; they didn't seem like they were going to be a threat to election law; they didn't seem like a threat to mental health. At the time, it was literally people posting pictures of what they were eating, or family updates; it was so early in that world. And they made this blanket mistake, I think it's a mistake, where they basically said: you know what, if we make these companies responsible for the content that's on them, it's going to totally put the onus on them to have to monitor everything, it'll be impossible to do, and we'll just lose the ability to have these marketplaces of ideas and these central town squares, whatever the metaphors were.
the metaphors were. But what they didn't take into account was that there's company I mean, the Amazon for covers, Facebook, and then Instagram and not Tik Tok for social media, Google for like information of all kinds as well as email, like, they didn't take into account that these companies are I mean, by nature, corporations, by nature are just ravenous, they're going to keep moving, they're going to keep finding new things to try and do when Facebook went into
newsfeed. It was because they were like, look, we need more content all the time to be able to keep people coming back all the time. Because that's how we make money. We're ad based. That's how we do it, we got to get people coming back, who cares. And they went into news.
And by all nature, the entire news business was like, you really shouldn't do this, this is a very bad idea, not just for themselves, they're like Facebook, you are not going to be happy with this, that you are going to own the news like, it's, you're not going to want to do this, you're going to be the sensors for the world. And they did it and they fell into
every trap you can imagine. And I think what they're trying to do with these laws is get ahead of an unbelievably unknowable future situation, where bad actors can do more than they've ever been able to do before, literally more than ever, where foreign governments will be able to do more than they've ever been able to do before, where it's really a dangerous situation. So they're trying to be innovative. They're trying to say: hey, look, let's recruit great AI talent from all over the world, let's train people for the next set of jobs, let's get the NSF involved. But I think that if you really were to corner some of the core writers of this executive order, the thing they're really worried about is losing control: is this technology really going to mess up the world, mostly because of the sort of American hyper-capitalist engine behind it? And yes, I know the buzz in the Valley is, look, this is actually good; you see Microsoft and Google reps come out and say, this is great, this is really important that we're doing this. And that's very suspicious, right? They're not angry yet. But at the same time, the suggestion you just made, which I know is the right tech solution, right, dynamically monitor it because it changes all the time, red-team these models as they learn and as they change, that's the same thing Facebook has been trying to do when they say, hey, you're not allowed to have horrific things on Facebook, horrible violence, horrible pornography, and they are literally hiring tens of thousands of people all over the world, who are then burning out and getting suicidal, to dynamically monitor this stuff. That's where we're going with this, I really do think so. And I'm a huge optimist on AI, don't get me wrong. But this stuff is definitely going to happen, and the federal government's role is to get to the foundation of it; they have to prepare to really lock it down. So all this weird foreign national stuff, yeah, it's not great for open source, but it's also to protect against some really strange people in countries all over the world doing stuff you can't imagine, the equivalent of the Russian misinformation campaign that arguably changed our presidential election. You've got to get ahead of this, you know?
I hear you. And before we pivot to what this means for education, I guess, when I add it all up: the definition is so broad that basically they're regulating all software, which, by the way, has never had a regulatory agency. So number one, you're regulating all software, because all software inevitably will have at least a component of AI in it. Number two, you're creating throughput regulations, not output regulations. And those throughput regulations, which are about the process, how it's made, how you report, all those things, favor centralized, controlled, large incumbent companies over decentralized open source. And I think there's a fear of the open-source stuff, and that is an early reaction. Third, foreign companies are going to do what foreign companies are going to do; anything that we put in this regulation here does not affect what the large language models do elsewhere in the world. And I think there's some really naive thinking that the LLM industry, the AI industry, is going to be firmly planted on US shores and that our regulations are going to set the rest of the world's regulations. I actually think that in many ways, wherever we close a door or draw a line, that opens things up for another country to say: hey, we don't have that line, we don't close that door. And so I think there's something to having the companies on your own soil, being able to look them in the face and work with them through dynamic regulations, rather than overreach.

And then my bigger concern here is: we're in, like, the first inning of AI, and already we've got a kind of hodgepodge executive order, which, by the way, if there's a new president, could all change, could all go away. Number two, it's basically calling on every agency, so it's incoherent; it's going to be very hard to manage. And then third, if I'm sitting with my school board hat on, it's just going to make me want to do less, lean less on AI; it's gonna make me pull back even more, because the government is telling me: this is scary, we have to regulate it, we've got to go slow. And that's just not what is happening in the world of kids; the world of young learners is accelerating really, really fast. So that's probably a good transition to the education side. There's both: what does this mean for the education sector? And I'm curious, as you put on your corporate hat, higher ed hat, and K-12 hat, how do you think this executive order plays into the space? And then also, there's a lot here in the executive order around what the Education Department needs to do, and what the education industry's approach to this will essentially be. So let's start with: how do you think the EO is going to impact AI in education, generally speaking?
Sure. So, a couple of quick things that they glance off of in this document about education. They talk about, quote, "to foster a diverse AI-ready workforce," the director of the National Science Foundation shall prioritize available resources to support AI-related education and AI-related workforce development through existing programs. So there's this idea of staying competitive as a nation, of leaning into AI education and the AI jobs of the future, and a recognition that that's there. The whole document is also infused with the language of civil rights; it comes up consistently, this idea that this should be equity-based; they don't want it to be only for the haves and not the have-nots. They also talk about a program to identify and attract top talent in AI at universities, overseas, in research, and in the private sector. There's this very overt thinking, and I think this goes to your idea of the 80s talent search: they're basically saying, this is the technology of the future; yes, we have an advantage on this, but we have to stay competitive with China, to a lesser extent with Russia, to a lesser extent with Europe, because they're also going to be highly regulated, and who knows where else; maybe Brazil becomes a huge AI hotspot, or Colombia. But there is a feeling of: okay, no matter how much chill we may put on this from a regulation side, we also know it's the future; we have to stay on top of it. There's an education lens there. They call on the Secretary of Commerce and the Secretary of Education to expand education and training opportunities to provide pathways to AI occupations. So you're gonna see things like, I don't even know exactly how this is gonna pan out, but higher ed will feel like they have a clear mandate, and there may be money there, federal government funds or various types of programs for states or for higher ed to develop AI programs. That's good; that should happen. There's also the idea of consumer protection in the context of education, as well as finance, healthcare, and all sorts of other things. So education comes up in a few different ways that are important but not really about our formal education system; it's more about AI training and AI as it relates to kids generally, which is a big deal. I'd love to hear your thoughts. Yeah.
I mean, this idea of an AI toolkit is a really central part for K-12, at least. Yes, exactly. And this idea of the Department of Ed creating that AI toolkit for educators I actually think is one of the most coherent parts of the executive order, because we're just navigating this incredible gray area. And we know that for kids to be successful, they need to understand how to leverage AI: how does it work? How does it represent things? How can I be metacognitive with it? But also, how can I leverage tools with AI to better build skills and do the jobs of the future? So actually, that was the part where I was like, yes, let's work with Secretary Cardona and get these things done. And some of the other incentives you talked about, it's always tricky with an executive order; there's no real budget authority for large budget items. But it does seem really in keeping with what schools are going to want to do anyway, and I could totally see an AI-readiness bill that could, you know, follow ESSER funding, for both higher ed and K-12, as an opportunity for us to invest in STEM education. That said, I think the main implications here that I saw were really around some of the privacy provisions, and this idea of a privacy review. Basically, if you're a large language model and you meet their threshold, and we did a little math before the show, basically whatever OpenAI's next model is, and whatever Google's next model is, maybe even Gemini, they would have to be regulated by this executive order and would have to do reporting. They'd have to do a privacy assessment regularly.
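For context on "their threshold": the EO sets its reporting trigger at models trained with more than 10^26 operations. A rough way to sanity-check where a model lands, and roughly the kind of math we did before the show, is the common ~6 × parameters × training-tokens FLOP estimate. The model sizes below are made-up illustrations, not disclosed figures for any real system.

```python
# Back-of-envelope check against the EO's 1e26-operation reporting threshold,
# using the standard ~6 * parameters * training-tokens FLOP heuristic.
# Both "models" below are hypothetical, not actual vendor numbers.
THRESHOLD_FLOPS = 1e26

def training_flops(params: float, tokens: float) -> float:
    """Approximate total training compute for a dense transformer."""
    return 6 * params * tokens

for name, params, tokens in [
    ("hypothetical-70B",  70e9,  2e12),   # 70B params, 2T tokens
    ("hypothetical-1.8T", 1.8e12, 15e12), # 1.8T params, 15T tokens
]:
    flops = training_flops(params, tokens)
    print(f"{name}: {flops:.1e} FLOPs -> reportable: {flops >= THRESHOLD_FLOPS}")
```

On this heuristic, today's widely deployed open models land orders of magnitude below the line, which is why the threshold mostly implicates whatever the frontier labs train next.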
And then they're also going to be obligated to do privacy assessments on vendors using their platform. So it raises question number one: are the LLMs liable for use cases that people build on top of them? Or are the use cases built on top liable, rather than the LLMs? That is a very interesting question here, though again it's talking more about process: red teaming and privacy reviews. But let's say OpenAI says, okay, we want to do a privacy review on this K-12 thing, and it's subject to COPPA. If that company hasn't signed DPAs, you know, data privacy agreements, with districts, are we comfortable with this company? And by the way, there's this nuance with privacy reviews: are they reviewing just how I'm using your model, or are they reviewing all the use cases, given that AI is infused in everything? So I think we are at a real advantage in edtech, because we've had to use de-identified student data for a long, long time. And by the way, if you're listening to this and you're an edtech person who's been doing de-identified student data analysis, you may want to jump to OpenAI or Google's Gemini team or one of those companies, because they're going to have to do that with the general population and segregate identifiable data better in the LLMs. But it also means that the companies in the education space that are really good at this are gonna get the green light on these privacy reviews. And if you're not demonstrating that, I could see the LLMs saying: we don't want to approve you as a third-party user. I think that is going to be a really, really important area to watch, and it's something school district leaders and universities all have a strong sense of: data privacy, avoiding data breaches, fair use, and all this kind of stuff. So I think that's one for us to all really watch. It's also interesting to think about which agency is going to govern that, because some of this really falls into cybersecurity land for the general adult population, but FERPA, for example, is mainly Department of Education. So, very, very fascinating on the privacy front.
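Since de-identification came up as edtech's home-turf advantage, here's a minimal, hypothetical sketch of scrubbing a student record before it ever reaches a third-party model API. The field names and salting scheme are illustrative only, not a compliance recipe or any district's actual pipeline.

```python
# Minimal illustrative sketch of de-identifying a student record before
# sending anything to an external LLM API; NOT a FERPA/COPPA compliance tool.
import hashlib
import re

SALT = "rotate-me-per-tenant"  # assumption: a per-district secret salt

def pseudonym(value: str) -> str:
    """Stable, salted pseudonym so records stay linkable without names."""
    return "student_" + hashlib.sha256((SALT + value).encode()).hexdigest()[:8]

def deidentify(record: dict) -> dict:
    clean = dict(record)
    clean["name"] = pseudonym(record["name"])
    clean["email"] = pseudonym(record["email"])
    # Also strip stray email addresses that leaked into free-text fields.
    clean["notes"] = re.sub(r"\S+@\S+", "[redacted-email]", record["notes"])
    return clean

print(deidentify({
    "name": "Jane Doe",
    "email": "jane@example.org",
    "notes": "Emailed jane@example.org about missing homework.",
}))
```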
I love your point that edtech has had to do this for a long time, so there may be some really interesting synergies there. And, you know, when we first talked, and you've said this for years, health and education have a lot in common. I think both health tech and edtech, to some extent, have had a lot of practice doing de-identified, aggregated, pretty specific, careful, nuanced work with security and data for prioritized populations or for very sensitive data. I think that will be an advantage. I mean, the point you're making, I think, is incredibly important: whether the LLMs themselves are on the hook for the downstream effects of what happens when people put the LLM into a product, into the market, or whether the middleman, so to speak, or the company itself, is liable, is one of the biggest questions here. And I don't think it's answered here. You're talking about it as if the LLMs aren't going to inherit it, and I think in some ways that's possible; I'm not really sure.

Yeah, I don't know. What do you think it should be, Alex?

This is, I think, one of the trickiest things
about this, right? Because if you go to that same metaphor of the 1996 social media platforms law and say, hey, Facebook was not responsible for what was happening on Facebook, that had all these negative effects: it allowed them to keep innovating and moving, but also to keep putting out features whose downstream effects they weren't responsible for, and we've gotten to this crazy place based on that. You might argue that the LLMs should be responsible for the downstream effects of their product, and that if 100 edtech companies go to OpenAI and say, we want to use ChatGPT in an educational context, and this is how we want to use it, and this is how it's going to work, and OpenAI signs off on something which is then used in some really problematic way, there's an argument to be made that they are a company with huge resources, getting unbelievable amounts of investment, and maybe they should have to take on that huge regulatory burden, all the lawyers and all the reviews and all the redlining and everything, rather than each of these smaller companies that is using their APIs. That makes sense on one level. On the other side, you might say, well, the problem with that is this is a company that is going to put out models that are used in every context you can imagine, which all have, as you say, separate regulatory bodies, separate agencies regulating them. You're gonna have drug discovery companies, diabetes companies, companies doing things for children outside of an educational context, right, like chatbots for entertainment. Every one of these has totally different laws. And so it seems a little nuts to ask one company to oversee all downstream uses in every context. So I think the place it's going to land eventually, I might be wrong, is the middle layer: I don't think the LLM companies themselves can truly be responsible for
everything all the way down. But I think the thinking in this document is, you know, they call them the dual-use foundation models; they're the heart of the matter. And I think what they're trying to do is say: from now on, we haven't done this in the past, but from now on, with new models, we as the federal government want to be deeply in the conversation about the foundation model, making sure it's not biased and making sure it handles privacy, at least as a foundation, in some smart ways. Then, from there, we sign off on it; it's not dynamic, as you're saying, but we sign off on GPT-4.3, whatever it is, and from then on, OpenAI is protected, because they've gotten their model checked. But everybody who goes and takes those APIs, and then fine-tunes it and trains it and maybe puts new data in it, because they will, and then uses it in a particular context, I think those companies are going to have to inherit some of the regulation. There might not be that much regulation, because the government just doesn't have that many resources to enforce it or to impose consequences. But I think there are going to be two layers: the regulation on the foundation models, as I call them, and then the regulation on the downstream use cases in the context of all these different things: military, transportation, energy, whatever. That's my guess.
I mean, this is exactly why the EO, as written, was poorly conceived. Had they just narrowed it and said, this is about the LLMs, that's what this EO is about, and then done a sequence of executive orders, the next one about using artificial intelligence in the military, the next one about using artificial intelligence somewhere else. Instead, it's so broad-brush, it's so comprehensive, because, like you said at the beginning, they're trying not to box themselves into a corner. And what is interesting is they do call out biological risk as a separate category; it's very interesting to read their fears of AI-enabled bio threats. But I think that actually gives you a sense of what this EO could have been, had they just taken it a few building blocks at a time rather than trying to eat the whole thing at once. And I get it: they're trying to create an umbrella and then mandate each department.

Right, exactly. That's what I'm thinking: this is meant to be, for now, what we think we're going to do for regulating LLMs; we're going to have this red-teaming thing; we're gonna get involved when a model reaches a certain size, which it's close to. And then they say, okay, and by the way, the NSF and the SEC and the Department of Ed and the Department of Defense and everybody else is going to come up with stuff later.
I just would have said, I think they could have made it a lot shorter and a lot less confusing if they'd said: this technology is going to be transformational; we're excited about the upside, but we're worried about the risk. And then the other thing is to paint a picture where you basically have government-approved or -sanctioned models, and then you have non-approved, non-sanctioned models. That's totally fine. By the way, the FDA approves some things and doesn't approve others, or it's out of their jurisdiction, and you can try it or not; if you think it's going to help you, fine. And by the way, being in education, our view is that actually having a government-approved model would be helpful to us, because then we could say: hey, school, hey, university, we're using a government-approved model. But they kind of go the way of, I think, overreaching in terms of regulating how these models are built and what is in the models, rather than saying: here's the outcome, and we're going to have a couple that go through this extra layer of rigor. And by the way, if you look at how Europe has done it, I think Europe has governed some of these things better, in part because they're all different countries, so they realize: we can't make the hyperlocal law; we're going to have more of a framework of priorities, and then we're going to certify things as good, and basically create some guardrails on security. And if you look at GDPR, for example, it's pretty much been a success in terms of how they've set rules and regulations, but it's been broad enough that anyone can operate and do business in Europe. I feel like we are missing the opportunity to do the same thing. All right, well, we are running low on time. Alex, what are your final thoughts and takeaways on the EO, and maybe even some recommendations for President Biden and gang?
Yeah, so my last thought, which I think is worth digging into if you're listening to this and trying to really anticipate what's going to happen in education. First off, you have to remember that the Department of Education is very limited in what it actually can mandate in the US; education is very state-centered. So when you get to the education section here, there are really two things that they say are coming. One is they say the Secretary of Education, that's Cardona, right, shall within a year develop, quote unquote, "resources, policies and guidance regarding AI." No laws there; it's policies, it's sort of recommendations and guidance. And the things they say it's about, right, are safe, responsible, and non-discriminatory uses of AI, especially for vulnerable and underserved communities, and in consultation with stakeholders. Very high-level language. But if you follow what this Department of Ed has been doing, they care a lot about underserved communities, they care a lot about equity, so I actually read that as: you're going to see something from the Department of Ed basically saying, here are some ways we recommend using AI in a way that is equitable. I really think that's going to be the core of what comes out there. I don't think it's going to be way deep into the weeds about privacy, or way deep into the weeds about pedagogy, or anything like that. I think it's going to be really about: here's how to make sure you are not using AI in a way that is unsafe or, especially, discriminatory. So I think we should really keep that in mind. That's one of the two deliverables. The other, as you've been saying, is an AI toolkit.
And that AI toolkit is specifically coming out of the Office of Educational Technology's AI report that came out in May. We talked to Kristina Ishmael, the deputy director of the Office of Educational Technology, about that report right before it came out. And if you look at it, that report is probably the best document we have as an education community about where things are really gonna go. And yes, it's still pretty high-level; it's definitely not technical. But it's really about educators. Most of the main things in that document are about that: they have, you know, Recommendation 5, inform and involve educators; Foundation 1, center people: parents, educators, and students. They talk about one of their guiding questions being: what's an education system that leverages automation to advance learning while protecting and centering human agency? I think what that toolkit is really going to be about is how to do all sorts of amazing things with AI and automation while keeping humans in the loop, a concept that report uses a lot, and while maintaining human agency.

And all of this is to say: these still aren't laws; these are policies, recommendations. But I think if you're an edtech company that wants to be on the right side of AI: one, you probably should listen to what you've been saying, Ben, that the big models are probably going to have an advantage, at least for a little while, because they're going to get that stamp of approval from the government. Two, if the laws come down where the companies are responsible, they should start thinking about their own policies about privacy, and certainly equity. And three, I think the government sees its role in education as making sure this AI revolution doesn't leave people behind. I think that's a bigger part of this than I expected, for sure, and I think it's actually going to be a big, big part of what comes out of the Department of Ed. That's my overall takeaway.
I love it. As I sit back, I think, first, it's important to remember it's an executive order, which is changeable, and, as you mentioned, it's guidelines, guidance, policy, and so on. It does give us a sense of direction and where the administration is. And I've also seen that the administration is bringing in people from lots of different sectors related to AI to talk to them and understand this really fast-moving space. So in general, it's great to have this, almost as a look inside the collective hive brain of the government. My thought is, I'd love to just skip to the end of the book, which is: we have an AI agency that's creating coherent rules and regulations, that is working cross-functionally with other departments to make sure that their specific rules, health rules or something like that, naturally fold into the overarching regulatory framework, and that we're not creating regulations intended for the big LLMs that, by mistake, squash all the innovation of open LLMs or subject-specific LLMs. We mentioned briefly that, with the thresholds they talk about, OpenAI doesn't even reach those yet; so something like Bloomberg's AI system, or a proprietary AI system, wouldn't be regulated as an LLM in this case. So I would love to see us run forward with that. And I think the risk is: you put the wrong person in charge of that agency, and you have the wrong governmental priorities; that could be really, really scary and dangerous. Potentially, all the dangers of AI that we could foresee could be, either through incompetence, unleashed on the world, or, through actual intention, utilized against the American people. I think that that is
risky. But I think it's a lot easier to spot that risk when it's concentrated in a single branch or a single department than when it's spread out among many. The last thing I'd say is to our developers out there, people building in edtech: I think the main thing is to really, really understand that safety, security, privacy, fine-tuning your model to eliminate negative outcomes, is essential. Even though on your product roadmap it might be way more fun to focus on new features and capabilities, getting that kind of safety, security, and data protection layer really, really right will save you so much pain in the long run. And as these new regulations come up, it's hard to say whether you'll be punished for past development decisions, it's hard to know, but definitely going forward, we're going to see more regulation. So with that, I think we should probably wrap and head to our interview. Anything else you want to add before we do that, Alex?
I hope we added some value here. It's a pretty high-level document; you have a lot of really good insights on it, you know, Ben. But I like your point about skipping to the end of the book. I thought we'd have more concrete information from this; I think it's more sort of reading the tea leaves about where things will actually go. And yeah, I think we should wrap. There were a couple of interesting funding rounds this week, but we will cover them in the newsletter. There's just a lot to chew on and think about. But I think edtech companies are actually in an OK space when it comes to this. You mentioned this sort of chilling effect, this idea that AI is a scary place and we should be going slow; I didn't read that as much as you did. I felt like it's saying: we acknowledge this is the future; that's why we rushed to get this executive order out, to tell the world that we're on top of it and we know it's coming, and now we're starting to really dive in. It was sort of putting the entire US federal government on notice that they need to think about AI very seriously. That's my main read. Yeah, awesome.

Well, as these policies become practices and new developments arrive, we will keep you updated here on Edtech Insiders, especially with the lens of how this impacts education and education technology, because if it happens in edtech, you'll hear about it here on Edtech Insiders. So let's transition to the interview: Christian Byza coming right up.

Hello, everyone, I am so excited to have an inspiration, a friend, and an entrepreneur, Christian Byza
from Learn XYZ. Thanks so much for joining Edtech Insiders. Thanks so much for having me. We're gonna start off our conversation with just a little bit of backstory: tell us where you come from, your journey, and how you ended up coming into the education, learning, and AI space. Yeah, thank you. So, you know, you will hear this after, like, my first 10 words: I'm German. And I actually had the privilege to go through the German education system. I always like to tell the story that, I mean, I dropped out of college, basically after the bachelor's, undergrad essentially, right, and my wife went all the way from kindergarten to PhD without spending a single dollar on education. You have to buy the bus ticket, and that's about it; that's how the German system works. And it's fabulous. It's just something that should be like this everywhere in the world. Agreed. It's something that really inspired me in hindsight; when I was there, you know, I kind of took it for granted. But then when I moved to the US about 10 years ago, I realized that it's not a given that everyone has access to free education. And so a big motive that we put into our product, why we do what we do, is that we want to bring education to everyone. Our product today has users from more than 160 countries, and they can access some kind of education, I guess we'll talk about this in a second as well, in a really free format. And I'm really proud of that.

And tell us a little bit about Learn XYZ and how you actually came up with the idea for the product. Yeah, so I was working at LinkedIn
Yeah, so I was working at LinkedIn during the day, essentially. And I had actually started a different learning company before, when I still lived in Germany: we started a seminar series about digital and online marketing. It's called OMR, and it's now the largest conference in the world in that space, with more than 75,000 attendees this year. It was a really big event. But learning was ingrained into that product, because we taught people. So when I was working at LinkedIn, a good friend called me and said, Christian, what are you doing at LinkedIn? You should really start a company. You're into this learning stuff; you should teach people about crypto and web3. So yes, I'm another one of those who went from crypto into AI. But what we thought was that there needs to be a place where you can learn these things without getting scammed, just a trusted place, essentially, to really understand what web3 and crypto and blockchain are really about, and what the benefits are. And I still believe in many of the benefits. Unfortunately, many of the people in that ecosystem are not the nice players. We just heard the news yesterday that someone very famous was, not sentenced, but, what is it called? Convicted, that's the word. And a lot of our users were the same way: they didn't come to learn, they came to try to scam us. The second issue, obviously, was that the market was crashing, and the hypothesis of web3 or crypto going mainstream didn't come through.
And the third one was content scaling, which is a big problem for learning platforms. Everyone listening probably knows that content creation is hard and expensive. And because we didn't like our audience so much, we started to write courses about other topics. About a year ago we wrote a course about GPT, and courses about quantum computing and self-driving cars, these kinds of future topics. And then we had this issue even more squarely in front of us: how do we scale this? How can we create ten courses a week rather than two? And when ChatGPT was released over the holidays last year, we thought, oh, maybe we can just build a tool so we can create those ten courses a week. So we started building the tool. And in January this year, the tool was in a form where we could type something in, and we typed in "Chinese New Year." A course came out, and that course was so good that we realized: this is not a tool, this is the actual product. And so the product today, you can imagine it a bit like Duolingo. We like to say it's a fun app where you learn something; you don't learn languages, even though people can do that too. You learn anything you want. You literally just type in: What is Chinese New Year? Why is the sky blue? What is photosynthesis? Whatever comes to mind, you can type it in. But even more powerful, I think, is that you can get inspired by what your friends are learning. It's a social learning experience where you have a feed: you see what your friends are doing, and we promote content to you based on the topics you learn about.
And yeah, we launched the app publicly, what, six weeks ago now; we raised a seed round in between. There are more than 100,000 of these little "curiosities," as we like to call them, on the product. Actually, Ben, before we started recording you used a phrase that I really love and will steal from you: "curiosity engine." I'll use that going forward. So, more than 100,000 curiosities, and we just passed 10,000 downloads. It's still very, very early days for us, but we're just really excited about what's possible. We have a friendly helper, a mascot: we call her Lumi, our octopus, and she guides you through the product and humanizes the AI, in a sense. Lumi the octopus writes all the courses for us, and we're now not at ten courses a week but at thousands of courses per week.
That's really inspiring. One of the things that's interesting about Learn XYZ is that you make a distinction between learning and education. You mentioned you'd been doing an education company, but I've heard you say several times that it's not really about formal education, it's about learning: you can learn anything you like, and you want learning to feel casual and fun. And it does; I've been doing my Learn XYZ regularly and I really enjoy it. Tell us about that philosophy of education versus learning.
Yeah. So, I mean, what is learning, in the end? In the end, it's understanding some kind of context that is written down, summarized, and explained to you, so that you know more afterwards than you did before. And what is the path to get there? There are probably many different paths. We find this a little bit funny, because we are not academics in that sense. I mean, we went to college, and my co-founder has the same story: we're both teachers' kids, or our grandparents were teachers, so we both went through this education system. But we don't look at it as, okay, this is a method, and you have to recall, and you have to do these tests. If you package knowledge up in a way that's actually fun, so that it doesn't even feel like learning, you just get inspired, then you learn something. And even though we always like to say we're in the business of making you feel good about the time you spent, we think you can also learn something on Instagram and TikTok; there are learning videos in those places too, right? That's the approach we take: we picked up the short-form format, fun content, so that when you swipe through, your phone vibrates a little and there are little effects, and in that way you learn something without even knowing that you're learning it. That's the goal for me. And that's how I've always done it. I was a horrible student, but I love to dive deep into topics: the best possible setup for my camera here, or for lighting, or for woodworking, or making the best freaking latte art I possibly can. I'm still working on that one. I like to get into things very deeply, but it doesn't feel like learning, because you have so much fun in the process. That's much better, because, you know, "learning" has a negative connotation. And it's kind of funny, then, that our company's name is Learn XYZ, exactly.
I mean, it's a golden age for autodidacts right now, between YouTube and Wikipedia and everything else.

Oh, I'm so happy that I live now and not, like, 30 years ago. Exactly.

Yeah. I'd love to hear how you're thinking about the evolution of Learn XYZ. Where is it going? Now that you've got all of these users engaging with generated content and going down the rabbit hole, what does the business look like, and what does the future hold?

Yeah, so I heard one sentence from a competitor of ours that I really liked, which was:
"We raised money; that's why the business model is a future-me problem." But obviously it's not, right? It's more of a joke. We have two ways we want to monetize. The first one will be pure freemium features. I have kids, and my oldest one, my five-year-old, oftentimes at bedtime, when I would usually read a book, says: Can we learn something with Lumi? And then I ask what he wants to learn about, and he names some random fact about some dinosaur or Pokemon or whatever that I've never heard of. So we type that in, we create a course about it, and he has fun learning together with me. It's not that you shove your kid away and hand him your phone; it's more like bonding, using technology and learning together. So we think there's a great freemium use case right there, where you can say, okay, how old is your kid? Five years old? Okay, then the content is made appropriate for a five-year-old. That's a good freemium feature where you might spend a couple of bucks a month. But the more interesting part, our real strategy, is that we want to build a product that people love and use every day, and then monetize companies that use the same kind of learning, so that learning about cybersecurity, harassment trainings, and all the things a company might have suddenly feels more fun, because there are competitive things built into the app: you have points and leaderboards, and you see how your peers are doing.
And it's funny: we didn't think of this industry when we built this, because we really wanted to build a cool product that people use and love every day. And then all these companies came to us and said, hey, we love this; can we ingest the content we have here? We have these 300-page documents about topic XYZ; can you make little curiosities out of that content? And that's exactly where we want to go eventually. It's the second step, because first we have so many of these gamification items to build. It's kind of the Duolingo playbook, right? It's written out there, so we're taking the best parts of it and putting them into the product. But eventually we will get there. We're taking these conversations with companies, with schools, with universities. We even had a huge navy come to us, an army of, what, 15,000 soldiers. And they're like, we have to learn all these things, and the PowerPoints they create are so boring; we would love to use your app. So they send us example content. Ingesting content is part of our product anyway: we don't just use the language models that are out there, we basically run our own learning pipeline, where we take trusted content and then use language models to summarize the existing content pieces we have access to. And that's basically the product: you ingest whatever you have, a YouTube video, a PDF, whatever, and then we create the curiosities based on that material. That's also where a lot of critics say, oh, your app will only hallucinate, it's all incorrect facts. No, you can actually do a lot already, and it's getting better every day.
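Here is a minimal sketch of what a grounded ingestion pipeline like the one described could look like. Everything in it (the `call_llm` stub, the `Curiosity` shape, the naive chunking) is a hypothetical placeholder; Learn XYZ has not published its actual implementation.

```python
# Hypothetical sketch of a "trusted content" ingestion pipeline of the
# kind described here: chunk a source document, have a language model
# summarize each chunk grounded in that chunk (not in the model's own
# memory), and turn the summaries into bite-sized "curiosities".
# All names are placeholders, not Learn XYZ's actual API.

from dataclasses import dataclass

@dataclass
class Curiosity:
    title: str
    summary: str
    quiz_question: str

def call_llm(prompt: str) -> str:
    """Stand-in for a real LLM call; returns a canned reply so the sketch runs."""
    return f"(model reply to: {prompt[:40]}...)"

def chunk(text: str, size: int = 2000) -> list[str]:
    """Naive fixed-size chunking; a real pipeline would split on structure."""
    return [text[i:i + size] for i in range(0, len(text), size)]

def ingest(source_text: str) -> list[Curiosity]:
    cards = []
    for piece in chunk(source_text):
        # Grounding the prompt in the source chunk is the step that
        # answers the "it will only hallucinate" criticism.
        summary = call_llm(
            "Summarize the following passage in three friendly sentences, "
            f"using only facts stated in it:\n\n{piece}"
        )
        question = call_llm(
            f"Write one quiz question answerable from this summary:\n\n{summary}"
        )
        title = call_llm(f"Give this summary a short, fun title:\n\n{summary}")
        cards.append(Curiosity(title=title, summary=summary, quiz_question=question))
    return cards
```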
Yeah, I will just say two things, jumping in, that really strike me. One: I remember when Expensify came out, and their value prop was "expenses that don't suck," because everyone hates doing their expenses. But their monetization was aimed at the finance team, because people were finally doing their expenses. And "learning that doesn't suck" is really what you're going for; you've given me all these marketing headlines to that effect. And, you know, there are so many topics that are actually fascinating if you let curiosity drive, but when it's all a pre-programmed, linear, long-form track, it really sucks.
And the second thing I was thinking about: you are the first company I'd met where I really understood this idea of essentially a bot hierarchy, where you have a prompt, and that prompt fans out. Somebody's got to go make the visuals: okay, you, bot, you're going to make the visuals. Somebody's got to make the framing and the context and the structure: you, bot, will do the organization of it. And then another bot will come up with the content and topics. I think your architecture is just really fascinating, in that, in a matter of seconds (and I know you've also been working on making this as fast and instantaneous as possible), it's not just ChatGPT, where I type in a question and get a response back; it's a full-fledged course, with each bot playing its role. And I think it's a great analogy for how others developing with AI should think about their product: AI at its best is not a single-prompt tool, it's multi-prompt, all with the same genesis, which in your case is learner curiosity. Isn't that fascinating?
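That "bot hierarchy" picture maps naturally onto a small fan-out function. Here is a hedged sketch of role-specific prompting, reusing the hypothetical `call_llm` stub from the ingestion sketch above; the role names are illustrative, not Learn XYZ's actual architecture.

```python
# Hypothetical sketch of the "bot hierarchy" described above: one request
# fans out to role-specific prompts (structure, content, visuals) whose
# outputs are assembled into a single course object.

def build_course(curiosity: str) -> dict:
    # "Structure bot": decide the framing and the lesson outline.
    outline = call_llm(
        f"Outline a four-part mini-course answering: '{curiosity}'. "
        "Return one short section title per line."
    )
    sections = [line.strip() for line in outline.splitlines() if line.strip()]

    # "Content bot": write each section against the outline.
    lessons = [
        call_llm(
            f"Write three fun sentences for the section '{title}' of a "
            f"mini-course on '{curiosity}'."
        )
        for title in sections
    ]

    # "Visuals bot": produce a prompt for a downstream image model.
    image_prompt = call_llm(
        f"Describe one friendly illustration for a course on '{curiosity}'."
    )

    return {
        "topic": curiosity,
        "sections": sections,
        "lessons": lessons,
        "image_prompt": image_prompt,
    }
```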
I think you summarized this really well. In the end, you have a goal in mind, you want this course, so what are all the steps you need to take to get there? You can just ask GPT, "create me a course on this topic," and you'll get an output, and eventually there might even be images. But the whole point is what you explained: you combine these different models. And when you have image generation, you do quality control as well, by the way; that's a huge factor. You can give a language model a piece of content and say, hey, how factually correct is this content? Score it from one to ten. If the score isn't high enough, you rerun it until it's good enough. There are all these things you can do.
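That score-and-rerun loop is straightforward to picture in code. A minimal sketch, again using the hypothetical `call_llm` stub, with an assumed threshold and retry budget:

```python
# Hypothetical sketch of the generate/score/retry quality-control loop
# described here: one call drafts a lesson, a second call grades its
# factual accuracy from 1 to 10, and drafting re-runs until a threshold
# is met or the retry budget is exhausted.

def generate_with_qc(topic: str, source: str,
                     min_score: int = 8, max_tries: int = 3) -> str:
    draft = ""
    for _ in range(max_tries):
        draft = call_llm(
            f"Write a short, fun lesson on '{topic}', using only this "
            f"source material:\n\n{source}"
        )
        verdict = call_llm(
            "Given the source, how factually correct is this lesson? "
            "Answer with a single integer from 1 to 10.\n\n"
            f"SOURCE:\n{source}\n\nLESSON:\n{draft}"
        )
        try:
            score = int(verdict.strip())
        except ValueError:
            score = 0  # an unparseable grade counts as a failed check
        if score >= min_score:
            return draft
    return draft  # fall back to the last draft if the bar is never cleared
```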
And one thing is really important for us here: AI is just something that makes all of this possible. We're also not an AI company. AI is a commodity, like a database or electricity; other companies don't say "I'm a database company" just because they use a Postgres database. AI and language models are commodities that enable certain business models that just weren't possible before. And AI is not just sticking a chatbot into your product, because that's not inspiring, because most people don't know what they should ask.
My favorite example, and it's almost a joke, but it isn't: a lot of people try GPT, ask it to write them a poem, giggle, and then never use it again, because they don't know what they should ask. And that's the big part of what I mentioned earlier, this inspiration: there's this feed, so you open the app and immediately have something to tap on. And yes, I use it all the time, because I run around life in a curious manner, looking at things and wondering, what is this, what does this mean? That's also why, by the way, we see a lot of people having us explain the news to them. As sad as all of these incidents in Israel are right now, a lot of people use us for that, and they create courses like "What is a kibbutz?" When I saw that course popping up, I thought, huh, yeah, what is a kibbutz? I'd never thought about it; you read the word and assume it's some kind of village or something. Learning these things, this general knowledge, is what we're perfect for.
If I remember correctly, you mentioned that you're not an AI company, but you do use AI as a core technology, and one of the patterns we've seen with a lot of edtech companies that use AI is that it's really about changing the form of the knowledge or the content. One thing that strikes me: you mentioned the Duolingo playbook, and you have the potential to take, as you say, a 300-page internal corporate document and turn it into a gamified, app-based, mobile-first, Duolingo-like experience with quizzes and swiping. A lot of people want to be the next Duolingo. Tell us a little about how you've thought about creating the app, because I've been using it, and it really does feel incredibly friendly and light and fun, with streaks and all of the gamification. Tell our listeners about the form factor and how you make it so compelling and fun.
Yeah, I think we have a very talented team. Our designer, Carmen, was our first hire, and she's just incredible. And we do so much research together with our users. That, by the way, is the beauty of building a consumer product: you have a user base, you email the most active ones, and they're more than happy to jump on the phone with you. Pretty much everything you see in the product has been run by our users and discussed; we have email threads going back and forth with them. I really believe in this. You know, people talk about the MVP, but you cannot release MVPs anymore; the competition out there is just too high. That's why we like to talk about the MDP, the minimum delightful product. Delight is so, so important for us. That's why every button, every interaction, every swipe, every little animation is crafted with a lot of love, because that's what makes the difference between a really great experience and a not-so-great experience, especially this early on. I mean, we're only, what, five weeks out there now, essentially. There are so many things still on the roadmap, but those are the things that make people come back: they open the app and everything just feels really smooth. And honestly, this is what my co-founder and I learned; we spent ten years in Silicon Valley at a lot of interesting big companies, and that's the beauty of working here, that you learn these things. And we try to build the best product.
Now, last question. You've spent a lot of time in Silicon Valley and in different communities, but you've never really been in edtech before; you're a crossover company. I'm curious: how welcoming, or how challenging, has it been to break into edtech circles? We often think of ourselves as this embracing, collaborative community, but I also know people are quite prickly right now around AI trends and new edtech founders. How have you navigated that?

Yeah, Ben, you're one of the main reasons why we're in this industry, and you're the one who connected us, so a big thank you for that. I remember when we went to the GSV summit, this big conference in San Diego, in April; that's when we had just started building this product.
We had a web prototype that we showed around to people, and these little plushy animals that we gave out, the octopus; we had little characters made.

Plushies with Lumi?

Yes, with Lumi, exactly. And everyone was so nice, especially for us, coming out of the crypto ecosystem, where people were mostly, unfortunately, not nice. They were really greedy; it was all about where do I get the next dollar and how do I get rich quick. People who work in the learning space are not like that, and we're also not like that, so it was an instant match with that kind of cultural mindset. However, I would say that a lot of the edtech people seemed a little bit afraid of, or skeptical about, AI. For example, in our seed round, not a single edtech fund invested in us, but a lot of edtech founders did. Andrew Barnes, the founder of GO1, was one of them. We have some of the influential founders in this space who also realize something is happening here. What we found a little surprising is that the edtech funds were not more behind this, because they've had such a hard time with edtech investments over the last two decades, and this is the biggest opportunity for them, in my eyes. I mean, when Sam Altman is asked what the biggest opportunity for AI is, he always says learning.
So we stand fully behind that mission.

And he says learning, but he doesn't say education; I find that really interesting, given that it's called edtech, education technology. There's a learned cynicism that I think the investor community has, and I consider myself part of that community: great products don't necessarily lead to great businesses, because of how broken the purchasing system is. What I find really refreshing about your leaning into learning tech is that you're using the actual rules of good technology for learning technology: let's build a product that the users love, and let's build a business from there, rather than let's build a product that's compliant with the purchasing mechanisms of an outdated system and then hope someday we can figure out a way to make the learner love it. And this separation between...

I have to interrupt you for one second, because one story comes to mind.
When we were at the GSV summit, you know, we were just so new, showing this to people, and a lot of people said, oh, you should go to Texas, there's this program for 11th graders, and in those schools you can sell immediately; it's amazing. And even then, we had been live for, I don't know, three weeks with our beta, with a couple of thousand users, but we already had users from 100-plus countries. I was like, no, no, I'm not going to Texas; this is a much bigger opportunity. It just reminds me of the question of which angle you take: do you start from "how can I sell to this specific audience in this specific state," or do you build a product that people love and then figure it out from there?

Yeah. Well, kudos to you for being such an inspiration to so many of us, for bringing first-principles thinking, a truly learner-centered approach, and this curiosity engine that you've built. You know, I just learned about the Beatles earlier this morning, because they came out with a new AI song.
My kids wanted to learn a little bit more about it, and then we did the quiz together. And we learned that Ringo wasn't an original member of the band; that was a shocker. It's those types of things that I think represent what's really possible in the future of learning. So thank you for giving us a glimpse of that learning future, Christian. If people want to find out more about Learn XYZ, what's the best way for them to connect?

Yeah, you can just go to learn.xyz and download the app from there, or connect with me on LinkedIn and we can message. You can easily find it everywhere.

Wonderful. Well, thanks so much for joining us today on Edtech Insiders: Christian Byza, CEO and co-founder of Learn XYZ. Thanks so much.

Thank you for having me.
Thanks for listening to this episode of Edtech Insiders. If you liked the podcast, remember to rate it and share it with others in the edtech community. For those who want even more Edtech Insiders, subscribe to the free Edtech Insiders newsletter on Substack.