Honorlock's Evolution: AI and Academic Proctoring with Paul Morales and Jordan Adair

Oct 30, 2023 · 55 min · Season 7 · Ep. 16

Episode description


Honorlock is an online test proctoring service that combines AI software, like computer vision, with the decision making of live test proctors.

Paul Morales leads Honorlock’s software and operations technology initiatives with a “security-first” approach. Paul believes that information security and privacy are of the utmost importance in both professional and personal environments.

Jordan Adair is the VP of Product at Honorlock. Jordan began his career in education as an elementary and middle school teacher. After transitioning into educational technology, he became focused on delivering products designed to empower instructors and improve the student experience.

Recommended Resources:
Midjourney
Khanmigo by Sal Khan
Lex Fridman podcast
Reddit
Hacker News

Transcript

Alexander Sarlin

Welcome to Season Seven of Edtech Insiders, the show where we cover the education technology industry in depth every week and speak to thought leaders, founders, investors, and operators in the edtech field. I'm Alex Sarlin.

Ben Kornell

And I'm Ben Kornell. And we're both edtech leaders with experience ranging from startups all the way to big tech. We're passionate about connecting you with what's happening in edtech around the globe.

Alexander Sarlin

Thanks for listening. And if you like the podcast, please subscribe and leave us a review.

Ben Kornell

For our newsletter, events, and resources, go to edtechinsiders.org. Here's the show.

Alexander Sarlin

Honorlock is an online test proctoring service that combines AI software, like computer vision, with the decision making of live test proctors. Paul Morales leads Honorlock's software and operations technology initiatives with a security-first approach. Paul believes that information security and privacy are of the utmost importance in both professional and personal environments.

Jordan Adair, whom we spoke to last year, is the VP of Product at Honorlock. Jordan began his career in education as an elementary and middle school teacher. After transitioning into educational technology, he became focused on delivering products designed to empower instructors and improve the student experience. Paul Morales, Jordan Adair, welcome to Edtech Insiders.

Paul Morales

Thanks for having me, Alex.

Jordan Adair

Absolutely. Happy to be here.

Alexander Sarlin

It's so great to see both of you. We're going to be talking today about academic integrity, but also security, and AI, and all sorts of really neat things that are happening at Honorlock. Longtime listeners will remember that we spoke with Jordan last year about the academic integrity product, which is about combining sort of AI detection and human intelligence for online proctoring. And that was before the generative AI revolution that we're all living through; in the time since we've spoken, AI is now sort of everywhere. So first off, for those who hadn't heard that earlier episode, remind our listeners what Honorlock is all about. What do you do? And how have you always used AI in your product?

Jordan Adair

Absolutely. So we are a hybrid-model online proctoring company, which means that AI essentially augments our human proctors, right? It feeds them signals, and then the humans can make the final decisions on what to do and how to respond to those AI signals, those AI flags. That's the very high-level business model that we utilize, because we believe that the human approach, having that human be the decision maker, is still vital to eliminate a bunch of noise, false flags, and things that would come with an AI-only solution. But AI is a huge part of what we do and a major driver in the business.

Alexander Sarlin

Paul, you're on the security side of the world, among other things. And one of the things I find interesting about Honorlock is that it puts together a bunch of different kinds of technical tools to do this kind of proctoring. There's computer vision involved, there's browser locks, there's all sorts of things to detect potential breaches of integrity. But Honorlock doesn't itself say, oh, we caught you, the student is cheating. It actually just notices what's happening and alerts instructors, humans in the loop, as you're saying, Jordan. So tell us about how the security of this works. What does Honorlock do technologically that can surface those kinds of potential breaches?

Paul Morales

Yeah, absolutely. So I think that the approach that we primarily take is using what we like to call defense in depth, right? We don't have one single point where we're checking for something; we're checking in a multitude of different ways to detect when there's a lack of integrity in an assessment. I think that anybody that's been following what's been happening lately, or what's been happening for years, knows that when you have a motivated exam taker that wants to do something that's dishonest, they're going to try and find a way. And we know that while most folks are honest, we're trying to level that playing field. How we do that is using different methods to identify this type of behavior and, like you said, report it to the person delivering the assessment, so they can make a final call there. So there's a ton of different types of technology, including, you know, noticing when there are multiple people in the room, detecting multiple devices that are being used, usage of online cheat sites. All of these things kind of come together to provide this defense in depth to make sure that we're keeping the exam integrity in place and making a secure learning environment.
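To make the defense-in-depth idea concrete, here is a minimal sketch of how several independent detection signals might feed a human review queue. The detector names, confidence scores, and threshold are hypothetical illustrations, not Honorlock's actual implementation.

```python
from dataclasses import dataclass

# Hypothetical detector outputs; in a real system these would come from
# computer vision, device detection, network monitoring, and so on.
@dataclass
class Signal:
    name: str          # e.g. "multiple_faces", "second_device"
    confidence: float  # 0.0 to 1.0, as reported by the detector

def flags_for_review(signals: list[Signal], threshold: float = 0.7) -> list[Signal]:
    """Defense in depth: no single detector decides anything on its own.

    Every signal above the confidence threshold is surfaced to a live
    proctor, who makes the final call; nothing is auto-labeled as cheating.
    """
    return [s for s in signals if s.confidence >= threshold]

# Example session: two strong signals and one weak one that stays quiet.
session = [
    Signal("multiple_faces", 0.91),
    Signal("second_device", 0.82),
    Signal("cheat_site_visit", 0.40),
]
for flag in flags_for_review(session):
    print(f"Flag for proctor review: {flag.name} ({flag.confidence:.0%})")
```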

Alexander Sarlin

It's a really intriguing model of sort of putting together high-level, pretty complex technology, like computer vision and other things, but then having, you know, educators be really, really core to the decision making. And it's a model that, I think, we're all trying to find the right balance of humanity and technology in this current era. So since we've last talked, you know, the talk of the town right now is all of this new AI technology, the ChatGPTs, the generative AI, the LangChains, all this stuff is starting to happen now. And that launched right in the middle of last academic year, which put a lot of people's plagiarism policies, or, you know, basically their academic integrity policies, way up in the air. So how has Honorlock started to address this totally new world that we're all finding ourselves in?

Jordan Adair

Well, I think, to back up a little bit, and also to add on to what Paul was responding to earlier: within that last year, there's a lot that we've done, both to address the generative AI revolution, but also just general improvements to how our AI works, using a lot more of a feedback loop coming from our end users, right, better understanding how they're using the results that our proctors are sending to them. And that's helped us fine-tune a lot and make a ton of internal efficiencies, which then result in more accurate results, less time people need to spend reviewing those videos, all those things people are looking for.

But on the topic of generative AI, we've really doubled down on our ability to prevent the use of it during and before the exam, instead of focusing on post-exam detection; that's a game that we don't think is a winning game. And a lot of the stories and data that's coming out now kind of proves that. If you're trying to detect AI-generated content after the fact, you are truly chasing your tail. I mean, OpenAI itself essentially shut down their AI detection tool because they couldn't get it to work, and if they can't, I can't imagine anyone else is going to be too effective either. So we've really focused in on some of those features you kind of hinted at before, like our browser guard. How do we keep students within the test window? How do we better prevent copy and paste? And we have a bunch of statistics to kind of show how often those things happen. As an example, about 13% of students try to navigate outside of the exam window and are prevented by browser guard; about 3% will try to copy and paste during a test.

We've also expanded and really enhanced our extension-blocking capabilities. So now we're keeping an eye out for AI-related extensions, GPT-related extensions, also things like Transcript and SmartBook, which are very common; we see those ten times more regularly than we do ChatGPT-type extensions. So that's really been our focus, doubling down there, and also making sure that we're appropriately addressing the use of mobile phones. As an example, we trained our own model to recognize the Apple Handoff icon. So now, if a student's using another device logged in with the same Apple ID, our proctors will get alerted, and we can address that situation, right? And that's a custom model that we actually built, nothing out of the box. So that's just an example of kind of where we're really focusing our efforts for when you want to block AI, right? I do want to preface that to say that AI is not, like, the boogeyman, and there are scenarios where you want to give your students access to it. But if you need to block it, we want to be there to give you those tools.
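As a rough illustration of the extension blocking Jordan describes, a proctoring client that can enumerate installed browser extensions could match them against a blocklist. Everything below, the extension IDs, names, and enumeration mechanism, is a hypothetical sketch, not Honorlock's actual code.

```python
# Hypothetical blocklist keyed by browser extension ID. Real IDs are
# 32-character strings assigned by the browser's extension store; these
# are made up for illustration.
BLOCKLIST = {
    "aaaabbbbccccddddeeeeffffgggghhhh": "Hypothetical GPT Helper",
    "iiiijjjjkkkkllllmmmmnnnnoooopppp": "Hypothetical Answer Finder",
}

def check_extensions(installed_ids: list[str]) -> list[str]:
    """Return human-readable names of any blocklisted extensions found."""
    return [BLOCKLIST[ext_id] for ext_id in installed_ids if ext_id in BLOCKLIST]

# Example: one blocklisted extension installed alongside a benign one.
hits = check_extensions([
    "aaaabbbbccccddddeeeeffffgggghhhh",  # matches the blocklist
    "zzzzyyyyxxxxwwwwvvvvuuuuttttssss",  # benign, ignored
])
for name in hits:
    print(f"Blocked extension detected, alerting proctor: {name}")
```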

Paul Morales

To elaborate a little bit further on that: when we sit around and we look at the future of education, obviously we are in the camp that we don't think AI is going anywhere, right? I think we're just going to see more and more, and better, implementations of it. Obviously, there's a lot of controls that need to go into place. But another thing we've really focused on is sharing our learnings as we're coming across them. One of the biggest challenges we had was very similar to the early COVID era, where folks were asking, how do we deliver assessments remotely, you know, this quickly? And part of what really helped Honorlock come into the forefront, into the spotlight, is that nobody really knows, right? We're all trying to figure this out in real time. So what we have focused on as a company is, as we're learning things, we're sharing them. Because we feel like, instead of keeping things behind closed doors and saying, you know, we're not going to share how we came to this conclusion or what we're thinking, we're valued more as a partner if, as we're learning, we're talking about it and sharing it.

Alexander Sarlin

I can imagine. I mean, I really admire that sort of ecosystem type of thinking, where you and the schools and the AI companies and everybody are sort of figuring this out together, and not sort of oriented against each other. Listeners of this show will know that I also don't think that AI is the boogeyman, and I love that metaphor, and we're going to continue to learn how to use it appropriately and effectively and teach students how to use it. But it's very interesting to hear the focus on sort of live AI detection within the assessment window, and sort of saying that trying to take a submission that's been given by a student and figure out if there was AI involved is virtually impossible. There still are companies trying to chase that, but it sounds like you have decided as a company that that is really out of scope and potentially impossible. So instead, it's really about that live window. I'd love to hear a little bit more about that live window, because one of the things that's come up a lot recently is the idea of, you know, in an AI era where we can't detect AI artifacts, is there more purpose in having oral exams and the types of live assessments that we usually attributed to an older era? Jordan, let me start with you on that one.

Jordan Adair

Yeah. So we've seen some interesting things talking to a lot of faculty, instructors, professors on what they're doing and how they're adapting. I think going to a model where you do have some more oral presentations, or more unique styles of assessment, is certainly a way to go. And we have seen people also just incorporate AI into the assessment process in general. Like, some examples: we've had professors that are instructing the students to use GPT, or Bard, or whatever it might be, and then having them submit the full transcript, and they're grading the students on the prompts that they're utilizing instead of the outcome, right, which is a really cool idea. Or having them debate against GPT for some set number of rounds on a topic. These are all things that some professors are trying. But there's also limited knowledge, just in general, across higher ed. When we've done our webinars and asked questions about, you know, how familiar are you with AI tools, "mildly familiar" is about the response we get. And we've also seen that, on the whole, schools have not yet really laid out a roadmap or set up an infrastructure for professors to work in, because it's so new and they haven't had a chance to adopt an institution-wide policy. So there's still a lot of trial and error, I think, happening at the course level on how do we incorporate it, how do we block it when we need to, and what's the right, you know, happy marriage between those two paths.

Alexander Sarlin

You know, we're hearing statistics here, like 3% of students copy and paste and 13% try to go outside their browser window. These are baselines that nobody else has, right? I mean, professors do not know what level of cheating, or what kinds of tools, students are using; they don't know the baselines. But what they really don't know is some of the things that you've been talking about here, like these browser extensions that use ChatGPT, or these tools that are being developed at a very rapid pace. So you're really at the front lines of what is happening in AI, and how students, who are very motivated in some cases to find the best path to using this, are finding all sorts of new tools and new ways. So what role do you feel like Honorlock is playing in sort of, you know, seeing what students are doing and telling the world, but without trying to make it, you know, hyper-punitive? Because I know your philosophy is not just "the student found another cheat, get them in trouble." It's like, wow, look at all these tools that are out there. How do we keep up with this? How do we help higher ed adapt to this new world? What role do you play?

Jordan Adair

So, like Paul mentioned before, we're trying to share what we're learning, and we're doing that in a few different ways. We've done some webinars that have been our own Honorlock-hosted webinars; we've also spoken with partners of ours on webinars on the topic of AI and shared some of that similar data that I mentioned earlier, and not just the data on what we see from students, but those unique assessment ideas that we're hearing from instructors as well. So that's one element. And then we've also built a new dashboard that is available to the key admins that gives them the information that I've shared: how many of your students are trying to navigate outside of the test window? What are the most common extensions you're seeing at your institution? And that actually varies pretty wildly across the different schools, right? We've seen some where ChatGPT is the leader and students are trying to use it and adopting it faster, whereas at others it's hardly on the radar, and, you know, students are focused way more on Transcript or other kinds of pure cheating extensions, for lack of a better term. So yeah, we're trying to share the data openly in the dashboard, and then just get out front and try to speak on the topic and share what we know, right? While we're not trying to make it sound like we're the experts either, right, we have a lot of really interesting information that is unique based on our position in the market.

Paul Morales

Yeah. And I think, to Jordan's point, the more that we share that data, it empowers our users to make informed decisions, right? To understand, why is it that these folks taking assessments are going to such great lengths to try and circumvent the controls that are in the assessment? And that might provide some very helpful insight to the customers, who may say, well, you know, maybe I'm doing something wrong with how I'm delivering this content; maybe I need to think outside of the box or take a different approach. And we've always taken this stance of not claiming to know what the answer to that is, but providing the tools that help drive that information and make those informed decisions.

Alexander Sarlin

Yeah, that makes a lot of sense. It's just such a complex topic. And, you know, Quizlet just put out a survey about AI usage that basically said that, to their surprise, educators were actually using it more, and were more excited about what it could do in the classroom, than students were. I think this was K-12, so not entirely higher ed, but that was a surprise to me too. At the same time, the only thing I've heard from anybody who's school age about this stuff is, "people cheat with it, and I don't want to cheat." And it's the strangest thing. When you mentioned that different universities have different tools that they use, it's sort of like folk wisdom, these communities: I'm sure one student uses it and tells their roommate, who tells their, you know, frat, or their soccer team, or whatever, and suddenly there's these networks. But it's really interesting to think about. I would have assumed that students were going to be on the cutting edge of finding the new tools, and they are, but at the same time, instructors are more excited about this than you might expect, especially at the higher ed level, where they have a reputation for being a little stodgy. Have you seen that with your partners?

Paul Morales

So I think that, based on what we're hearing, and I've heard similar things from the folks that I know, family, friends, school-age folks, the hype doesn't seem to be on the same level as with folks that are in, you know, the work environment or in higher ed environments. And this is just my own hypothesis around that, but I think what people are really realizing is what a time saver this can be in certain situations, especially for folks that are strapped for resources. And I'll definitely, you know, call Jordan out to talk about some of his experience being an educator, and some of the time constraints that you run into, and how something like this may have helped. But I think that's probably driving a lot of it.

Jordan Adair

Yeah, and talking with a lot of professors, and thinking back to some of my own experiences, there's so many scalability challenges today in education that AI can help solve. So, you know, say you're teaching an undergrad course that has 500 students in it; that's really hard to do any type of interactive assignment with that quantity of students. Now you can assign your class to debate against ChatGPT and maybe submit some summary that gets graded. You could potentially leverage AI to help you with your grading. There's, you know, the problem of leaked content, which is a major issue all across education; we've seen the quantity of leaked content increase from about 12% back in 2020 to close to 36% today, so like a 3x growth in leaked questions. You can use GPT to create variations of your questions, and now you have a larger item bank that isn't leaked online, right? So there's all these problems that are really difficult to solve that AI kind of gives us a light at the end of the tunnel for. So I can absolutely see why professors and faculty are more excited than students, right? Because these are problems that are very real to them, that have been long-lasting, and there's potentially something that can help us with a solution now.
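As a concrete illustration of that last idea, generating variations of a question so that a leaked copy is less useful, here is a hedged sketch using OpenAI's Python SDK. The model name, prompt wording, and helper function are illustrative assumptions, not a description of any particular instructor's workflow.

```python
# Sketch: ask a chat model to rewrite an exam question while preserving
# the concept being tested. Requires the openai package (v1+) and an
# OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

def question_variants(question: str, n: int = 3) -> str:
    """Return n rewordings of a question to grow the item bank."""
    response = client.chat.completions.create(
        model="gpt-4",  # any capable chat model would do; an assumption
        messages=[
            {"role": "system",
             "content": "You rewrite exam questions. Keep the concept being "
                        "tested identical; change names, numbers, and wording."},
            {"role": "user",
             "content": f"Write {n} variations of this question:\n{question}"},
        ],
    )
    return response.choices[0].message.content

print(question_variants(
    "A train leaves Boston at 60 mph. How far does it travel in 2.5 hours?"
))
```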

Alexander Sarlin

It's been really interesting to see how many of the AI tools and startups in education are educator-facing, and they're all about that time savings. These generative AI models are incredibly good at things like making rubrics or making variations or making glossaries, you know, anything that has a remotely fixed structure, it's incredible at. So it's funny to hear that example of professors using, or at least in theory using, AI to make variants of their questions so that they could give different ones to different students in real time, or avoid giving out the same test as last year, where the students from last year give it to their people.

It's really funny. I don't want to see it as an arms race, because I don't think that's the right metaphor, but it's really interesting to see how both the instructors and the students are getting these augmented abilities in parallel. Paul, I'd love to ask you a little bit about math; this is something moving very quickly in ChatGPT. Calculators, famously, you know, solve math problems, and then people weren't allowed to use them on math tests. And, you know, Google and Wikipedia made searching something that you could do, and at first people were saying, you know, never use Wikipedia as a citation, because it's not trustworthy. We're getting to a place where AI is going from something seen as, you know, not necessarily trustworthy, and hallucinating, and bad at math, to something that is not bad at math, because it can incorporate Python and can do really, really sophisticated thinking very quickly. So when you think about these sort of metaphors: when calculators started, people didn't know what to do with them in classrooms; when Google and Wikipedia started, people didn't know what to do. Now AI is starting. Do you see parallels between those times and now?

Paul Morales

Oh, absolutely. Absolutely. And I think that it really comes back to the type of instructor, or the type of assessment they're trying to deliver, and, well, what is it that we're actually trying to accomplish? I would say that most folks, after they leave high school or university and enter the workforce, are very surprised about some of the things that they thought they would be using all the time and some of the things that they never use, right? Like, when's the last time any of us had a use case for a graphing calculator? But most of us would have a use case to use something like Excel to do some basic calculations on a project or budget, or whatever the case may be. So I think that we really have to step back and focus on what are the actual learning outcomes that we want to have at the end, and determine, are the ways that we're delivering assessment striving towards that? What we've typically heard, and what Jordan and I like to riff about a lot, is this thought of, you know, authentic assessment, right? And I think that's what has really drawn everybody back, because generative AI has really turned that on its head and made it very challenging. That was always the answer that was thrown out: well, you know, delivering an online assessment, that's not the solution; the solution is authentic assessment, doing something in an essay form, or whatever the case may be. And a lot of that has been challenged and upended by this type of technology. So, you know, that's one of the things that I think about constantly with this: what is it that we're really driving towards? And are we actually setting folks up to have the most success in the real world? Because that's what I think most of us are looking for.

Jordan Adair

And maybe to add on to that, and I'm hearing a little bit of this from some of the higher ed world as well: on the whole, most people are of the mindset that AI is here to stay, and we need to learn how to work with it and leverage it. But now the problem is, where does the human component fit in the assessment process? Drawing that line is, I think, a challenge for both sides, both the student side of things and the faculty. Like, imagine a world where I have 500 essays submitted to me, and I want to give feedback using AI, right? AI is going to be capable of giving students a lot of feedback on their writing. But at some point, my eyes do need to see this, and I do need to make some type of determination, right? So where that line exists is the challenge; it will be a tug of war figuring out what that sweet spot is, I think, over the next couple of years. But it'll be interesting to see that evolve.

Alexander Sarlin

100%. It's just such a wacky time. Every time I begin to think about this and talk to people who are really at the cutting edge of it, like yourselves, you know, new ideas just turn the whole thing on its head, again and again. It's so interesting. I mean, one of the things that we saw in the Google and Wikipedia era was that there was sort of a knee-jerk reaction early on that said, you know, you cannot cite Wikipedia, or don't search; you have to actually cite all your sources and use the library and all of that. Do you think that here there will be a similar knee-jerk reaction, and certain AI tools will be whitelisted and others will be blacklisted, and some will be sort of acceptable AI and others will not?

Jordan Adair

So, interestingly, we just polled an audience the other day, I think it was Tuesday, earlier this week, to ask about citations. And one of the things that popped up was that there are already standards in place now to cite, you know, generative AI. Like MLA format: how do you cite using ChatGPT? There's a standard for it, and there's a format to utilize. So that side of things, the citation side of things, has evolved quickly and understands that, all right, people are going to use it, so let's get some standards in place that allow for it. Because in the end, you would much prefer your students cite it and say that it was used, versus trying to keep it under wraps and keep it a secret, right? So I don't necessarily have an opinion on where a line might be drawn on what's accepted and what isn't; Paul perhaps might. But I do definitely see and know that the citation element of it is here now.
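For reference, MLA's published guidance treats the prompt as the title of the source. A citation along those general lines (the prompt and dates below are illustrative) looks like:

```text
"Describe the symbolism of the green light in The Great Gatsby" prompt.
ChatGPT, 13 Feb. version, OpenAI, 8 Mar. 2023, chat.openai.com/chat.
```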

Paul Morales

So, I think, and Jordan already kind of alluded to it with folks actually being graded on their prompts, I don't see it as a typical citation. Because, based on what I know right now, and obviously opinions and views change, but based on what I understand right now and where the technology currently is, especially with large language models that are generating based on a training set, it's going to be very challenging to cite that, because you're not actually citing a thing, right? You can say this thing assisted me in generating this content, but it's not going to be traditional, where you're citing an author or whatever the case may be. So what I could see is, and I believe it's as recent as April, the US Patent Office came out with their statement on the topic: if any works come in that are to be patented or copyrighted, and they had a use of AI, they're asking for that to be disclosed. And in that disclosure, what they're also asking for is a reference to the human interaction with that work, right? So that showed me that industries are moving towards identifying that AI has been used, not so much which AI or how, but more, what did you do to manipulate that work? Because at the end of the day, when you're submitting a work, if you have any pride in your work, to Jordan's point, right, human eyes should pass over it. So you should be using this as a tool, and then, if you want to submit it as your own work, you just acknowledge that you used this tool. But do I think that's going to last long term? Probably not. There were probably folks that started asking for that when you were typing up an essay using a typewriter versus a word processor. None of us say, hey, Microsoft Word helped me with this, or Clippy, you know. So I think it's in flux; it's always going to be changing.

Alexander Sarlin

Yeah, and I mean, it strikes me as I hear you both talk about this, that a citation of a published work, a book or a journal article, is something that you can go look up. A citation of, you know, "I asked ChatGPT this question, and it said this," if you don't actually capture that, literally have the text or a screenshot, then nobody can recreate it. So it has a very different sort of underlying nature to it. It's really interesting to hear that the human element is a big part of it. I have a follow-up question for you, Paul. You mentioned authentic assessment before; this is a topic that I happen to really, really love. It's a big instructional design concept. And when you talk about the idea, which I totally agree with, that students think they're going to use graphing calculators in their jobs, and they almost never are, but they don't realize how many office tools they will be using, things like, you know, Marketo, or all sorts of things. It strikes me that this gen AI is potentially one of these bridging technologies: it's something that we know is going to be used in work, and is increasingly being used in school as well. I'm curious how you see that. Isn't it arguably more authentic to tell people, you know, feel free to use the internet and ChatGPT to answer a complicated question, because that's certainly how they would answer it when they're in their first job?

Paul Morales

Yeah. So I would say that's a hard one for me, because I truly do think it depends. I'll give you a counterexample. I think that in a situation where somebody is doing graphic design, or they're doing creative generation of work for marketing, obviously, with what we can see right now, we do believe AI is going to play a heavy role in that type of job, almost to the point that if you're not familiar with these types of technologies, at a certain point you may be disqualifying yourself from participating in certain roles. While on the inverse of that, in certain fields like the medical field, I see the acceptance of this almost being stonewalled. I can't imagine a day where, you know, you would only be seeing an AI physician, right? I think somebody would have to look that over. But we're already starting to see it: I believe there was a form of cancer, I apologize for not having the explicit details, where they figured out that AI was actually picking up and detecting it at a level very similar to the current technology. So I also see uses that will be AI-assisted; I just think that certain industries and certain areas of study are going to require much more human sign-off and verification that a human has actually gone over this, because there's a lot more at stake looking over that CAT scan or X-ray versus that marketing email or that blog post.

Alexander Sarlin

Great point. And you both mentioned that this is, you know, subject-dependent, that it depends on what you're assessing. Is it a medical exam, or a math test, or a law exam, or an art portfolio? Jordan, let me ask you this, but I'd love to hear from both of you. You know, AI tools can do such higher-order thinking, and so much more creative work, than I think we expected by 2023, that I think it's really shuffling a lot of people's perceptions about, you know, where the human stops and where the machine begins, as you were saying. Do you see a world in which majors like art majors, fine art or journalism majors, where, you know, your work is the output, and if AI is writing three quarters of the article, that might be realistic, but it also doesn't show you know how to write an article, are there certain majors that are going to be really very, very different in terms of what kind of AI is used? If you submit an art assignment that's literally made entirely by AI with one prompt, what have you actually done, versus, you know, building a skill? It's such a complicated topic. Talk to us about the different majors and what you're seeing in the field.

Jordan Adair

I think that it comes down to: there's going to be a variety of skills that you're going to assess, depending on the route that the student takes. If the student's taking the GPT route, and they're using some other application to generate art, let's say, as an example, then their ability to get to an output that is amazing is really a test of their skill in working with the AI, which is a skill in itself, right? Giving one prompt and getting a picture spit out is far less in quality compared to the person that gives the details of exactly what they're expecting and then iterates on that prompt three, four, five, six times to get to an output. And as someone who's evaluating those things, I think there's going to be kind of an adaptation period where we need to learn how to assess AI-generated content better, right? If I'm looking at two pictures, I should, at least in the future, be able to tell this one was generated with very little care, versus this one obviously went through a ton of effort and steps, and this person is knowledgeable in their work, right? So that is a skill set that will become part of the assessment process. And then there's the student who maybe chooses not to use AI at all, who will still have a very unique style, right? There is a uniqueness to human work that I don't think is going to be lost anytime real soon, and there, you're grading them on their actual physical ability. So that's kind of my thoughts. I don't have an idea on which majors, in terms of how that will apply in different scenarios. But that, to me, is the general core concept: your skill set may exist within the AI world, and that is one of your primary skills. And that's okay, right?

Alexander Sarlin

I love that point of splitting AI-assisted, you know, fill-in-the-blank skill versus the core skill without AI. They're totally different. Paul, what do you think about this idea of, you know, different subjects really needing to address AI very differently?

Paul Morales

I think the easiest example, and I think we keep coming back to it, is those art and creative types of majors and how this may affect them. If you think back quite a bit, you know, let's just say you're going for a major, possibly photography or something to that effect. A while back, you may have started on a film camera. And I haven't looked at this recently, but I know I've had friends that have gone and done local classes, and they still start them on film cameras. And why is that? I think the purpose is that it shows the really basic mechanisms of a camera and what makes photography. Now, there's a multitude of tools out there, Lightroom, Photoshop, all these different tools that you can use to make beautiful photos and correct your errors. But what they're trying to instill in that student is, how do you come up with something that is your original piece with the technology that we have? Because we have not created the technology, well, I guess we kind of have, with some of the features on some of the cell phones now where you can go back and capture the frame, but if we're speaking traditional camera technology, you're taking a photo and you've captured that exposure. So now you're going to try to fix something that may not have been in its greatest form in the beginning. So I think we're always going to have that, where we're trying to teach the fundamentals. And then, in a perfect world, after we've laid out that basis, in our photography example, we can start to introduce these tools that show how your workflows can be far faster. An example of this: I remember when I first got into photography, looking back at some of the greats, Ansel Adams, who used to take some of the photos of the great parks, beautiful pieces. Then I came across Trey Ratcliff, who started to do a lot of HDR work, and I thought, to me, these images are beautiful. But I talked to diehard photography folks, and they say, that's not real photography; he's just tweaking things and making it look extreme. But it's in the eye of the beholder. So if you're coming out and you're going after that artistic or marketing degree, whatever the case may be, what is your ultimate goal? Is it about getting the most clicks, right? And what drives that?

Alexander Sarlin

Exactly. I think that speaks to the point about, you know, we'll need to know how to assess AI-generated work, as you're saying, Jordan, and how to assess, you know, that pure, or "real," concept of work without the augmentations, in totally different ways. I just hope we're prepared for that as a society, because it is very, very tricky, complex stuff. I mean, you can easily imagine professors on either side, right? One saying, like the early adopters you're mentioning, yeah, use ChatGPT and tell me how you're using it, and I'll give you grades on your actual output as well as your GPT skills, like, I get it. And the other saying, if you use that, you're never going to actually learn the core skills; you don't even know what good is; you have to learn it without that; you have to learn the film camera first. Who's right? I mean, as you say, it depends on the outcome. I know Honorlock really does a lot to make sure that it's on the cutting edge of AI and really understanding where everybody is. One of the other areas of AI that is so complex and new right now is privacy, data privacy and security. I've talked to a lot of educators and people in the space, and this is something that gets everybody's mind swirling a little bit. So, Paul, I know this is an area of expertise for you. First of all, how do you think about it generally? And then I'd love to ask a couple of specific security questions and see how you would address them. But how do you think about AI security and data privacy?

Paul Morales

Yeah, I think that it's one of those situations where I see that there's a lot of risk out there; that's what I'm always trained to look for, where does the risk exist. But I also see a lot of promise in us being able to detect malicious activity and behavior utilizing the same types of technology. So I feel like that's been the trend for Jordan and I, right, saying "it depends," but it really does depend in this situation too. As far as security, bad actors and those types of folks are going to use all the tools at their disposal, and I think utilizing those same types of tools on the prevention and detection side is going to help us provide a much better defense and protection against these types of things. Then on the privacy side, I think that we've been moving towards becoming a less and less private society, I would say, over the last two decades. You know, I remember growing up that your core focus, really, in the early days of the internet, was staying anonymous, right? You didn't want to really show who you were, so you hid behind a screen name or something to that effect. Now we're moving more and more towards "I want to be internet famous; I want everybody to know who I am." So I think, you know, it's going to change like that; it's not going to stay one single way. It's always going to be in flux. And I think that, as the technology develops, we have to kind of keep that in mind.

Alexander Sarlin

Yeah. So let's look at a sort of academic educator use case. Let's say a professor is writing a paper, and they have a draft version of it, and it's new research, really proprietary, really interesting and sensitive. And they're saying, I wish I could write this better; I'm going to upload this paper to ChatGPT and get it to rewrite parts of it for me and sort of make it clearer. Now, I guess the question is, by uploading a copyrighted work, or, you know, I don't even know what you'd call it, the intellectual property of the professor and of the university, into this system, is it now public? Is it now accessible to others? This is one of these things that I don't think anybody has a good answer to. How do you think about that?

Paul Morales

So if we use, for example, ChatGPT, as one of the most popular examples right now: if you upload or enter content into there, you're not assigning rights to ChatGPT. So, in theory, it's not a situation where they now own that work. What you are doing, though, if you have certain things enabled, is allowing that to become part of the data that's being studied and referenced in order to improve the model. So I don't think folks have to be specifically concerned about what they're inputting being referenced directly, verbatim. But what they do have to be concerned with is making sure that, if you have thoughts or ideas that you consider to be proprietary, you're not sharing them as part of the model. It's the same as telling a friend about an upcoming product that hasn't been released yet, and that friend goes and tells another friend, right? You have to approach it and think about it that way. But what I think is very important for users to remember is that each tool has its own privacy agreement. Read those things, understand those things; it's more and more important, because if you do just hand that over, another tool may not have the same stance as ChatGPT, or ChatGPT may change their stance, right? So there's a lot of nuances that you need to consider.

Alexander Sarlin

Yeah, I think ChatGPT specifically just added new warnings when you get onto it, saying don't upload sensitive information, and also other settings where you can try to tell it, don't use my data as training data. But it's such a strange world. Jordan, I'm curious, in your dealings with universities, what kinds of questions do the faculty or administration, or even students, have about security and privacy? What do they worry about when it comes to AI and security and privacy?

Jordan Adair

The first thing that comes to mind is instructors, faculty members, knowing, I'm going to use this tool to help me build a syllabus or help me build some documents, but I want to be cautious of what I'm feeding into it, especially with, you know, FERPA, and making sure no protected information about students gets in there. So that has been the most prevalent topic of conversation, really on the faculty side of things: if we're going to use this tool, what's okay for us to feed in and what is not? And, like Paul was saying, if I put in information about a student that has an accommodation, is there a chance that it surfaces somewhere else when, you know, someone prompts? So, being cautious. That, by far, has been the number one area of concern, I think, across the higher ed space. But the example you gave around putting content in is another one that's popped up, on the question side of things. I referenced using ChatGPT as a way to generate variations of your questions; there are some that have been leery of doing that, out of fear that, if I put the question in there, am I essentially leaking my own question out to the web? And they're not, necessarily, but they're certainly taking on some element of risk that they wouldn't have taken if they hadn't put it in there, right? So it's kind of a double-edged sword: it can help me, but it could also pose a risk. What's the right way? I don't know; I don't think anyone knows exactly yet.

Alexander Sarlin

And I think part of the reason why virtually nobody knows, some people might know somewhere, is that, you know, you're adding tokens, right? I mean, anything you put into a large language model is additional language that it can break into tokens and use to train or refine the model. The question is, let's say you're Professor Smith at Georgetown, and you're doing an assessment on French literature. If you put your syllabus up there, and it says Professor Smith, Georgetown, French literature, could a student come in and say, what should I expect on Professor Smith's French literature exam at Georgetown? And the large language model might say, hey, I have great vectors; I know exactly what you should expect, because those three terms came together. It goes all the way down to the core tech of how to handle security. I don't know, do either of you have reactions to that? Paul, I'd love to hear yours.

Paul Morales

No, absolutely. I think that it's made it a lot easier to find that information and take that aggregate and drive to something. So in your example, you know, figuring out things based on a syllabus or prior syllabi, right? Having that type of access, and being able to call back to things and understand, well, what happened in years prior, that's absolutely a capability. Because, remember, ChatGPT, or OpenAI, they don't know what you're feeding it. You're feeding it, and it thinks it's just having a normal conversation with you; it's going to use that training information in order to improve the model. And if it thinks it can provide you a better answer by answering your query with more relevant information that's part of its model, it's going to do that. And that's without even layering in plugins. When you layer in plugins, now you have this concern of, well, I'm not even interacting with these tools; I'm not putting my information into ChatGPT; I'm safe. It's like, well, not really, because if it's published online, it can be referenced with the plugin. So there's so many things to consider and think about there.

Alexander Sarlin

Yeah, the patent example was a really interesting one, because I'm sure they're struggling a lot with how to keep intellectual property in order in this absolutely nuts new world. I mean, compare it to Google: you don't expect, when you Google something, that somebody can then ask, hey, what did Alex Google last week, and have it answer them. But that's really the world we're potentially in. It's very strange. Jordan, let me ask you this, and this is definitely one for both of you. We are at this moment, August 2023, where the world's governments, especially China, and now the EU, and we've just seen some movements from the US, are starting to wake up to how major a technology AI is, and to think about what needs to be regulated. And we saw the US put out recommendations recently about things like, hey, you should have to identify if content is made by AI, or whether a chatbot is really AI. It's just a recommendation. I'm curious how you see this, as people who, you know, need to know your FERPAs and your COPPAs. What do you see coming down the pipe in the regulatory environment? Let me start with you, Jordan. I know, Paul, this is your wheelhouse, but I'm curious what you see too, Jordan.

Jordan Adair

Yeah, Paul and I were actually talking about this a little bit the other day and hypothesizing about where the future might go. One of the directions that I could see it heading is, I'm sure at some point there will be standards that require AI chatbots, large language models, to be more open with how they are training the model and the details of how they're getting to their outputs. Now, I don't think that would necessarily be a requirement that could be enforced for every company, but if that chat tool is to be used in certain scenarios, I could imagine that certification being required. Let's say, if you're going to use ChatGPT as part of government work, then it needs to be certified under these standards and have some level of openness to it. And I can certainly imagine some of those standards being applied to the higher ed world, to say, if we're going to use this tool, we want to understand how it was built and how it was trained, and we are going to require, you know, some type of certification, some type of standard that it's held to.

Alexander Sarlin

Makes sense. Paul, what's on your mind when you hear that? Do you see the same thing, or do you have other ideas about where it might go? What do you think?

Paul Morales

Yeah, no, absolutely. I think that what Jordan said is very accurate; I think that's how we're going to see the first iterations of this happen. I'll go a little bit further to say that I'm of the belief that we shouldn't have to wait for regulation. I think that organizations that are working very closely with this should start to think about what the right ways to use it are, and then move towards putting the proper regulations into place in order to control that. And what I mean by that is, say, for example, with Honorlock, right, one of the things that we pride ourselves on is a lot of the diversity within our team. And that's something that's overlooked in the creation of AI, because it's very easy to program that type of bias into AI technologies. And I'm not saying that we're perfect at it; we are far from it. It's something that we're constantly thinking about and including in our products, to make sure that we're not creating a biased product at the end.

As far as how that would be enforced, I also agree with Jordan. I don't think that we'll come to a point where any government will say you cannot create a technology without having this stamp of approval. But I do think it's always consumer beware, right? When Jordan and I were talking about this the other day, the example I thought about is, you know, while we don't have a lot of cold months in Florida, I remember when it was cool, I really needed a heater very quickly, just to make sure that my wife stayed happy. And we ordered a heater on Amazon, the one that could get here the fastest, and it was one of the cheaper ones. It came in, it worked well, and I was like, oh, this is awesome. Then I'm looking around, and I happen to notice this has none of the markings on it, like UL, or CE, or any of these things. And because I'm aware that those types of markings are the things that typically say, hey, this thing won't catch your house on fire, I'm able to know, as an informed consumer, that I should be looking for those types of things. But we're still in a situation where you can buy things like that; you can buy items that come straight to your front door, and they don't have those types of protections in place. I think the same thing is going to happen with AI. I think some progressive companies are going to say, hey, there are certain standards that we're going to follow; we're going to open the curtain, show you our model, let you criticize our model and give us feedback on it; we're going to meet certain guidelines. And then there's going to be other companies, those fly-by-night types, that are going to say, hey, we do whatever we want, we have the price point that you want, and we'll deliver your product for that. So it really is going to be the consumer's choice on what they actually go with. But I absolutely think that regulation is coming.

Alexander Sarlin

And if I'm understanding correctly, we'd expect different levels of regulation for higher-stakes worlds, like medicine and education, where, you know, student information and patient information must already be protected legally, and for things like consumer applications, where it's sort of, you know, caveat emptor: somebody can put something out, and if you don't read the terms and conditions, you may be making a mistake. It could be a very different world between those two types of tools, right?

Paul Morales

Absolutely, absolutely. And we'll still need an enforcement branch that's going to go after the folks that are not following the rules in those highly regulated areas, right? I believe it would go very badly for you if you tried to sell medical devices that end up catching on fire or shocking people, or have materials in them that they're not supposed to have, and we need to go after those types of folks. Whereas, like you said, for those things that are more consumer-oriented, I think it's going to be more up to the consumer, right? You should be educated about what you're buying and what you're entering your information into.

Alexander Sarlin

Maybe that's also why we have all the writers and actors in Hollywood striking, and artists, you know, because they're saying, oh man, we're not going to be very well protected, because we're not in a regulated environment. It's a really strange moment. We could talk all day about this; I definitely have more questions I'd love to ask you, but we'll have to find another time to keep going. Obviously, you both think enormously about generative AI, and you think about academic integrity and proctoring. What do you see as other exciting trends in the edtech landscape when you sort of look around from both of your positions at Honorlock? What's coming around the bend? Let me start with you, Jordan.

Jordan Adair

I'll harken back to what I mentioned earlier. I think the most exciting thing that the AI revolution has kind of sparked is, how can we make authentic assessment more scalable? Because authentic assessment is nothing new; we've been talking about it for 15 years, and everyone has always struggled with how do we actually incorporate it in a way that doesn't require a person to spend their entire waking life grading the output. And with AI, while I don't see a definitive leader in that realm yet, because it's all so new, I think that just the window of opportunity that is now open in that area is very exciting, and something cool is going to come out of that at some point in the very near future, I believe.

Alexander Sarlin

Yeah, expert grading: reward models trained by experts in all these fields that can then actually go and give you good feedback. It makes a ton of sense. Paul, how about you? What's coming around the bend?

Paul Morales

I would say the things that I'm most excited about are the capability to have really custom tutoring and custom delivery of education in the form that you like, right, in the form that works best for you. I think it's going to add so much for folks that are maybe underprivileged and don't have access to the same types of educational institutions, whether it's financially or maybe just geographically, right? Maybe they cannot get to that institution or can't afford to live there. I think it's going to allow for that top-tier type of catered tutoring and, you know, I guess, attention that we all want for every student out there. I don't think there's anybody that says, hey, I would rather that person not have access to education. I think we're a better society if we're all better educated, have more experience, and get better exposure to different points of view. And I think it's great. I mean, an example could be, think of somebody that is living somewhere where it may not be popular to ask about other cultures, having the ability to say, hey, I truly do not understand this, why is it that these people may think this way, and being able to have something give you some feedback on that. I think the opportunities are endless. So I would say that's what I'm most excited about. And I'm most excited about seeing what the next generation is able to do with it, because I do think we have some major, major problems that we're all going to have to figure out, and our kids' kids are going to have to figure out, and we can use every tool at our disposal to help solve for that.

Alexander Sarlin

So I'm hearing ideas about, you know, personalized learning in the medium that you really want, which leads to more equity and better outcomes for students who don't have access to certain kinds of resources. It is incredibly thrilling. I think what both of your trends have in common is that something that was unscalable, for you, Jordan, it's grading authentic assessments, and for Paul, it's personalizing, having a tutor that knows every student and their preferences, is suddenly just within reach. I mean, really soon. It's a very, very exciting time for all of us. I love both those predictions. And what is a resource you would recommend for somebody who wants to learn more about these topics? It can be a book, white paper, newsletter, you know, anything that you think people should really open and read. Let me start with you again, Jordan.

Jordan Adair

So I didn't anticipate making this recommendation, but we talked so much about graphic design and art generation that a really cool place to check out is Midjourney. There's a Discord channel, and essentially all of the prompting and generation of the art happens in real time in that stream. So it's a really cool spot to kind of learn and see how prompts can so drastically change the output, if you're talking about image-generating AI. For anyone who's interested in that topic, I would highly recommend checking that out.

Alexander Sarlin

You can see the difference between beginners and advanced users very clearly there, to your point earlier. Fantastic. Paul, how about you? What resource would you recommend?

Paul Morales

Yeah, I would say, on the education side, and more easily digestible for everybody: I'm a big fan of Sal Khan and what he's doing with Khan Academy, and all the great folks that are working there, especially on the tutoring front with Khanmigo and those types of things. He's posting and talking a lot about the work that they've done with OpenAI, so the blogs, posts, Twitter, that type of stuff from them. And then, if you want to get really nuts and bolts, obviously check out some of the podcasts that are out there; Lex Fridman's is one of the most in-depth, where you can actually dive in and get some more of that information. And I would just say, you know, set some key target words for AI news as it comes through every day; everybody's figuring this out, so it's hard to put a finger on one source. Just read and digest as much as possible and get as many different opinions as you can. There's so many different forums out there, from your Reddit to Hacker News, where you can read up a lot; there's a lot of places to find information. That's the challenging part.

Alexander Sarlin

Definitely. As always, we will put the links to all of these resources in the show notes for the episode: for Khanmigo, Midjourney, Hacker News, and some of these AI sites. I'll also put the link to the interview that we did with Kristen DiCerbo from Khan Academy and Khanmigo, talking about exactly that, how they got early access to OpenAI; it was our number one most popular episode of the year so far. Thank you both so much for being here. This has been a truly fascinating discussion about the present and future of AI, and how Honorlock is really paving a path to figure out these very, very complicated questions about integrity and assessment. Thank you both for being here with me on Edtech Insiders.

Thanks for listening to this episode of Edtech Insiders. If you like the podcast, remember to rate it and share it with others in the edtech community. For those who want even more Edtech Insiders, subscribe to the free Edtech Insiders newsletter on Substack.
