
#240 - Authenticate 2023: Biometrics with Stephanie Schuckers, Professor at Clarkson University and Director of the Center for Identification Technology Research

Oct 25, 2023 · 56 min · Ep. 240

Episode description

In this episode of the Identity at the Center Podcast, Jim McDonald and Jeff Steadman discuss the fascinating world of biometrics with special guest Stephanie Schuckers, Professor at Clarkson University and Director of the Center for Identification Technology Research (CiTeR). Stephanie provides insights into her work at CiTeR and her journey into the field of biometrics. She explains various types of biometrics, including voice biometrics, and addresses concerns about privacy and biases. The conversation delves into the technical aspects of biometrics, exploring how it works behind the scenes and addressing the topic of deepfakes. Stephanie also shares her thoughts on backup plans when biometrics fail and the challenges of biometric performance. The episode concludes on a lighter note with a discussion about alternative authentication methods and the intriguing "dead finger problem."

Connect with Stephanie: https://www.linkedin.com/in/stephanie-schuckers-13a44a9b/

Center for Identification Technology Research (CiTeR): citer.clarkson.edu

CiTeR YouTube Channel: https://www.youtube.com/@citer2285

Connect with us on LinkedIn:

Jim McDonald: https://www.linkedin.com/in/jimmcdonaldpmp/

Jeff Steadman: https://www.linkedin.com/in/jeffsteadman/

Visit the show on the web at idacpodcast.com and follow @IDACPodcast on Twitter.


Transcript

This is Identity at the Center. If it has anything to do with IAM, this is the go-to podcast. Now, your hosts, Jim McDonald and Jeff Steadman. Welcome to the Identity at the Center podcast. I'm Jeff and that's Jim. Hey, Jim. Hey, Jeff. How are you? Not so bad. Yourself? So yesterday we did the keynote version of Identity at the Center. I know we had to kind of clip some of it to stay on time. One of the things we wanted to talk about was what the show is all about.

And for me, the punchline is always to educate and entertain. And the reason I'm bringing that up is this morning I was at the keynotes. Very educational and very entertaining. There was a guy named Mike Slough from Amazon who was up presenting. He said, people say, wouldn't it be neat if your name was Coleslaw? And then he said, well, my full name is Michael. Michael Slough. So hilarious.

And then he went on to give a very educational presentation. So if folks have an opportunity to view that later, I want to get him on the podcast as well. And he agreed to that already. So I think we're in good shape. Did you get something in writing? Or, like, actually, is it in writing? Get something, blackmail, you know, whatever it takes. I think he likes being on stage. He likes talking. So I think we're good. What did you think of the food trucks last night? Oh, I loved it.

I mean, the only problem was everything was like full size, right? So it was like you had to decide what you're getting. But I made a fantastic choice. I got a gyro. You know, that's one of those words like, OK, is it "yee-ro"? Are you pretty sure you're saying it right? Can't know. But people around here seem to call them "jye-ros." OK. I didn't have that. I had a teriyaki beef rice bowl thing. It was good. I know people went up for multiple trips.

So you weren't locked in to just one choice. But there were also donuts there. And Adrian is sitting here with us. And I'll tell you right now, there was like a build-your-own donut station last night in the vendor hall. I haven't seen that before, but, like, chef's kiss. So it was plain donuts. And then they had like just a bunch of toppings, like strawberry, chocolate, and vanilla glaze. And so I made a chocolate walnut donut. And it was, I found my new jam. Yeah. Chocolate walnut donut.

Chocolate house. Yep. And tonight there's going to be more activities. We're going to be on the, what is it? The superstar lounge or something like that? I can't wait. I heard it was the green or something like that. Adrian says it's the Legends Lawn. Okay. Legends Lawn. If there's appropriate branding in place. Yes. In our own mind. Yeah. Our own mind. So we're going to talk a little bit about biometrics. I think so. We have the right person here to do so. We sure do. She's Stephanie Schuckers.

She's a professor at Clarkson University and director of the Center for Identification Technology Research, or CiTeR. Welcome to the show, Stephanie. Thank you very much. Thank you so much for taking the time here. One of the things that we like to do on this podcast is really learn identity origin stories: how do people get into this business? You and I were talking a little bit before we hit record here, and I'm going to ask you about that in a little bit.

But for people who aren't familiar with Stephanie, tell us a little bit about yourself and the Center for Identification Technology Research. Yes. So I got into biometrics after I graduated with my PhD in pattern recognition, mostly medical-oriented. I ended up in West Virginia. And you may or may not know, but West Virginia is where all the fingerprints are stored by the FBI, physically and in electronic databases.

And so they said, hey, university, you should be doing more research around biometrics. And what are the tools of biometrics? Pattern recognition. So that's how I got into it. Do you consider yourself part of the IAM community? Yeah, that's a great question, because I do think we grew up in a little bit separate families. It was the biometric family and the identity family. And as time has gone on, biometrics originally was used very much in use cases that would be like the FBI.

But then it moved more into use cases related to identity proofing and related to authentication. And as that movement happened, the two communities started to merge, which I think is fantastic, because biometrics is just a tool, a very simple tool, but it's the context in which it's used that makes all the difference and gives the real value of a biometric. So yes, I do consider myself an identity professional. An identity nerd. So welcome, the water's warm. That's right, I'm welcomed. Of course.

The thing I like about this industry is it is a very friendly industry where there's, I think less competition, I'm sure there's vendors right here competing against each other, but we're all trying to do the same thing. It's authentication, authorization, things like that. How long have you been in biometrics total? 25 years, actually. It's my 25th anniversary. Congratulations. Congratulations. Yes, thank you. Why biometrics? Yeah, great question.

And I think it goes down to the very fundamentals of what value it provides. It is a connection between the biological self, right, and the electronic, the digital. And it is something of value that really no other form can provide. It doesn't work well just by itself, right? As I said, it has to be part of an ecosystem, but it provides that sort of extra layer of information to make that connection. And there's really two important pieces to that.

The first piece, of course, is matching. You have a reference from some authoritative source or some enrollment step, and then there's the second step where you're matching against it. And that's where the pattern recognition comes in. But I think there's a second piece that's equally important, and this is what we call liveness, right?

And so that's ensuring that the data being used to make that matching decision came from a legitimate source, that biological, heart-beating, brain-thinking person that's there, present at the time of capture. And so those two things give the true promise of biometrics. I'd like to say carbon-based life form. Is that appropriate? Right, who are you excluding in this carbon-based life form? You gave a presentation here at Authenticate, for folks who weren't able to make it.

Can you briefly summarize what you talked about? Yeah, so what we're doing in FIDO is a certification program. So biometrics, to really deliver on its promise, needs to be trustworthy. And how does it need to be trustworthy? It needs to work well for you all the time, as well as reject anyone who's trying to attack it. And that could be a different person just trying to use your account, or it could be an attacker making a fake biometric.

And so this certification program enables vendors to demonstrate how well their product works and gives those that rely on those decisions a kind of confidence, both relying parties as well as the consumers. So tell us a little bit more about this Center for Identification Technology Research. I'm just going to call it CiTeR because it's easier. What is the purpose, the mission? Take us behind the scenes. If I were to look in, what's happening?

Yeah, so we're a National Science Foundation center, so that's the US funding agency for science. And we're in a program that's industry-focused. So we're an industry cooperative. But in our case, we also have government members. And what we're looking at is really more on the applied side. What are the pain points in industry and government around technology, both biometrics as well as general identity, and how can our research help with those pain points?

And so we do a lot of very applied research, but we also do basic science research, too, because sometimes we need to be very forward-looking in terms of what some of the problems are. And in fact, CiTeR started when I was a young professor. I said 25 years; well, CiTeR itself just celebrated its 20th anniversary last year. And so I wasn't the director at that point, but we've been working in areas like liveness detection since the center started.

So we have to be thinking about what's the next problem we need to solve, and that's what we do in the center. You know, I think one of the top definitions that you have to learn as an identity practitioner is what is MFA or multifactor authentication? So we say it's at least two of the following. Something you know, something you have, something you are. I guess I think the biometric is essentially something you are. Is that the right kind of definition for biometrics?

Yeah, I think that makes it very straightforward as to what it holds, what value it holds, in an ecosystem that's multifactor. And then I've also heard "something you do," and that incorporates some of the behaviors, which we often use in multifactor authentication. Are those considered biometrics? So this would be like your gestures, right? Yes, yes. There's some behaviors that you can use in the way we use biometrics, in terms of comparing against a reference.

And there's some behaviors that would be more like geo-behaviors, right? You know, so I guess in some sense there is a reference in terms of your movement, but it's a slightly different, you know, use case on how it comes together. But generally they all fall in the world of biometrics. So in general, though, what are the different types of biometrics? What, you know, give us kind of the glossary version. Yeah, so, you know, there's the kind of face, finger, iris, right?

Those are our, you know, main biometrics, but there are many other biometrics that we've seen in operation. People get a little confused between iris and retina. So iris is the colored part of your eye, and that's what's mostly used today. Retina is the back of the eye, which we don't use very much. Of course, voice biometrics is another one that has very specific use cases, as well as DNA. That can be useful.

But, you know, I think really it's about, you know, what the value is, you know, in terms of the ecosystem. Choice is good, right? People like choice. I like having choice, you know, in my biometrics on my phone. And so in an ecosystem, choice may be based on the reasons why one modality might be selected over another. What's the best biometric? And what's the worst biometric?

And I'm going to qualify that, meaning security: is, you know, a fingerprint less secure than iris or voice or other types? I love this question because it is a common question. And I like to answer it that it's not the modality that you need to look at, but the actual specific implementation that will tell you the security posture. Because in order to put together this matching and this liveness, you're talking software and hardware, right?

And one fingerprint system could be, you know, much, much worse than, you know, an iris system, and vice versa, just based on the underlying technology under the hood, which of course gets back to the value proposition of things like certification, right? You know, that helps you, you know, look at the product as to whether it's achieving what it's setting out to achieve. Is there a common mistake that you see for deployments of biometrics? Or is it like, to your point, right,

it's not the modality, it's the implementation. Is there like a common thing that comes up? Or is it like, oh yeah, that should have been deployed a different way? And will certification help that? Yeah, yes. So one of the things is that with all the attention on matching and matching performance, you lose sight of the front end, the actual capture of the biometric, because a good quality capture gives you good performance.

But if you're only testing against databases, you know, you're not doing anything with capture, right? So capture, and that's that user experience, you know, that we've been talking about in terms of, you know, FIDO and passkeys, that same user experience applies to your biometric. And so if you can help the individual through your user interface to provide a better quality biometric, you're going to get a better experience. And that often gets ignored.

And liveness plays a role in that because liveness might, you know, put a barrier up. And you just think of, like, CAPTCHAs, right? You know, they're a big pain, right? You know, I mean, I stink at those things. Are you sure you're not good at those? So maybe you are a robot. Yes, exactly. Exactly. That's what a robot would say. Yeah. No, something that was coming to mind as we were talking about this was how Apple, you know, had Face ID working great, I think it worked great.

Then all of a sudden, we went into pandemic mode, we all had masks covering our faces. Some places it was okay to pull your mask down. Some other places, not really. But they adjusted, right? All of a sudden, at some point, from my perspective, Face ID started working again. So what was that? Are there like dials where they said, okay, we can get by with half of the face? Yeah. Yes, absolutely. So the algorithm that does the matching, you know, works based on the data it's trained on.

So if, you know, people wearing masks was just not a common thing, you know, there probably wasn't much representation of that in their training set. And so if you retrain your algorithm to really look at the top half of the face rather than the whole face, you know, maybe conditioned on the presence of a mask, you can achieve good performance. And you may be asking, is it, you know, going to perform not as well, right, if you have less information?

And I would say there are use cases where that may make a big difference, but those use cases are searching very large databases. For the case of a one-to-one match on a phone, your device, I don't think there's a significant difference, you know, from the point of view of security, if you have half the face versus the whole thing. When you say perform not as well, what do we mean by that?

Because I typically see statistics like it's 99 point something percent, I don't know, positive or accurate or whatever, you know, terminology you want to use. Are we talking about still in that range, or are we saying more like, well, we went from 99% to maybe 95% or 85%, or something where it's just not as good and you have to use maybe other factors to try and triangulate the identity? Yeah, great question. And I think the answer to this question is very use case dependent.

And so in FIDO, we are focused on authentication. So that's a one-to-one comparison. We're not searching large, large databases. And so the metrics that we came up with in our certification program are based on that use case. And so we selected requirements, you know, based on the use case, that make sense, right, to have the right security posture as well as give the best convenience for the user.

And when I say it's not noticeable, it's not noticeable because, you know, you're still meeting the requirements that we need to achieve both of those. And on the security side, if someone says 99.9% to you, you know, you'll want to look at them skeptically, because you really need multiple metrics, because there's kind of three types of things that you want to look at. You have your genuine user performance, right?

You've got your possibility, if I grab your phone and put it to my face, you know, whether I would accidentally match; we call that the false accept rate. And then the last is kind of your vulnerability, if you're trying to hack it with a photo, if I took a photo of you and tried to get into your device with that photo. And so those are the things we're testing for, and we have metrics for each of them.
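To make those three numbers a bit more concrete, here is a minimal sketch of how they could be computed from comparison scores. The score lists and the 0.5 threshold are made-up illustration values, not figures from FIDO's certification program, and real evaluations use far larger attempt counts.

```python
# Toy comparison scores: higher means "more similar to the enrolled reference".
genuine_scores  = [0.91, 0.88, 0.67, 0.95, 0.79]   # same person vs. their own reference
impostor_scores = [0.12, 0.33, 0.41, 0.08, 0.25]   # a different person trying the account
spoof_scores    = [0.52, 0.18, 0.61, 0.07, 0.30]   # presentation attacks (photo, mask, gummy finger)

THRESHOLD = 0.5  # accept an attempt if its similarity score is >= this value

def rate(scores, accepted):
    """Fraction of attempts whose accept/reject outcome equals `accepted`."""
    return sum(1 for s in scores if (s >= THRESHOLD) == accepted) / len(scores)

frr = rate(genuine_scores, accepted=False)         # false reject rate: genuine users turned away
far = rate(impostor_scores, accepted=True)         # false accept rate: the wrong person gets in
spoof_accept = rate(spoof_scores, accepted=True)   # spoof acceptance: fakes that slip through

print(f"FRR={frr:.1%}  FAR={far:.1%}  spoof acceptance={spoof_accept:.1%}")
```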

It kind of feels to me, and I don't want to pick on Apple, I think they do a lot of great things, but it kind of feels to me like Face ID and the thumbprint reader were more a matter of convenience. So in other words, if it can't see your face, you punch in your six-digit PIN, which is not super secure, right? It's a knowledge-based factor. Is that right, or am I looking at it the wrong way? I would posit that yes, it's convenient. We like that convenience, but that doesn't mean we're giving up security in order to get that convenience.

And so I think that usability, having that backup plan, is certainly, you know, I use my PIN every now and again because my right hand is busy and my face isn't working, and so I type my PIN, right? You know, we all have that experience, right? I hate it when the face doesn't work. Well, whether you like it or not, the PIN is there. Exactly, exactly. So I guess that's part of the user convenience, right? So it isn't so much that I've, like, given up some security to use that face.

I think today, the way the systems, the good systems, perform, right, they provide significant security, because if I were to have to hack into your phone, I probably wouldn't start with a photo of your face. I'd probably try to find your PIN or try to watch you, you know. Shoulder, shoulder surf, as my first, if we're talking an adversary. My first go. That's where an outsider would start. That's right. So I mean, we've made progress. We really have made progress, right?

Because biometrics were quite easy to spoof, you know, 20 years ago. So when you went over the list of possible biometrics, when you mentioned voice, I find that just like a use case that could be very usable, right? When you call the help desk or you call a call center, matching your voice print to what it should be. But you don't see it as much as I think you would like to see it. Is that because it's inherently flawed? No, I think it's much more use case dependent.

So, you know, if I'm trying to log into my phone and I'm sitting in a conference, I'm not going to use my voice, right? It just wouldn't be convenient. So the use cases that do work great for voice are call centers, like you bring up. And you know, certainly you're seeing that adoption. In fact, I think in the keynote on the first day, they were talking about, you know, what do they verify your identity with now when you call a call center, right?

You know, your name, your address, you know, your email address, your phone number, things that can be easily gathered about you. And so adding a voice does provide extra value, you know, from the call center point of view. Of course, there are attacks on that too, you know, so that's the spoofing side of the equation. So we do need to be aware of what's happening in AI and deepfake voices in terms of understanding that threat.

But I still think, you know, where we are today, if we add voice, we've added, you know, we're making the attack just that much more difficult. And when we're talking about major scale, every little increment makes a difference. Yeah. Jeff, you rolled out voice biometrics in the past. Yeah. And was it pretty well received? I mean, what were your learnings from that? My learnings were, I didn't know much about biometrics or voice biometrics, but I rolled it out.

So I learned a lot, and learned a lot of things around different transient characteristics of a voice. And it was effective. Did we get enough uptake on it? No, I don't think you ever get as many people as you want registered, but it did work. And that would have been 2009, maybe 2010, somewhere in that era. So I'm sure the technology's gotten better, but it did work. It just, for whatever reason, people just weren't comfortable, I think, calling into this.

So what we had done is we set up an 800 number, and it was a call center type environment. And it was a way to call in and reset your passwords. So you would say, so-and-so, my voice is my password, which for whatever reason seems to be like the common enrollment phrase for every company. Can't think of anything else. And so you would, you know, you had to enroll with it. So that was the first step, right? Because you had to enroll with it, you immediately lost part of the population.

It wasn't ever going to be 100%. Only some percentage was going to do that. You had to enroll, you had to register, and then from there, you had basically like a phone menu that would let you reset passwords. So it worked. It was okay. I think we struggled with adoption. I don't know if we were just too early, didn't incent users enough. We had multiple ways also to reset passwords. So more people gravitated towards knowledge-based authentication and going through like a web form, basically. Right.

I liked the phone version because it could reset passwords that you couldn't get to, if you couldn't get to the, you know, the internal intranet site that would do the knowledge-based authentication. I mean, our industry is so fraught now with these basic phishing attacks to steal people's credentials or to get logged in as those people. I mean, a recent one in the headlines was, folks, the attacker called the help desk and got a reset.

Now, I would think that voice verification would have been an ideal way to handle password resets. Had they not registered and they wanted to get in, they're saying, I'm calling from very far away and I can't, you know, then there's no way that you reset their credentials, right? Your manager has to speak to us and then call me. I think sometimes help desks, they're trying to meet these metrics, like keep your call short. Yeah. And so I don't want my call to take... Call time, resolution, all that stuff.

It definitely plays into it. But you cannot sacrifice security just because you're trying to get someone off the phone in two minutes and 30 seconds. If it takes two minutes and 35 seconds and it's more secure, you should be doing that. Yeah, or even if it takes an hour. Right. So, Stephanie, one other question that I think comes to mind, especially with biometrics, is privacy. How big of an issue is privacy when it comes to biometrics?

I think a lot of them get stored on the local device, and there's other biometrics that maybe get stored on a server somewhere, and can they be stolen and reused? Yeah, great question. So to kind of answer the last part first, in terms of stolen and reused, that's really where liveness comes in, because even if someone were to steal my biometric, they have to take that biometric and they have to present it to the biometric system.

And so a stolen biometric is not really equal to a stolen password, because to use a stolen password, you just enter it, right? To use a stolen biometric, you have to do all the fun stuff I do in my lab in terms of making masks or printing out gummy fingers, or doing other kinds of attacks like man-in-the-middle attacks, hacking it. So the stolen biometric is one piece of confusion for sure, and we need to do a better job of getting the message out that this is quite different.

But that aside, privacy is still a concern. I think people understand that a lot of data is being collected about them. Obviously their identity information, their personal and biographic information, but as well as their geo information, where they move, where they're browsing, and their search history, and what they bought, right? So I think people sort of feel this kind of, oh my gosh, one more thing they're going to collect about me, is a piece of it.

So we can see the value it provides, that we've talked about, connecting to that specific individual, but we have to get the clear message across. And that's what FIDO, I think, did really well at the beginning, because in FIDO authentication, your biometric is stored locally, you prove yourself to your device, and then the cryptography takes over and proves itself to the relying party. And passkeys continue that, right?
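For readers who want to see the shape of that flow, here is a minimal sketch of the idea: the biometric match happens on the device and only unlocks a device-held private key, and the relying party just verifies a signature over its challenge. The match score, threshold, and key handling below are illustrative assumptions, not FIDO2/WebAuthn specifics, and the example relies on the third-party cryptography package.

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# Enrollment: a key pair is created on the device; only the public key is registered
# with the relying party. No biometric data ever leaves the device.
device_private_key = ec.generate_private_key(ec.SECP256R1())
relying_party_public_key = device_private_key.public_key()

def local_biometric_match() -> bool:
    # Placeholder for the on-device matcher (face/fingerprint comparison plus liveness).
    match_score, threshold = 0.93, 0.80   # made-up values
    return match_score >= threshold

# Authentication: the relying party sends a fresh random challenge.
challenge = os.urandom(32)

if local_biometric_match():               # the biometric only gates use of the key
    assertion = device_private_key.sign(challenge, ec.ECDSA(hashes.SHA256()))
    # The relying party verifies the assertion; this raises InvalidSignature on failure.
    relying_party_public_key.verify(assertion, challenge, ec.ECDSA(hashes.SHA256()))
    print("Authenticated: signature over the challenge verified.")
```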

But there are use cases, identity proofing use cases, that, you know, the authentication doesn't solve, right? And so now, if we want to do more, where something's stored in the cloud by a trusted source, right, that's holding your biometric, you'd like to know a little more, you know, about it. Think mobile driver's licenses that are coming down the pike, right?

And so, you know, I think the key to that is really about best practices in use cases, and really thinking about privacy just in the way we think about privacy for all our other personal data, when we're asking individuals for it. And if we can improve the user experience, just like you said, I liked it because it was so convenient, you know, before, if we can do the same for remote identity proofing, right? I like it because it's so convenient.

Oh, by the way, I don't have to give you 12 other pieces of information, right? Now I can bypass that because I have a more reliable piece of information to make that remote identity proofing decision. You know, there were a couple of product managers from Google presenting this morning at the keynote, and they do a lot of user research, right?

So they have these folks that do the thing, and then they take a survey, and one of the things that they found, one of the metrics that drives them, is that people need to feel the system is as secure or more secure. That's just as important as it actually being more secure, which we know passkeys are over passwords. I know how to solve that. You just put a little image of a lock in the corner. Yeah, that's it. That's it.

But I think on the privacy front, one of the reservations that people have is that, all right, you're looking at my picture, how long is it going to be until some database gets cracked and my picture is all over the internet? It's probably not going to happen, right? Maybe it could happen if someone did a really bad implementation, but how do you get past that? How do we change people's perception of that?

Yeah, I think it's a concern, and I think it goes back to the fundamentals: even if it's stolen, it's not useful, right? I mean, already our face is quite public, right? It's in databases. You can't even step outside. There's a camera somewhere. Exactly. Exactly. And Face ID is not breaking, right? Why is it not breaking? Because it's embedded into a good ecosystem in terms of how it works.

As we roll out additional things, we have to think through these scenarios and ensure it doesn't break just because that information might be public. Now, I mean, there's real legitimate concerns about scraping people's faces into large databases. And these are not the types of systems that we're thinking about when we think about identity proofing. We're talking about identity proofing for people who want to opt into these types

of solutions. But the scraping is a big deal. And I think that does hurt biometrics, because we don't have, at least in the United States, guidance around what is OK to do and what is not OK to do when it comes to face, whether it be large-scale databases or surveillance in certain scenarios. And so I do, as a citizen, I'd like there to be legislation. I know there's a lot of work being done around that, and hopefully that will come to pass.

Now, I think sometimes we think, oh, the privacy folks are the ones with the tinfoil hats, right? But I think it's, I don't know, I think it's very much a legitimate concern that there's kind of this dystopian future where you have no privacy. And just look at the example of the cookies. So we had this law: you can't just put cookies on people's computers, they're going to have to agree. Now it's like every website you go to, it's either allow cookies

or don't use the website. Privacy policies. And it's every site. It's like, OK, am I going to read this 20-page privacy policy for something that takes five minutes, I want to buy a sweater? Do I want to read your 20-page privacy policy of what you're going to do with my data? I think people have a legitimate reason to think, right, this is by design. Nobody

reads that privacy policy, or let's say 99.9% of the people never read it. And it might say, we can give your data away, we can sell your data. Right. And there's guardrails, right? That's what we need, guardrails. And there are some guardrails: GDPR, California, Illinois. There are examples of guardrails. You know, the problem right now is that it's a very fragmented, you know, set of guardrails around that. And I think we can be more specific when it comes to biometrics on what you can

and can't do. But you know, I think fundamentally, you know, that's a piece that, you know, will help with the value proposition. Because in the end, you know, we see the convenience, right? You know, and so being clear to your customers how you're going to use the data, how long you're going to keep the data, who is going to have access to the data, how they can see what data you have, all of these very classic privacy principles, you know, need to be applied

for every biometric system. Can you see bipartisan support for something like this? I think there is. The answer is no. What I'm saying here is, and I want to define bipartisan, what I mean is users and companies, not politics. Because I can't imagine fragmentation of rules,

regulations, laws, whatever it be, is good for anybody. I would rather, as a vendor in this space, if I had a product, know what my rules of engagement are, versus, well, I have to do something different in California versus Illinois versus pick a state or a region or country, whatever it may be. I guess, why aren't we seeing more support for: give us the rules? I know nobody wants more rules, but give us the rules so that we know how we should be creating these processes.

Yeah, I mean, I think they're coming. You know, it's natural that regulation lags technology. Of course. Technology moves very quickly, and it's only moving more quickly, you know, so, but I think they're coming, and I think the industry, you know, for the most

part, is policing themselves or pre-anticipating, you know, what's down the pike. And it's just, I think where the issues come is some of these kind of edge cases, you know, really kind of tarnish the whole industry, and that's the part where, you know, some full guardrails would be very helpful. Can we go behind the scenes of what actually happens with biometrics? When someone says, or let's say, you know, I'm using Face ID to log into my phone,

what is actually happening? A picture's taken of my face, and then it's matched up against something with some algorithm. Can you take us just behind that, you know, maybe let's pick a couple of these cases, face and fingerprint? Yeah. Yeah, so, I mean, what you said is exactly right. I think where it goes a step further is related to liveness. And so, a face, there's features within the face, right, that are captured. Now, we use deep learning or AI as the algorithm that's trained to learn what features differentiate individuals. And then those are the features, you know, that are compared to create a score, you know, to make a decision.

Well, liveness goes a step further and says, okay, now I'm going to take some information from this, you know, face, either, you know, in a video stream, or I may have extra sensors. If you think about Face ID, there's a near-infrared camera, there's structured light shining on the face. It gets a 3D representation, shines a specific pattern, you know, for you. And so it's adding some additional pieces of information to make it more difficult for an attacker to come and, you know, use a photo of you in order to fake the system. And so the systems are getting more complex. There's actually even more steps, right? I talked about quality already, so making sure you get a good image.
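As a rough illustration of the matching step Stephanie describes, where extracted features are compared to produce a score and a threshold turns that score into a decision, here is a minimal sketch. The 4-dimensional embeddings and the 0.8 threshold are invented for the example; a real system would get its embeddings from a trained deep network and tune the threshold against the metrics discussed earlier.

```python
import math

def cosine_similarity(a, b):
    """Similarity between two feature vectors; 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

enrolled_embedding = [0.12, -0.40, 0.88, 0.22]   # reference features captured at enrollment
probe_embedding    = [0.10, -0.35, 0.90, 0.25]   # features extracted from the new capture

THRESHOLD = 0.8   # made-up decision threshold
score = cosine_similarity(enrolled_embedding, probe_embedding)
print(f"score={score:.3f} ->", "match" if score >= THRESHOLD else "no match")
```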

But then there's also how we secure the data. And so, you know, when the data is still local on your device, in a secure part of the phone, that's one thing, but as we move to systems where the reference, that extraction of features, is stored, you know, on a server side, we can actually do some more sophistication, you know, to extract, you know, essentially some key from your biometric that gets stored, that can't be reverse engineered back to your original biometric. So the systems have a lot of sophistication and are continuing to research these different aspects, some of the things we do in the center, but also we're seeing these things in commercial operation.
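One family of techniques behind that "key that can't be reverse engineered" idea is cancelable templates. The sketch below uses a random-projection (BioHashing-style) scheme as an illustrative assumption, not the scheme any particular vendor uses, with invented feature vectors and dimensions; if a protected template leaks, the seed can be re-issued and the old template revoked.

```python
import numpy as np

# A user/application-specific secret seed yields a secret projection matrix.
# The matrix (or its seed) is stored with the account instead of the raw features.
rng = np.random.default_rng(seed=42)
projection = rng.normal(size=(16, 64))

def protect(features: np.ndarray) -> np.ndarray:
    """Map raw biometric features into a revocable, hard-to-invert binary template."""
    return np.sign(projection @ features)

raw_enroll = np.random.default_rng(1).normal(size=64)                     # features at enrollment
raw_probe = raw_enroll + 0.1 * np.random.default_rng(2).normal(size=64)   # noisy re-capture, same person

enrolled_template = protect(raw_enroll)
probe_template = protect(raw_probe)

# Matching happens entirely in the protected domain; a leaked template is useless
# once a new seed is issued.
agreement = float(np.mean(enrolled_template == probe_template))
print(f"fraction of agreeing bits: {agreement:.2f}")
```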

How important are the haves and have-nots when it comes to having technology that can accurately perform these transactions to validate identity? Yeah, I mean, I know biometrics, you know, are more oriented towards the higher-end phones, but I think, you know, cameras are in kind of most phones these days, you know, so at least from that point of view, you know, face is available. Fingerprint sensors can be very, very cheap, you know,

the capacitive-based sensors that are incorporated in phones. So I think you are seeing it rolling out to not just the higher-end phones, you know, but across the ecosystem. The fingerprint one I find interesting, because there's been a few variations of that implementation, especially over the last few years. You've got the sensor itself, and now they're trying to put them underneath the display, you know, with the goal of, I just want a slab of display,

I don't want to have, like, bezels, right? Right. But I think some of those implementations, at least early on, struggled with being able to be either accurate or consistent for the end user experience. Do you have any experience in that area, or am I really nerding out here? No, no, no, let me nerd out now. So absolutely, I mean, fingerprints, there is a, you know, a small percentage, but a percentage of people's fingers that just don't read well,

right? You know, and that can be due to, you know, doing a lot of gardening or just significantly dry fingers. Gardening? Yeah, just like a lot of work with the hands. Oh, they're so worn down, maybe that wears your swirls down. Okay. Right. You know, and so I think when you see, you know, kind of problems around fingerprints, or maybe it's a super cold day, right? You know, and you pull it out and you put your finger on, and you see it doesn't work because your hands are very, very cold.

Because you have gloves on. Right. Exactly. Right. Yeah. What's my thumb here at work? This is where I live, you know, that's a pretty common occurrence. And so, you know, some of that variation in performance is because of that. And of course, you know, sensors get better, right? You know, in terms of being able to adapt to those variations, as well as, you know, providing,

you know, enough biometric information to make a good decision, you know. So, as with all technology, it has improved, you know, but there is still kind of a fundamental group for whom it can be quite difficult to take a fingerprint. That adaptation, I would expect, extends to things like biases and disabilities. How much of a role do those play into capturing a valid biometric credential? Yeah. Choice is good, right? Choice in terms of, you know, of modalities, you know, to ensure,

you know, I mean, I like my choice, as I said earlier. But, you know, moving a little bit towards biases. So there was some fundamental work looking at biometric performance for different demographic groups, different ages, genders, races and ethnicities, skin tones, and kind of a realization that if we just look at a global number, we may be missing, you know, what's going on for groups of people. And what we want for our technology is to work well for everyone, right?

And so by slicing and dicing our data across demographics, we can look at performance and ensure it works well for all. And, you know, there's been significant improvement now that that scrutiny has been applied. So NIST does some very extensive testing around demographics. FIDO is looking at putting together a certification program. So I think for your best solutions, those demographic differentials, you know, are much, much smaller or non-existent than they were, particularly for the genuine user.
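A toy version of that slicing-and-dicing check might look like the following; the attempt records and group labels are fabricated purely to show the shape of the computation, not results from NIST or FIDO testing.

```python
from collections import defaultdict

# (demographic_group, genuine_attempt_was_accepted) -- fabricated records
attempts = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", True),
]

counts = defaultdict(lambda: [0, 0])   # group -> [false_rejects, total_attempts]
for group, accepted in attempts:
    counts[group][0] += 0 if accepted else 1
    counts[group][1] += 1

for group, (false_rejects, total) in counts.items():
    print(f"{group}: false reject rate = {false_rejects / total:.1%}")
# A large gap between groups is the kind of demographic differential worth investigating.
```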

But that doesn't mean you shouldn't look, because, you know, there are lots of systems out there, right? You know, and so that's, you know, part of understanding the technology that you're adopting: the specific technology adopted, not just whether it's a fingerprint or not. Forgive me for this naive question, but how does skin color play into something like a fingerprint or facial recognition?

Yeah, so from the face recognition point of view, you know, I think there was a lot of press around, you know, making cameras that will work well for everyone. They do detect faces, right? They do make auto adjustments, you know, and so in the same way, having a good photo of somebody, you know, it's good when you want to take photos, but also, you know, being able to do a

good biometric match means having a good photo, right, to begin with. And then, if our algorithms are AI-based, based on training data, and our training data isn't representative of, you know, the population, then it isn't going to know what to do, right, when it sees faces it's really never seen before. And so that's about, you know, having a representative training set.

Is that more likely to lead to false negatives or false positives? Because it seems to me like not having that footprint, I guess, could lead to either one, but it's more likely to lead to a false negative, because here's what I think in my head, right? I have no research

to back this up, so I know you don't operate that way, but I do. So if you had a very dark skin tone, and you're relying on infrared kind of like scanning your face and turning it into, like, a calculus problem essentially, if what it's doing is measuring differences, maybe the camera just doesn't work well with very dark skin, for example. Yeah, so it can actually affect both. So in FIDO, what we're

focusing on initially is just how you're sensed. We want the genuine user to have the same experience, you know, as other users, right? And so we're looking at the differentials for the false negative use case, but you shouldn't bury your head in the sand and say, okay, so we only need to worry about false negatives, not false positives. So what are false positives? So false positives would be cases where two different people just happen to match, or you're doing an attack with a spoof.

And so that's the thing we're going to look at as kind of a second thing, right? But you would want to know if the false match rate for one demographic was significantly higher than another, and that can happen if the system isn't trained well. Yeah, one thing I'm wondering is, so I have a beard, for people who can't see me or haven't met me before. And it doesn't seem to matter how long the beard is, right? And I've heard that if people got, like, what do you call it,

rhinoplasty or a facelift, Face ID will still work. So it seems to me it must be implementation-specific: how much difference are you willing to accept from whatever's stored in the database? Yeah, yes, exactly. There's a threshold, and there's a point at which, you know, it won't work, right, when you have alterations, and then you may need to re-enroll. So one last question, so the topic is around deepfakes. How are you thinking about deepfakes,

and how that could affect the biometric space in the future? Yeah, great question. So what deepfakes can do is they can take, say, a still photo of you that I might have as an attacker, and they can animate it, right? You know, that's one of the things that deepfakes can do. And so if my biometric system and the way it operates requires some kind of motion, you know, in the face, then that can be used to attack a system. It can be used in two ways. It can be used, you know,

like, if you think of your Face ID on your phone, right, you can present that deepfake to the phone. Well, we all know that's probably not going to work on Face ID, right? Because Face ID is looking for a three-dimensional thing, and I've now just displayed a 2D deepfake, right? You know, so it probably won't be successful. That doesn't mean it won't be successful in other scenarios, right? But you can also take that deepfake, you know, and bypass the sensor or inject it as a replay attack.

And so, especially for systems that are kind of widespread and that people have easy access to, you know, as an attacker, to, you know, create a virtual camera or some kind of way of bypassing, in those cases we can still use approaches like liveness, what we call challenge response, to probe the system, to see if there's truly a legitimate person there.
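To make the challenge-response idea concrete, here is a toy sketch: the verifier picks an unpredictable prompt at capture time, so a pre-rendered deepfake can't have anticipated it. The prompt list, the label comparison, and the time window are illustrative placeholders, not a real liveness checker, which would analyze the live video or audio stream itself.

```python
import secrets
import time

PROMPTS = ["turn your head left", "turn your head right", "blink twice", "read the number 4721 aloud"]

def issue_challenge():
    """Verifier picks an unpredictable prompt and notes when it was issued."""
    return secrets.choice(PROMPTS), time.monotonic()

def verify_response(prompt, issued_at, observed_action, max_seconds=5.0):
    # Require a matching response within a short window, so a canned recording
    # made in advance is unlikely to satisfy the randomly chosen prompt in time.
    fresh = (time.monotonic() - issued_at) <= max_seconds
    return fresh and observed_action == prompt

prompt, issued = issue_challenge()
print(prompt, "->", verify_response(prompt, issued, observed_action=prompt))
```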

And so if the deepfake creator can't pre-anticipate what challenge is going to be presented, then it makes it that much more difficult for them to mount the attack. So many biometric systems have these kinds of solutions in them that not only do the liveness for things like physical masks, but also for deepfake attacks. I feel like we're in this identity space where the finish line keeps moving, or the good-enough line keeps moving, right? The attackers come up with an attack, the industry comes up with a barrier, the attackers figure out how to get around that.

I kind of think, okay, with deepfakes, you know, that's a big hurdle to cross to get to somebody's account, but it would be worth it for a certain account, I'm sure. But also now AI is coming and getting, you know,

just exponentially growing in terms of its capabilities. So maybe these kinds of attacks become more commonplace, so we are then forced to move that security line a bit further, but it seems to me like it's, you know, it's moving, it's progressing on both sides as we go, and it will continue to do so. Yeah, absolutely, and that's why we want to pre-anticipate, right? You know, the types of attacks that may happen and what the solutions may be, but, you know, absolutely, it is sort of a,

you know, a race. An arms race, right? A race. A face race. That's right. When you say predetermined, or pre-anticipate, what do you mean by that? Yeah, I guess I'm referencing the research center, you know, we're doing research thinking about, you know, what is the next thing that we will need,

right? You know, and so, you know, in terms of, you know, liveness, quality assessment, you know, securing templates with non-invertible templates, right, that can't be reverse engineered, deepfake detection, right? And these are the things that we're doing in the center, you know, to help enable industry, you know, to be in the position to adopt and provide it. There's a whole YouTube channel around CiTeR that does a whole bunch of videos around biometrics.

We'll have a link in our show notes so people can check that out. And for my cheap friends out there, they're free. It's YouTube. Free education. You might have to watch the ads, but probably worth it at that point. Is there a certain type of deepfake that you're especially concerned about? Because I think, for me, voice seems like the one that is easiest for an attacker to execute against. But I'm not sure

that's right, or there's something else that I should be thinking about. No, certainly, I think that's the one that's risen up quickest. Because when we think about it, we think, and I think you referenced this, it's how easy it is to mount the attack, which then will tell us how scalable the attack is, right? Because now, we do have to target somebody, right? So we have to make a decision, right? We have to go find their voice sample, you know, in this case, and we have

to train a system, right? So we are still at, we're at the targeted phase, you know, and so, you know, it does take it kind of one step more. However, the attacks themselves, right, how easy is it to make them? That can vary, right? And so right now, I think why voice has risen to the top is because it is just super easy. There are these tools out there, train it quickly, boom, type in whatever you want it to say. But, you know, the voice community is not standing still,

right? They have been developing algorithms to detect synthesized voices. And so, you know, they're, you know, again, a technology that's getting better and better, right? And that kind of arms race in terms of detection. And so, I do think that it still has a role to play. Interesting. We'll start to wrap things up. If there was, I got a couple of questions here,

and we'll kind of end on a lighter note. If you could not use fingerprints, face, or iris, and let's put voice in there as well, to perform biometric authentication, what would you choose next? Yes. So one biometric technology that has been around actually for a while, but I think is very useful, is vein recognition. And so I've seen both palm vein recognition and finger vein recognition. And you do need specialized technology, you know, so it's not like face, where cameras are

kind of ubiquitous. You do need kind of an extra level of technology. But we've seen vein recognition. Amazon One would be an example that incorporates vein recognition, as well as other companies that have vein recognition. And so, yeah, if I was given a choice, I'd want my vein scanner. Vein recognition. I never even heard of that. Like the Amazon, basically the no-checkout, you know, grab-your-stuff-and-scan-your-palm thing. I know there's wrist and there's also palm vein recognition.

I would think after what we just went through with COVID that people would be very hesitant to slap their hand on a device that a lot of other people are slapping their hand on. And they're non-contact. Yeah. You don't have to touch. Oh, you just put it right over it. Okay, interesting. Okay, I learned something. We were talking before we hit record, and you started to mention something about the dead finger problem. And I said, no, save it. I don't want to hear anything about it.

Tell me about the dead finger problem. Well, it connects everything we talked about. So, like, back to my origin story, you know, in terms of biometrics and thinking about what to research. You know, one of the companies said to me, well, can you solve the dead finger problem? You know, which essentially the idea is that someone's going to cut off someone's finger and use that, you know, on a fingerprint system. Luckily, you know, that doomsday scenario

that person suggested, you know, hasn't come to fruition. Though I do know of some actual use cases where that's happened, but it's probably more likely they'll steal the fingerprint and create a replica, you know, made of some material. And so that's kind of what kicked me into the business. And we've been doing spoofing ever since. So, but not so many cases where we're cutting off fingers to test that. Does it work? Or does it not work? Does it depend on the reader?

Does it? Yeah, a fingerprint can be scanned even if it's dead. And in fact, that is a way to identify someone. And so then the question is, is the solution that rejects those fake, you know, biometrics also rejecting dead ones? And that's use case dependent, you know, whether you think that's a major attack vector for your system, as to whether it should reject it. And I'm the one who brings up the negative stuff. Okay. More, more of a fascination around it. Let's leave it at that. I got

another question I want to ask: is there a way to determine a live finger versus a dead finger? Yeah, you can, just from the finger. Absolutely. There is a way. Measure the pulse or something. Right. Yes. Yes. Well, so when you go to the doctor's office and they shine that, you know, little red light, you know, there's actually a second, near-infrared

light. And that's looking at the blood flow. So there are scanners that, you know, do use multiple wavelengths of light and collect a lot of information, you know, about the finger, you know, on the surface and within. Ultrasound scanners, ultrasound fingerprint scanners, have an array of ultrasound. So there is the potential to legitimately determine whether it's a live or dead finger. So I thought for sure you were going to say, like, smell. I've heard about this. I've

heard about this. So what does it go on, like breath smell or body odor? Or, I mean, what if you had garlic for dinner the night before? I know these are silly questions, but, well, I mean, you know, dogs can trace scents, right? And follow scents. So there isn't silliness there. I would say that, you know, we've done a little bit of a look at that one. That's probably an out-there research problem. So we'll keep working on it and get back to you. All right. Yeah. We'll have

you back when you have that one solved. All right. All right. We'll go ahead and wrap things up. Stephanie, thank you so much for being part of this conversation. Learned a lot. Hopefully you enjoyed your time. You'll come back when we have Smell-O-Vision or something available for us to use. You can find us on the web at idacpodcast.com, Twitter at IDACPodcast, Mastodon at IDACPodcast at infosec.exchange. We'll have links for people to connect with us on

LinkedIn, including Stephanie, Jim, and myself. Also a link to the YouTube channel that I referenced for CiTeR. So you can learn more about biometrics from there. Any final thoughts before we wrap it up? I'm happy to be here and happy to answer questions, and, you know, please reach out to me. Thank you very much. Thanks, Stephanie. Jim, I think that's it for us here. I'm on LinkedIn. You're on LinkedIn. Connect with us and

keep the conversation going. Yep. All right. We'll leave it there. Thanks everyone for listening and we'll talk with everyone in the next one. You've been listening to Identity at the Center. We hope you've enjoyed the show. Make sure to like, rate and review and we'll be back soon. But in the meantime, hit the website at identityatthecenter.com and find us on Twitter at IDACpodcast. See you next time on Identity at the Center.

This transcript was generated by Metacast using AI and may contain inaccuracies.