Social Media Fraud Targets Truth Social: Cyber Security Today Weekend with Netcraft's Robert Duncan - Jan 18, 2025

Jan 18, 2025, 26 min

Episode description

Addressing Social Media Fraud: Insights from Netcraft's Robert Duncan

In this weekend edition of Cybersecurity Today, host Jim Love discusses the growing issue of fraud in the cybersecurity landscape. Jim interviews Robert Duncan, VP of Product Strategy at Netcraft, who sheds light on their research into fraudulent activities on social media platforms, particularly focusing on Truth Social. The conversation delves into the mechanics of conversational scams, the role of crypto in fraud, and the challenges faced by social media platforms in combating these threats. They also discuss the need for better protective measures and the varying approaches to content moderation across different jurisdictions. Tune in for an in-depth look into the pervasive issue of online fraud and the ongoing efforts to fight it.

00:00 Introduction to Cybersecurity Today
00:21 Interview with Robert Duncan from Netcraft
01:12 Understanding Social Media Scams
06:09 The Role of Crypto in Scams
14:25 Challenges and Responsibilities of Social Media Platforms
23:31 Future Research and Conclusion

Transcript

Welcome to Cybersecurity Today, the weekend edition. I'm your host, Jim Love. We've talked about the prevalence of fraud as part of the cybersecurity landscape, and how fraud is many times the size of other cybercrimes like ransomware that seem to get a lot more attention. So when this study crossed my desk, I had to pursue it and find out more about it. Today we have an interview with Robert Duncan. He's the VP of Product Strategy for a company called Netcraft. Welcome, Robert.

Hi, Jim. How are you? Good, good. You're in the UK? That's right. Okay, so tell us a little bit about who Netcraft is. I hadn't heard of you before this report crossed my desk. Sure. So Netcraft, we are a cybersecurity company focused on combating internet cybercrime. We have two main focuses: one is detection, and the second is taking action, so disruption and takedown.

We're a global company and we work with governments and companies across the world, trying to make the world a safer, better place to be online. Great. So you've been doing searches for fraudulent activity on social media platforms, and this led you to think about social media scams. You had a particular focus on Truth Social, but let's back up for a second and talk about the overall fraud you see going on on social media. What is the state of that right now?

Yeah, so it's a pretty interesting field. It's a big space, and there's a lot of variety across different platforms and different mechanisms. But if I can give you a bit of background on how we operate and how we come across this kind of stuff: we've been thinking about scams that happen through conversations for about two years. And they're distinct from lots of other types of cybercrime that we see.

So if you think about phishing: when you see a phishing website, it's fairly clear-cut that something fraudulent is going on if you know where to look. If you look at the URL, you can see, actually, that's not what I expect. You can say, well, this probably isn't my bank if it's banklogin.123.xyz rather than what you're expecting. The difference with these conversational scams is that they really only unfold once you start a conversation.
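The URL check Robert describes can be sketched in a few lines. This is an illustrative toy, not Netcraft's actual detection logic; the bank domains are made up, and real phishing detection looks at far more than the hostname.

```python
from urllib.parse import urlparse

# Domains we would actually expect for "our bank" (hypothetical examples)
EXPECTED_DOMAINS = {"mybank.com", "www.mybank.com"}

def looks_like_phish(url: str) -> bool:
    """Flag a URL whose hostname is not one of the expected domains,
    even if the brand name appears somewhere else in the URL."""
    host = (urlparse(url).hostname or "").lower()
    return host not in EXPECTED_DOMAINS

print(looks_like_phish("https://www.mybank.com/login"))      # False
print(looks_like_phish("https://banklogin.123.xyz/mybank"))  # True
```

The point of the sketch is the asymmetry Robert is drawing: a suspicious URL carries its own evidence, whereas a conversational scam carries none until you engage.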

So you don't necessarily know at the start. Some of these scams can start with very simple messages like "hi," or "is this Steve?", and then you're into a wrong-number scam. They can start in very innocuous ways, and they work quite differently. You can't go out and find a scam like this the same way you could, for example, search the internet for a domain name that looks particularly suspicious.

In this case, with these scams, you really have to interact and you really have to be in the right space. The same kind of scam operates through text messages and through emails. It's really a way of talking to somebody one on one, and criminals will exploit whichever channel they can to get that level of interaction. And this is a big deal. I mean, I was reading your report.

Your report mentions that one in four Americans reported they've lost money to fraud, and that totaled about $2.7 billion. Is that all from social media? I mean, not all of it's necessarily from social media; estimates vary. I think those estimates cover all online scams, many of which start on social media. But I don't want to pick on any individual social media platform too much, because the problem is everywhere.

It's not necessarily the case that one platform is the cesspit and everywhere else is clean. It's really quite a challenging space to operate in. But you've particularly pointed out Truth Social in this one. Well, yeah, in this particular case we started looking at Truth Social because it has a few interesting properties that are a little bit different from other platforms. There were really two main thrusts to what we were looking at.

One was this type of conversational scam, where you're receiving messages from scammers or criminals and the scam evolves from there. The second element is the more traditional phishing- and malware-type cybercrime that's also prevalent on the platform. So we were really taking a look at the platform from the perspective of what happens as a new user. We didn't necessarily go into this looking for scams.

One of the members of our team started investigating the platform and almost immediately was receiving inbound messages that turned out to be scams. In the first hour we got more than 30 messages that were scams. We know that because we continued the conversations, and so we could be confident in saying: this is a scam, we know what happened, we know what the scam was trying to do.

And we can use that. That kind of talks to how our technology works on other platforms, where we interact with these scams at scale. That's a particularly novel thing we saw on Truth Social, and we think it's because the way you start as a new user on the platform is a little bit different. You end up being asked to join groups, and within these groups, when you're added, you can see a list of all the other members.

So that's fertile ground for a criminal to say: here, I've got a big list of users that I know are interested in topic X. And that's a very good source for then starting to send messages to those users, with the aim of trying to scam them. That's a little bit different from platforms like X or others, where the mechanism is quite different. Yeah, they seem to be set up for that.

Also, with the numbers that you've shown, it does seem to be a population that is ripe for the picking for these fraudsters. Did you find anything unique about the audience of Truth Social that might have contributed to this? So one element of this is that many of these scams rely on crypto. Crypto is a really great tool, both for legitimate and for illegitimate use, because once you've made a payment in crypto, there's no going back. It's gone.

And that's a great tool for criminals, because once the payment is with them, they're home and dry; they can run away and launder the proceeds elsewhere. I mean, some conjecture might be that, with the positioning from Trump and others, you might expect a user on Truth Social to be more likely to already have some familiarity with crypto. Maybe it's something that's less scary for them. Again, that's conjecture from our side.

There's no evidence necessarily to suspect that, and certainly it may be true on other platforms too, but that's one element that might be a factor. Yeah, and your report says that, it being Truth Social, people expect to find truth on there, and I think that's a fair expectation. They didn't go to Lies Social, they went to Truth Social. I think a lot of them are going there thinking, I'm going to find out the real story here.

And, by the way, I'm not making fun of people on Truth Social. There's a ripe audience in social media everywhere that is, as P. T. Barnum said, one born every minute. You can't get into victim shaming here, but there are people who are more susceptible to these messages, and they do tend to aggregate in social media. I mean, certainly there are some elements of that, I think.

From our perspective in the research, I don't think there was necessarily anything different about how the scams operated. We've definitely seen these scam paradigms everywhere. So I'm fairly confident that the threat actors working on Truth Social are equally happy to work on other platforms. They're going to where their user base is, effectively. We certainly expect that with cybercrime, it's a business, and the criminals are making decisions based on cost and ROI.

So once they've found a platform that works, where they're able to send messages and get outcomes, it becomes easier to justify more investment into that platform from their perspective. It's very much like a traditional business, certainly from our understanding of how it operates.

And what are the primary ways they're doing this? Cash advances was one you talked about in the report. Yeah, there's a whole gamut of different ways of doing this, ranging from gift cards. A fairly traditional scam involves asking the victim to send gift cards from well-known stores to the criminal, for them to then launder the money out. The second way is crypto payments. A third way is wire transfers.

The way that's done is fairly well structured. There's a kind of escalation from the easiest things to launder to the hardest things to launder, and that shows up in the scams we interact with on other platforms as well.

Thinking about the general picture of conversational scams that we see, this is how it escalates, in that order. When a wire transfer happens, that's fairly high risk for a criminal, whereas gift cards and crypto are very low risk in terms of the consequences of being caught. The risk of the payment being stopped is fairly low; worst case, they just don't get the payment. There's no additional consequence.

And I guess, being social media, the conversational aspect of this is really the root of it. I start talking to you, I get to know you, then we start to introduce the scam slowly. I think of the pig butchering scams. I've been pitched this before as well, where somebody will come back to you and say there are going to be great returns in this, so just invest a little bit in this crypto, and then they'll show you great returns.

I didn't go into it, but I know people would be attracted to that. You put down a relatively small amount, then you see a return, woo, I made this much money, and you invest more and more. This extends to a lot of money. I was talking to someone in the police, and some of these people are taken for substantial amounts. I find it just stunning. Sometimes hundreds of thousands. Some of them lose their entire retirement savings.

Yep. Operating in this way, with these conversations, we tend to only see the first payment request. We don't see the later ones because we're not making the payments. So we don't see payment 2, payment 3, payment 4, payment 5, where you're building to these really big numbers, but you certainly see the start, the seeds of this.

So part of the threat intel we can extract from this is things like these websites, the fake investment platforms being used for these pig butchering scams, which come out of the woodwork through the conversation. In many cases, they can be sites you wouldn't necessarily be able to find if you were starting from scratch. Sometimes they have usernames and passwords that get shared in these conversations that you aren't able to predict.

So if you came across the website as a cybersecurity company, or as a good actor trying to find bad stuff online, sometimes you can't find them, because they're behind passwords and only the victims have access to the site. And that makes it very hard to find these, except by having conversations and being able to run these scams through. That's where there's a pretty interesting use case.

So these sites are hiding behind pretty good security. At least they've got good passwords. Well, some of them yes, some of them no. There's a mix, and we certainly see that mix in other types of cybercrime. Thinking about phishing, you can really go from a threat actor who has no technical experience at all, who has copied or purchased a phishing kit, pushes a button, and they're off. It's that easy.

The downside of that approach is that those of lower technical sophistication are easier to detect, easier to find, easier to decide are bad. That can range all the way to very sophisticated actors who are spear phishing: they're sending a handful of emails to a handful of selected people they've worked very hard to research, and they can apply very sophisticated resources to that. It's the same kind of spectrum here. Are you seeing any real use of deepfakes?

We hear a lot about it. Are you encountering a lot of that in your research? That will vary. Most of the research we're doing is through messages, so we're sending and receiving messages.

So there will be images in there that are either deepfakes or have been modified in some way. There definitely are cases; I've seen, certainly in industry research and media, some pretty sophisticated examples of deepfakes. I'm sure many have heard of the incident in Hong Kong where somebody was tricked into making very large corporate payments through deepfaked video.

That's certainly something that exists, but it's not something we saw in the context of this research on Truth Social. Not to say it doesn't exist, but it's not something we came across. I saw a story yesterday, and sometimes you just have to give yourself a shake and wonder how people believe these, but there was a person who was taken in by deepfakes of Brad Pitt, thinking she had a relationship with him.

I mean, these are lonely people who get taken advantage of. Doesn't social media have a responsibility to do more on this? It's pretty easy to say yes, that there are lots of opportunities to improve. When you get into the detail, it can become a lot blurrier about where the line is between impersonation and parody. And on many of these platforms, the peer-to-peer communication is encrypted.

The platforms themselves don't actually know what's being said between the two parties. So it becomes a much more challenging environment to think about who should do what. It becomes a lot more challenging to say, well, okay, the provider should stop that. It's a lot harder than that. I hear you, but I'm going to challenge you on that. I've got Google email, and it's encrypted from my browser to wherever it goes, and yet Google will warn me,

to say, hey, this person isn't in your network, or, are you sure you really want to send this? This is something these platforms could implement, but they just don't seem to want to. I think there's also a big difference in per-jurisdiction behavior. Certainly if you read the news media, many of these large platforms behave differently in different jurisdictions, and that's in response to government legislation that directs these providers to act in a certain way.

So I certainly think there's precedent for having different or more stringent approaches in different geos. For example, in Australia they're talking about banning under-16s from social media, and that's something many of these platforms, if they want to have Australian business, will be forced to implement. I would expect that would be functionality they only deploy in Australia, because that's where they've been asked to do it.

And certainly the same is true in the UK, the US, and the EU; there are different frameworks in place, and some platforms have very different approaches to these. I'm not the right person to comment on platform policy, but certainly there's a big range in behavior. Yeah. I know Mark Zuckerberg famously dropped moderation in the U.S. because, well, I'm sorry, I'm too cynical about this.

My trust of Mark Zuckerberg's motives is, I must admit, not great. There's actually something interesting in that announcement: there was going to be more focus on fraud. So it's potentially a double-edged sword. There's a difference, I guess, between content moderation on political topics and fraud. Fraud is quite different, and I think there's precedent for it being a carve-out from those kinds of changes in policy. Good to point that out.

Yeah, like I said, I'm quite critical of Zuckerberg. But I noted that although they dropped the moderation in the U.S., Brazil jumped in and said, not a chance. So they're saying, well, we're only going to drop the moderation in the U.S. for now. And I think European regulation tends to be a little more strict than in the U.S. as well.

So they can adapt. I guess if they're not doing this voluntarily, maybe the way to go is regulation, to say you've got to have some sort of protection for people. I get upset about this because, like I said, I hear about these stories, and we do programs with law enforcement as well. When you see somebody who's lost their entire retirement savings, you could say you're silly or you're stupid or whatever, but that's not fair.

These people, some of them, are taken advantage of and lose everything they have. A company may be able to recover from a loss like this, or may be insured, but most of these individual people on social media don't have that protection. It's very important not to victim shame. I think there are lots of circumstances where even experts can be tricked. Certainly in many cases I've seen that be true. Social engineering works because people are people. People are built to trust.

Maybe Truth Social is actually a good example here, because people are thinking, well, I'm looking for truth, and they're approaching it with an open frame of mind. Yeah, maybe. They default to trusting people, and that behavior is unfortunately quite dangerous on the internet. Default trust is probably a natural human disposition, but it's actually quite dangerous on some platforms, in some parts of the internet.

Yep. Your report also pointed out a lot of brand impersonation going on as well. We've all seen these sites that pretend to be a brand and log people in, and what victims probably don't notice is that any password and any email will work; the sites probably don't check the password, but then they try to get your banking information. It's a pretty classic scam. Is this something you're seeing more and more of? Is it growing?

Or is it the same? The Truth Social aspect of this is quite interesting. That type of brand impersonation and phishing, and other types of attacks, is not shrinking; it's not going away. The use here of Truth Social is quite interesting because, in essence, it's being used as a tool to hide the real destination of a link. The use case we've seen involves one threat actor we've been tracking.

Likely French-speaking, probably in France, and they are using Truth Social as a way to disguise the destination of a link. They'll send out a phishing email; the standard security advice is to hover over the link to see where it goes, and in this case you'll see Truth Social in there. You won't see the destination site, because they've used Truth Social to redirect visitors from the link through to the phishing attack.

The ability to hide that redirect through Truth Social is not necessarily unique to the platform; many platforms have these kinds of link shorteners. I think what's different here is that the behavior is quite well adapted to hiding what is happening, based on how it operates. If you're using a link shortener on LinkedIn, for instance, it warns you that you're leaving the platform. Are you saying that Truth Social doesn't do that?

No, I mean, there are many tools that work in the same way. The difference here is that it's fairly obvious and transparent: if you look at that particular threat actor's profile, they've just got a big list of malicious URLs in their profile. And it's a pretty effective way of hiding the destination. In particular, there are some technical details about how that redirect happens that make it quite hard

for the recipients of those emails to work out what's going on. In other cases, some of those redirects can be followed automatically, and you can get to the end site and know where it's going to go. For Truth Social, that's a lot harder, because you need to be using a specific browser configuration to actually follow those links. Most email clients won't be able to follow them.
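To illustrate what "following a redirect automatically" means for a scanner, here is a minimal sketch that resolves two redirect styles a basic client can handle: an HTTP 3xx Location header and an HTML meta-refresh tag. The data is made up; the point is that a redirect driven by client-side script, as described for the Truth Social links, matches neither, so a simple scanner or email client dead-ends without learning the true destination.

```python
import re
from typing import Optional

def redirect_target(status: int, headers: dict, body: str) -> Optional[str]:
    """Resolve the destination of a simple redirect, if any.

    Handles two mechanisms a basic scanner can follow:
      1. HTTP 3xx with a Location header
      2. HTML <meta http-equiv="refresh" content="0;url=...">
    A script-driven redirect matches neither, so this returns None
    and the scanner never sees the true destination.
    """
    if 300 <= status < 400 and "Location" in headers:
        return headers["Location"]
    m = re.search(
        r'<meta[^>]+http-equiv=["\']refresh["\'][^>]+content=["\'][^"\']*url=([^"\']+)',
        body, re.IGNORECASE)
    return m.group(1) if m else None

# A plain 302 redirect is trivially followable:
print(redirect_target(302, {"Location": "https://phish.example/login"}, ""))  # https://phish.example/login
# A page that only redirects via JavaScript yields nothing:
print(redirect_target(200, {}, "<script>window.location='https://phish.example'</script>"))  # None
```

Hiding the hop behind script (or behind a required browser configuration) is exactly what makes automated link-following fail while a real victim's browser sails through.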

So you click on it and you're off to the phishing site, and you didn't really have any expectation that would happen. Is there more that companies could be doing to protect people from brand impersonations? Certainly we would expect platforms like this, that are public-facing and allow user-generated content, to have some mechanism for disrupting fraud.

That can either be through mechanisms where security companies like ours can alert them, or, in other cases, they can be proactive and use tools to detect these types of threats and stop them before a victim ever encounters them.

There definitely are things that can be done, and other platforms I'm aware of do think very carefully about how they handle linking out to external websites: either, like you said, going through a warning page saying, actually, you're leaving the platform, do you know you're doing that, and do you want to? In other cases I'm aware of, platforms also use the same techniques to automatically block bad outbound links.

Wow. So this is a pretty interesting piece of research, and we'll put a link to it in our show notes. What's next for you guys? What are you looking at next? Good question. We've got five or six different topics coming next. We're still thinking about threats just like these: conversational scams, phishing, malware. There's lots coming up. We definitely don't see this stopping.

Despite the good work that companies like ours and others are doing in this space, it's a never-ending problem that's going to be hard to get rid of. Crime exists in the physical world; crime exists in the digital world too. And I think it's going to be a battle that we have to keep fighting, and we're pleased to do that. But it's great to bring attention to it.

Like I said when we started this conversation, we tend to focus on ransomware. We tend to focus on technical threats; those make the news. But fraud is the big deal. Yeah, certainly I remember some stats from the UK that more than half of all crime reported is fraud in some way. And in many cases, unfortunately, the policing funding... I don't know what it's like in the UK; in Canada, it's terrible. There's not enough funding going into this area.

You can hope, you know, but street crime and other things take precedence. And the people who do this work are thought of as administrative, and yet they are anything but; they are active feet on the internet street, trying to prevent real crime. So bringing attention to this is a good thing. Thank you so much for doing that. I'd love to have you back when you get your next report out, so ping me and let me know.

And as I said, I'll put a link to the report in our show notes for everybody. Thanks for coming by. My guest has been Robert Duncan, VP of Product Strategy with Netcraft. Thanks again, Robert. Thanks, Jim. And thank you for tuning in and spending this time with us. We always love to hear your comments. You can reach me at editorial at technewsday.ca. And if you're one of our growing audience on YouTube, you can give us a comment there. I check them regularly and try to respond to everybody. I'm your host, Jim Love. Thanks for listening.
