Scammers exploit DeepSeek hype with fake websites and crypto schemes. A researcher jailbreaks OpenAI's o3-mini model, bypassing safety protections. And a woman buys an iPhone using a stolen identity in a London, Ontario Apple store. The rapid rise of DeepSeek, a new AI model gaining global attention, has attracted cybercriminals eager to cash in on all the hype.
According to cybersecurity researchers, scammers are using fake websites, counterfeit content, cryptocurrency tokens, and malware-laced downloads to exploit public interest in the model. One of the most alarming tactics involves fraudulent websites impersonating DeepSeek's official platform. These sites trick users into downloading malicious software disguised as the DeepSeek AI model. Security firm ESET has identified this malware as Win32/Packed.NSIS.A, which is digitally signed under the name K.MY TRADING TRANSPORT COMPANY LIMITED. Notably, the counterfeit site uses a "Download Now" button, while DeepSeek's legitimate site features a "Start Now" button. Meanwhile, scammers have also launched fake DeepSeek-branded cryptocurrency tokens across multiple blockchain networks, some already reaching market caps in the millions. DeepSeek has explicitly stated it has not issued any cryptocurrency, making these tokens a clear scam.
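One practical habit worth mentioning here, and this is a general illustration rather than anything specific to DeepSeek: before running any downloaded installer, compare its cryptographic hash to the one the vendor publishes on its official site. Here's a minimal Python sketch; the file name and expected hash below are placeholders, not real DeepSeek values.

```python
import hashlib

# Placeholder values: replace with the actual installer path and the
# SHA-256 hash published on the vendor's official site.
EXPECTED_SHA256 = "replace-with-the-published-sha256-value"
INSTALLER_PATH = "deepseek-installer.exe"

def sha256_of(path: str) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

digest = sha256_of(INSTALLER_PATH)
if digest != EXPECTED_SHA256:
    raise SystemExit(f"Hash mismatch ({digest}); do not run this file.")
print("Hash matches the published value.")
```

It's a small step, but it catches exactly the kind of swapped-out installer these fake sites are serving.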
Beyond these fraud schemes, DeepSeek itself has faced security challenges. We've done some articles on this. A recent large-scale cyber attack forced the company to suspend new user sign-ups temporarily, and researchers have uncovered vulnerabilities in DeepSeek's AI models that could allow attackers to bypass security measures and generate harmful content. DeepSeek has some glaring security flaws. It's been hacked a number of times.
We have this on good authority. But in fairness, from those same people we're hearing that DeepSeek responds quickly when issues are identified, though there is still, to put it kindly, room for growth in its overall cybersecurity maturity. For users and businesses interested in DeepSeek, the risks are obvious.
Exercise caution when dealing with any online platform claiming to offer downloads or investments related to this AI model. We should remember that this model exists in a totally different jurisdiction with totally different laws. We have no reason to believe there's anything malicious in the actual site, but before you put any corporate information into a SaaS site in another jurisdiction, you might want to ask yourself about using one of the local models that have been established, or setting up your own model. It is open source. And our colleagues and users need to be told to avoid any DeepSeek-branded cryptocurrency offering in particular, as the company has denied any involvement in such projects.
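For the technically inclined, here's a minimal sketch of what running a distilled DeepSeek model locally might look like, assuming you have the Hugging Face transformers library and PyTorch installed. The model name and settings are illustrative, not a recommendation; pick a variant that fits your hardware. The point is that nothing leaves your machine.

```python
# A minimal local-inference sketch using Hugging Face transformers.
# The model below is a small distilled DeepSeek variant; larger ones
# need correspondingly more memory.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B",
)

prompt = "Summarize the risks of downloading software from unofficial sites."
result = generator(prompt, max_new_tokens=200)
print(result[0]["generated_text"])
```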
And just as another aside, I went to the app store to check up on the DeepSeek app, and if they were really interested in preventing fraud, they would have some labeling by now that indicates which is the official branded app. Every popular brand or app has dozens of lookalikes generated; sometimes the real brand is actually pushed down the list. One great example of a company that is at least trying to get past this is OpenAI. Right on their app, because everybody's using their logo, which again these stores should be doing more to monitor, the OpenAI listing says clearly that this is the official app. Is this so hard? And speaking of OpenAI: their latest AI model, o3-mini, was designed with enhanced security measures to prevent misuse, but it didn't take long for researchers to break through.
Just days after its release, cybersecurity expert Eran Shimony successfully bypassed OpenAI's safeguards, demonstrating that even the most advanced AI safety measures remain vulnerable to exploitation. The o3 and o3-mini models, introduced on December 20th, featured a new security approach called deliberative alignment, intended to make AI systems better at reasoning through safety concerns and resisting manipulation. OpenAI touted this as a breakthrough in making AI models more resistant to harmful requests. However, Shimony, a Principal Vulnerability Researcher at CyberArk, managed to craft prompts that tricked o3-mini into providing instructions on exploiting lsass.exe, a critical Windows security process commonly targeted in credential-theft attacks. The incident highlights the ongoing challenge of securing AI models against sophisticated prompt engineering techniques.
While OpenAI's new safeguards mark progress, the ability to jailbreak the system so soon after launch raises questions about how effective these defenses really are. It also underscores the evolving arms race between AI developers trying to enforce safety measures and researchers or malicious actors finding ways to circumvent them. Frankly, I think we'd all rather they were found by the researchers first.
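For developers building on these models, one hedge against jailbreaks is defense in depth: don't rely on the model's built-in alignment alone. Here's a minimal sketch, and to be clear, this is an independent output-screening layer using OpenAI's moderation endpoint, not OpenAI's deliberative alignment itself. It assumes the openai Python package is installed and an API key is set in the OPENAI_API_KEY environment variable.

```python
# A minimal defense-in-depth sketch: screen model output with a separate
# moderation check before showing it to users. This is an illustrative
# pattern, not OpenAI's internal safety mechanism.
from openai import OpenAI

client = OpenAI()

def safe_reply(prompt: str) -> str:
    # Ask the model for a completion.
    chat = client.chat.completions.create(
        model="o3-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    answer = chat.choices[0].message.content

    # Independently screen the output before returning it.
    mod = client.moderations.create(
        model="omni-moderation-latest",
        input=answer,
    )
    if mod.results[0].flagged:
        return "Response withheld: flagged by the moderation check."
    return answer

print(safe_reply("Explain what lsass.exe does in Windows."))
```

No layer like this is jailbreak-proof either, but stacking independent checks raises the cost of getting harmful output all the way through.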
OpenAI has not yet publicly addressed the jailbreak, but the discovery serves as a reminder that AI security remains in its infancy to some extent, or is at least a moving target. As models become more powerful, ensuring they cannot be manipulated for malicious purposes will require continuous refinement and rapid response to emerging threats. Thanks to the researchers at ESET for tipping us off to this story.
I have to say I've seen a lot, but I had a real problem figuring out how this story happened. And it's a simple story. I glanced at it and went, wow, I've heard of many different frauds, but this one was new. And I searched and haven't found another story quite like it, although, in fairness, I just might've missed them. A woman is now wanted by police after allegedly purchasing an iPhone using another person's identity at the Masonville Apple store in London, Ontario.
The fraudulent transaction took place on January 22nd, and local authorities are asking for the public's help in identifying the suspect. They have surveillance footage showing images of the woman, but police have not disclosed how she obtained the victim's personal information, what payment method was used, or how it got past whatever security measures Apple should have in place.
While Apple stores require ID verification for in-store pickups and purchases linked to accounts, fraudsters clearly found a way to bypass these protections. And the person was clever enough to do that, but not clever enough to realize she was being recorded on camera. That's our show for today. We're continuing to work with law enforcement to get some shows focused on the growth in fraud. We'll keep you posted. In the meantime, if someone knows how this story happened, let me know at editorial@technewsday.ca. All tips are confidential and all information will be used responsibly. I'm your host, Jim Love. Thanks for listening.