Sneaky 2FA, a new phishing-as-a-service kit, defeats two-factor authentication. A scammed company is ordered to pay 190,000, even though the email that scammed them came from a legitimate address. And AI-powered romance scams are exploiting deepfake technology. This is Cybersecurity Today. I'm your host, Jim Love. A phishing kit called Sneaky 2FA is exposing critical vulnerabilities in two-factor authentication, or 2FA, defenses, making it a serious threat to Microsoft 365 users.
The adversary-in-the-middle kit doesn't just steal credentials; it captures 2FA codes and session cookies in real time, giving attackers full account access without raising red flags. Victims are lured to fake login pages hosted on compromised WordPress sites. These pages look authentic, often prefilled with email addresses to lower suspicion, and they employ Cloudflare's Turnstile to distinguish humans from bots, complicating analysis by researchers.
The attack kit's code has been linked to W3LL Panel OV6, another sophisticated phishing tool, highlighting the modular, service-driven nature of modern cybercrime. What makes Sneaky 2FA stand out, though, is its seamless operation: from luring users with realistic URLs to leveraging session cookies for immediate authentication bypass, it does it all. For enterprises, this attack underscores the limitations of traditional 2FA.
Security teams should consider upgrading to phishing-resistant authentication methods like hardware security keys or WebAuthn. Monitoring for unusual account behavior, such as logins from unrecognized devices or geographies, can also help detect compromised accounts before further damage occurs. A Western Australian court has ruled that a company must pay for failing to properly verify a payment change, even though it was deceived by hackers.
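For listeners who want a concrete starting point, the kind of monitoring described above can be sketched in a few lines. This is a minimal illustration only: the event fields ("user", "country", "device_id") are hypothetical, and a real deployment would read audit logs from your identity provider.

```python
# Minimal sketch: flag logins whose country or device is new for that user.
# Event field names are illustrative assumptions, not a real IdP log schema.
from collections import defaultdict

def flag_anomalous_logins(events):
    """Return events that introduce an unseen country or device for a user."""
    profiles = defaultdict(lambda: {"countries": set(), "devices": set()})
    flagged = []
    for event in events:
        profile = profiles[event["user"]]
        # Only flag once a baseline exists, so first-ever logins aren't noise.
        new_country = profile["countries"] and event["country"] not in profile["countries"]
        new_device = profile["devices"] and event["device_id"] not in profile["devices"]
        if new_country or new_device:
            flagged.append(event)
        profile["countries"].add(event["country"])
        profile["devices"].add(event["device_id"])
    return flagged

logins = [
    {"user": "alice", "country": "CA", "device_id": "laptop-1"},
    {"user": "alice", "country": "CA", "device_id": "laptop-1"},
    {"user": "alice", "country": "RU", "device_id": "unknown-7"},  # flagged
]
print(flag_anomalous_logins(logins))
```

In practice, a flag like this would feed a review queue or trigger step-up authentication rather than block the login outright.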
And the instructions came from a legitimate email address that hackers had compromised. In 2022, attackers compromised Mobius Group's email system and sent fraudulent payment instructions to Innotech Property Ltd. Innotech attempted to verify the change, but relied on a single phone call, which didn't connect, and on fake documentation the scammers provided from a legitimate company email address. By the time Mobius followed up, most of the 190,000 was already gone.
Judge Gary Massey's ruling is a wake-up call for business. He noted that Innotech's verification process fell short of reasonable due diligence, stating that a failed phone call should have prompted a more robust process. This decision highlights the importance of redundancy in payment verification protocols. False billing scams are surging: Australia reported 40,000 cases in 2023, a stark rise compared to previous years.
And although this happened in Australia, and the majority of our listeners are in Canada and the U.S., courts often look to other jurisdictions when there are no precedents in their own country. And even without a lawsuit, the lesson here for businesses is clear: implement layered authentication for payment changes, require approvals from multiple parties, and document verification steps thoroughly.
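The layered checks just described can be expressed as a simple policy gate. This is a hedged sketch, not a compliance standard: the record fields, the two-approver threshold, and the callback requirement are all illustrative assumptions.

```python
# Minimal sketch: approve a payment-detail change only if a phone callback
# succeeded AND at least two distinct people signed off, with every step
# documented in a verification log. Field names are illustrative.

def payment_change_approved(verification_log, min_approvers=2):
    """Return True only when layered verification requirements are met."""
    callback_ok = any(
        step["type"] == "phone_callback" and step["succeeded"]
        for step in verification_log
    )
    approvers = {
        step["by"]
        for step in verification_log
        if step["type"] == "approval" and step["succeeded"]
    }
    return callback_ok and len(approvers) >= min_approvers

# In the scenario from the story: the call didn't connect, so the
# change should have been blocked regardless of other documentation.
log = [
    {"type": "phone_callback", "by": "ap_clerk", "succeeded": False},
    {"type": "approval", "by": "ap_clerk", "succeeded": True},
]
print(payment_change_approved(log))
```

The point of the log structure is the judge's lesson: every verification step is documented, and a single failed step forces the process to stop rather than fall through.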
Additionally, updating contract terms to include secure payment protocols could help reduce exposure. AI-driven scams are now using cutting-edge tools to deceive victims, and the stakes are high. A French woman recently lost 180,000 in a scam involving deepfake videos and AI-generated voices mimicking actor Brad Pitt. While celebrity impersonations are rare, they highlight how accessible AI has made such sophisticated attacks.
Romance scams preying on the lonely contributed to $1.3 billion in global losses last year, according to the Federal Trade Commission. But most of these scams involve more mundane scenarios: fraudsters posing as relatives in emergencies or professionals in urgent need of financial help. AI tools enable these scammers to create believable interactions, from real-time voice synthesis to highly realistic fake video calls. For law enforcement, this trend raises key challenges.
The decentralized and cross-border nature of these scams complicates enforcement, while the rapid evolution of AI lowers the technical barriers for bad actors. Organizations should focus on educating employees and users about these risks, especially in industries like banking and social media, where trust-based fraud is prevalent. And even though these scams are not classically corporate in nature, compromised individuals who lose all they have can represent a corporate security threat.
And as professionals, we also have an obligation to help inform those most at risk. And for those worried about similar techniques working their way into the corporate world, consider implementing AI detection tools to flag suspicious videos or voices, and emphasize the importance of critical verification steps, even in seemingly urgent situations. The growing use of AI should drive a re-evaluation of fraud detection tools and frameworks to keep pace with these evolving threats.
And that's our show for today. You can reach me with tips, comments, or questions at editorial at technewsday.ca. I'm your host, Jim Love. Thanks for listening.