Whether you're starting or scaling your company's security program, demonstrating top-notch security practices and establishing trust is more important than ever. Vanta automates compliance for SOC 2, ISO 27001, and more, saving you time and money while helping you build customer trust. Plus, you can streamline security reviews by automating questionnaires and demonstrating your security posture with a customer-facing trust center, all powered by advanced AI. Over 7,000 global companies like Atlassian, Flo Health, and Quora use Vanta to manage risk and prove security in real time. Get $1,000 off Vanta when you go to vanta.com/unsupervised. That's vanta.com/unsupervised for $1,000 off.

Welcome to Unsupervised Learning, a security, AI, and meaning-focused podcast that looks at how best to thrive as humans in a post-AI world. It combines original ideas, analysis, and mental models to bring not just the news,
but why it matters and how to respond.

All right, welcome to Unsupervised Learning. This is Daniel Miessler. Okay, pretty much heads down on doing talks and courses right now, plus a bunch of essays and a bunch of video content. It feels like I've got a lot of ideas, but it feels bad to be behind. Matt Williams put out a quality introduction to Fabric on his YouTube channel, so that was cool. Really well done video. The Augmented course got updated; essentially we're expanding it to four-plus hours where it was only three hours before, so it's a lot more content. Got a whole section on augmenting AI with personal context and building your own work and life workflows, which is going to be super cool. We're actually going to do it live with one or two people in the class as well, so it's going to be kind of hands-on. We've got a full section on Obsidian too, and yeah, just a whole bunch of Fabric uses, lots and lots of examples. And I really like the cohesive way it pulls together the philosophical and the technical. It's very practical, like, okay, you actually do this, but it's framed in this philosophical way, or at least that's what I'm trying to pull off.
Okay, so I think I cracked Trump's popularity. I think unless the DNC figures this out, it actually doesn't matter who they run; they have to figure this out. So I've got a post on that. Not going to go into it here because that's politics.

Stories. There's a new zero-day in OpenSSH that allows remote code execution. It's a little bit convoluted to attack right now, and I think people are figuring that out. One thing to keep in mind, though, is that it always gets easier to attack these things; it never gets harder, unless people patch, of course. So even if it's a little convoluted now, you still really need to take a look at what you have exposed and whether or not your particular SSH stack is vulnerable.
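If you want a quick first pass at that, here's a minimal sketch that grabs the SSH version banner from your internet-facing hosts so you can compare them against the advisory. The hostnames are placeholders for your own inventory:

```python
import socket

# Placeholder inventory; swap in your own internet-facing hosts.
HOSTS = ["bastion.example.com", "git.example.com"]

def grab_ssh_banner(host: str, port: int = 22, timeout: float = 5.0) -> str:
    """Return the version banner an SSH server sends on connect."""
    with socket.create_connection((host, port), timeout=timeout) as sock:
        # SSH servers send their identification string first
        # (e.g. 'SSH-2.0-OpenSSH_9.6'), so a single recv is enough.
        return sock.recv(256).decode(errors="replace").strip()

if __name__ == "__main__":
    for host in HOSTS:
        try:
            print(f"{host}: {grab_ssh_banner(host)}")
        except OSError as exc:
            print(f"{host}: unreachable ({exc})")
```

Keep in mind the banner only tells you the version string; whether that version is actually vulnerable is a question for the advisory itself, since distros backport fixes without bumping version numbers.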
There's a full CVSS 10.0 critical vulnerability in Juniper Networks routers that basically allows you to bypass authentication and take full control. Snowflake had a breach, as everyone knows, and it's expanding, now with over 165 victims, including Ticketek and Advance Auto Parts. And some folks from ShinyHunters are saying that they accessed Snowflake via third-party contractors. As part of that Snowflake incident, Santander's US branch is notifying over 12,000 people that their personal info was stolen. So this is one of those things that's just, like, contagious: because of the third-party nature of it, it just keeps spreading. And yeah, like we said, over 165 victims currently. RedJuliett, a Chinese state-sponsored group, has been exploiting network edge devices to target Taiwanese government, academic, technology, and diplomatic organizations.
Thanks to Tines for sponsoring. And if everyone remembers that orange Rabbit R1 AI device: well, it was possible to extract every response that ever came back from them. Yeah, basically nightmare fuel; anything you got back from your personal AI device, visible to whoever. I think this is exactly what most security experts predicted with regard to AI and security, specifically that when startups do security, it's usually really, really bad. Why? One, they don't have the expertise; two, they don't have the resources; three, they don't have the time. And they're already facing existential crises, like, every day, and security usually isn't one of them. Another way to say that is startups generally run with scissors, and AI startups run extra fast with extra scissors. Like I've been saying, if you think this is bad, wait. Wait until it's actual AIs that are getting compromised, where people have uploaded their traumas and their journals and their personal conversations, just everything, their most intimate details. When those startups start getting breached, it's going to be way worse. And this is the attack surface map that I put together a while back,
and I think it's still pretty useful. The Russian hacking group APT29, also known as Cozy Bear, breached TeamViewer's corporate IT environment. And this is another one. So I've migrated over the years to a very simple stance on security tooling, or really any core tooling: use the official offerings from big companies whenever possible. That's because they have giant security teams, they have giant security budgets, and they have a lot to lose in terms of PR and market share. So basically, I only want to trust my data to companies that have both the incentive and the resources to protect that data, and those tend to be the big players: Microsoft, Google, Apple, whatever. Chinese hackers are using ransomware as a cover for cyber espionage. Perplexity AI is under fire; a lot of people are really upset with them because they're essentially scraping and crawling and basically feeding their AI
with tactics that nobody really likes, and it's kind of turning people off to it. Metaculus is launching a series of quarterly tournaments to benchmark AI forecasting against human forecasting on real-world questions. I am really obsessed with rigorous prediction. There are groups, Metaculus being one, where people make specific predictions, and I learned about this from the book Superforecasting. This is very similar to superforecasting, except now it's AI players competing in addition to the human players. So I can't wait to watch this; it's really, really exciting to me. I'm actually going to build myself a little daily intelligence brief product using Substrate and a bunch of the AI stuff that I'm building. I'm essentially going to be capturing a whole bunch of these superforecasters, combined with a whole bunch of point sources, OSINT people, national security people, financial information people, capturing all their point predictions, capturing all the experts and what they're predicting, and then having my AI basically collect all that together, turn it into stories and narratives, and most importantly,
say, based on these experts, what are the most likely outcomes over the next six months, 18 months, three years, whatever. It won't be perfect, obviously. First of all, AI is good at, you know, building narratives where they don't exist, so there are all sorts of things I need to be careful of. But with lots of really, really good input from these different sources, I think there's a lot of potential here. And I'll have the history of these things, so as the AI gets better, or as I write a better prompt or set of prompts for the AI pipeline, all those results will get better. Either way, I'm going to have the results of the point predictions and the expert predictions. They'll all be stored, so I can always go back retroactively and build a better product.
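To make that concrete, here's a minimal sketch of the capture-and-store side of a pipeline like this. All the names and fields here are hypothetical, but the key idea is the one above: store every raw prediction with its source and date so the whole history can be re-summarized later with better prompts or models:

```python
import json
from dataclasses import dataclass, asdict
from datetime import date

# Hypothetical shape for one captured prediction from a point source.
@dataclass
class Prediction:
    source: str         # forecaster handle, feed, or org
    question: str       # what's being predicted
    outcome: str        # the predicted outcome
    probability: float  # the source's stated confidence, 0..1
    horizon: str        # "6 months", "18 months", "3 years", etc.
    captured_on: str    # capture date, so history can be re-scored later

def build_brief_prompt(predictions: list[Prediction]) -> str:
    """Turn raw predictions into a prompt asking a model for narratives
    plus the most likely outcomes per time horizon."""
    lines = [json.dumps(asdict(p)) for p in predictions]
    return (
        "Given these expert predictions, group them into narratives and "
        "state the most likely outcomes over the next 6 months, 18 months, "
        "and 3 years, citing which sources drive each conclusion:\n"
        + "\n".join(lines)
    )

preds = [
    Prediction("metaculus:example-question", "Does X happen by 2026?",
               "Yes", 0.62, "18 months", str(date.today())),
]

# Append-only store: never throw the raw predictions away.
with open("predictions.jsonl", "a") as f:
    for p in preds:
        f.write(json.dumps(asdict(p)) + "\n")

print(build_brief_prompt(preds))
```

The synthesis step stays swappable precisely because the raw inputs are preserved; a better model or prompt can always be re-run over the full history.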
And people are talking about how to run billion-parameter-scale LLMs on 13 watts of power, which is 50 times more efficient. This is what I call slack in the rope, and what Leopold Aschenbrenner calls unhobbling. It's why I think we're at like 1% of where we're going, and that might be way too large; it might be like 0.001%, who knows. But to me the game is scale times algorithms times tricks: improve scale, improve the algorithms, and find tricks that magnify both. Tricks are finding slack in the rope, which can massively improve the algorithms or the advantages from scale, so the first two get magnified by the third. Leopold is basically calling that removing the hobbling. And by the way, Situational Awareness by Leopold is like the best discussion of this particular topic, of why you should believe we're going to scale at a certain pace.
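If you wanted to write that mental model down, it would be something like this (my framing, not anything formal):

$$\text{Capability} \approx \text{Scale} \times \text{Algorithms} \times \text{Tricks}$$

where scale is compute and data, algorithms are better training and inference methods, and tricks are the unhobbling wins that multiply the other two.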
Businesses are desperate for AI guidance, and big consulting firms are stepping in to help. McKinsey says generative AI will be 40% of its business this year. In 2024, 40% of McKinsey's business is AI, and this basically started, like, three days ago; okay, 18 months ago. It's a blink of an eye, and it's almost half of their business. And so here's a question: how much of their business is crypto-related? If you're trying to compare, like, oh, they're both hype: huge difference. Alibaba's Qwen models take the top three spots on Hugging Face, and a lot of US competitors are lagging behind. This new leaderboard is testing models on tasks like solving 100-word murder mysteries and high school math equations. I love how much more practical and real, and how much less gameable or hackable or cheatable, these benchmarks are. And I don't like the fact that these Chinese models are doing so well; I think it's disturbing. AI and drone tech are two places we absolutely need to beat China. People in high-income democracies are increasingly dissatisfied with how democracy is working. Since 2021, satisfaction has dropped significantly in countries like Canada, Germany, Greece, South Korea, the UK, and the US. And this is fine. A study showed that loneliness in midlife is linked to believing in conspiracy theories. And if I design an education curriculum, one of the main themes will be: hard work leads you to an easy life, laziness leads you to a hard life. That, and the concept of resilience. And honestly, I would focus a lot on the Stoics. But let me just pull it up. Yeah, I love this graphic. Absolutely love this graphic. It's just great. I'm gonna zoom in. Look at this: make hard decisions. Really, this kind of means discipline, right?
If you climb this mountain, you're doing self-discipline, and you get to an easy life. Easy decisions, like, okay, we're watching Netflix, we're doing cannabis, and boom, you slide down here, and now you're so far away from an easy life. Now you have a hard life. Really powerful. Okay, Discovery. Project Naptime, Google's new AI framework for vulnerability research, lets humans take regular naps while it mimics human security researchers. So it's just going to go off and do its thing; the human can go away and it's working on its own. These frameworks just keep getting better. Remember when we saw Will Smith eating the spaghetti and it was all messed up, his mouth was, like, giant? The same thing is going to happen with hacking frameworks: they'll go from comical to convincing. But there's going to be a place at the top.
Top 5%, top 10%, top 1%; it depends where you cut it. Either way, it's a small percentage, but still quite a bit of room, that basically only the really, really advanced human testers can handle, right? Say you take the top 1% of pen testers and bug bounty people, or even the 1% of the 1%, right? Which is still a lot of people; keep in mind, 1% of 1% is still a lot of people in a very large space. And bug bounty is pretty small, but pen testing is much, much larger. Either way, let's just call them manual testers. The 1% of the 1% of manual testers are doing things that automation can't really do, and that most manual testers can't really do, and it's going to take a very long time to replicate what they do. I don't know how long; maybe it's going to take full AGI, possibly ASI, and a whole lot more tricks, or unhobbling, in terms of the tool sets. But the other 90 to 95%, or 99%, or whatever it is, that's work an average manual tester is doing, and these frameworks will be able to copy that very soon, I would say in the next couple of years, maybe even next year; it's just kind of spinning up. So imagine manual testing being massively attacked. But does that mean these frameworks can do everything a really advanced attacker can do? No.
And that won't happen for quite some time. The final thing I'll say on this is that these frameworks will be used by all attackers and defenders, because you'll have to. And the window between new vulnerabilities and either exploitation or mitigation will shorten dramatically. So basically, when everyone's running these tools and they're constantly going, the moment a new name is published, it's instantly going to go find all the subdomains, it's instantly going to go find all the hosts, it's going to look at the hosts, it's going to fingerprint them. And the defender needs to be doing that, because the attacker is going to be doing that, right? And if there's something vulnerable, say an open Postgres or whatever, and there's data in there, well, that's just going to kick off an agent framework. It's going to download the stuff, it's going to parse the stuff, it's going to turn it into a ransom email, it's going to find the people it should send that ransom email to. And all of this is just going to be automated with AI. So the defenders have to be doing the exact same thing, so they can block it and get there beforehand.
And importantly, when a new vuln pops up, or a new attack surface pops up, the time between it becoming available and either defense moving on it or attackers moving on it is going to become, you know, minutes or seconds instead of hours or days or weeks or years.
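To give a feel for how mechanical the front of that loop already is, here's a minimal sketch of the discover-resolve-fingerprint pass. The domain and wordlist are placeholders; real frameworks pull candidates from certificate transparency logs, passive DNS, and so on, and wrap all of this in agents, but the shape is the same:

```python
import socket
import concurrent.futures
import urllib.request

# Placeholder target and wordlist; a real pipeline would use much
# richer sources of candidate names.
TARGET = "example.com"
WORDS = ["www", "mail", "vpn", "dev", "staging", "api"]

def resolve(name: str) -> str | None:
    """Return the IP for a candidate subdomain, or None if it doesn't exist."""
    try:
        return socket.gethostbyname(name)
    except socket.gaierror:
        return None

def fingerprint(name: str) -> str:
    """Grab the HTTP Server header as a crude first-pass fingerprint."""
    try:
        with urllib.request.urlopen(f"http://{name}", timeout=5) as resp:
            return resp.headers.get("Server", "unknown")
    except OSError:
        return "no-http"

if __name__ == "__main__":
    candidates = [f"{w}.{TARGET}" for w in WORDS]
    with concurrent.futures.ThreadPoolExecutor(max_workers=8) as pool:
        for name, ip in zip(candidates, pool.map(resolve, candidates)):
            if ip:
                print(f"{name} -> {ip} ({fingerprint(name)})")
```

Everything downstream of this, the triage, the exploitation, the ransom email, is what the agent frameworks are racing to automate; this part is already trivially scriptable.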
Extending Burp Suite for Fun and Profit, a guide by Federico Dotta. ElevenLabs text-to-audio: they've launched a new iOS app that sounds really good. I mean, it sounds exactly like real people; I can't tell the difference. Claude Projects: a new feature in Claude that's Anthropic's answer to OpenAI's Assistants. Dappier: a new platform where publishers set a price for using their content in model training, kind of like selling your medical data or something. A Better Paradise: Absurd Ventures' new podcast looks to elevate a fictional episodic series with a billionaire leading the world towards a digital dystopia. I actually want to go listen to
this. And Recommendation of the Week: as soon as you get a chance, go for a ride in a Waymo in San Francisco. It's open to everyone now; you basically just go get the app and you pay for it or whatever. It used to be a closed alpha or beta, but it is a remarkable experience. And what I want you to do when you're in there is watch the screen in the vehicle and look at all the dozens or hundreds of things that it's tracking. You will see the dog across the street. You will see the bicyclist; multiple bicyclists, moving in different directions. You will see people on the side of the road. You will see when they cross over into the street. And what you realize is, that's a lot of stuff to be tracking all at once. And then you realize how distractible you are as a human, how distractible most drivers are as humans. You realize the statistics of how many bicyclists get hit every single year, either injured or sometimes killed. And the reason isn't, like, evil drivers; the reason is that humans are bad drivers. There's going to come a point in the future where it's like: you mean you really just had people manually controlling these cars right next to pedestrians and right next to bicyclists?
Like, how were they watching everything? Well, the idea is the human driver would look forward and just watch everything. It's like, yeah, but they can't see behind them. Well, you just look behind you. That's all you do, you just look behind you. Well, yeah, but then you're not looking forward. Well, yeah, but when you need to look forward, you just turn around again. Then you look forward, and you can look side to side. It worked. It worked for a while. It's like explaining that to somebody who's been driven around in automated vehicles that watch everything all the time and never blink, never get tired, never get sleepy, never check text messages. They just watch everything all the time and can instantly swerve the car, stop the car, do whatever. If someone does something stupid on a bike, they hit a pothole, they fall in the road in front of you: what are the chances you're just going to miss that because it's dark, or because you're tired, or because you've been working three jobs and you're falling asleep, or whatever the reason? So think about that when you're looking at the screen in a Waymo.

And the Aphorism of the Week: every event has two handles, one by which it can be carried and one by which it can't. Every event has two handles, one by which it can be carried and one by which it can't. Epictetus.
Unsupervised Learning is produced and edited by Daniel Miessler on a Neumann U87 Ai microphone using Hindenburg. Intro and outro music is by Zomby with the Y, and to get the text and links from this episode, sign up for the newsletter version of the show at danielmiessler.com/newsletter. We'll see you next time.