Unsupervised Learning is a podcast about trends and ideas in cybersecurity, national security, AI, technology and society, and how best to upgrade ourselves to be ready for what's coming. All right, welcome to Unsupervised Learning. This is Daniel, and this is episode 471. Hope your week is going well. Updates on my side: I want to publicly shame myself here. I basically hack and write for most of the day for work, and I have pretty much every single meal at some fast food restaurant, which is just unbelievable for somebody who is supposed to be smart. It is really bad for your health to do this, and it's really pure laziness, and also just intimidation about how to do this cooking thing properly. So I basically go and eat in the car and come back and get right back into working. But I must start cooking at home. Now, I put this in the newsletter, and the newsletter went out before you're hearing this, as you probably know, and I got close to 500 email responses. It was absolutely overwhelming, hundreds of people reaching out. I already bought my pressure cooker, or what is it, an Instant Pot? Already bought that thing, because everyone pointed me to that as being the best way to do things. And I got a specific one, which seemed to be one of the most popular ones. It was also on Wirecutter, I think. But yeah, I'm going to try to do the eggs thing, I'm going to try to do the steak thing, and I really appreciate all of you who responded to help me out. It sounds like a lot of us have been going through this and have figured it out to some degree, so really appreciate all the feedback. If you're into cloud security, you need to be following my buddy Rich Mogull's Cloud Security Lab a Week, or CloudSLAW. Rich is one of the absolute best cloud security people on Earth, and he's putting out really high-quality labs for free, every week. He's also an EMT, and he's a really great guy. He's just fantastic. So I definitely recommend you check him out: CloudSLAW, Rich Mogull. And AI's Ultimate Use Case: Transition From Current to Desired State. I'm going to do a separate video about this, so I'm not going to talk about this blog post here.
Cybersecurity. There's a lot of confusion about what's happening with the Trump administration supposedly standing down U.S. cyber operations against Russia. The stories first say it's government-wide, and then it's like, no, it's not really the whole government, it's only certain departments. These things are happening so fast, and you don't know the quality of the signal we're getting from these different sources. So I tend to wait and see what actually turns out to be the case, because something could be hyper overblown. It's just really hard to track things right now because there's so much chaos going on. I've got a lot to say, though, about the general vibe of the administration seeming to move toward Russia and away from trusting its own intel people. That is a really huge problem for me, which I'm going to talk about more below. VMware is releasing emergency patches for three actively exploited flaws in ESXi, Workstation, and Fusion. A new polyglot malware campaign called Sosano is targeting aviation and satellite companies in the UAE. Hirsch IoT intercoms are letting anyone who can Google the default credentials unlock apartment doors. So somebody could use an app to break into apartments that are protected
by this particular type of IoT device. When IoT was a big thing, when the talk about IoT security was a big thing, when was that? Was that like 2012, 2015? I feel like it was around then. I did a lot of press at that time talking about it, and I was like, I think this is kind of overblown, because what people don't realize is that you have to properly threat model. You have to look at the Venn diagram: one circle is people who want to break into places and steal things and are willing to break the law in order to do it. The other circle is people who don't know how to do it, who are unable to get into the building. It's too difficult. The locks are stopping them, the technology is stopping them, the security is stopping them. It's just a huge problem for them, right? They're not able to do it. Now, to illuminate this more, think about what this particular vulnerability is actually going to do to those two circles in the Venn. I think the answer is that it's not going to add too many people to the set who are actually willing to do the damage, and I don't think it's ultimately going to result in too many more people getting broken into. In other words, the people who want to break into apartment buildings, into people's homes, into people's apartments, I don't think they are limited by not being able to get in. I think locks are too easy to pick and they can get by. Now, the question is, are there cameras that actually record them doing it? And here's a better question: the cameras aren't going away, so isn't the disincentive from the cameras still there, still stopping a person from potentially doing this? So I think people will see something like
this and they're like, oh, that's game over. No, you have to threat model. This is why threat modeling is one of the most important skill sets you could learn for life in general. I'm very thankful for my security education and all the workshops and tabletops and everything I've done with threat modeling, because it's a universal life skill, because a lot of things are emotionally charged. I'm moving to this new city. Oh, it's supposedly a dangerous area. You know, there have been X number of crimes in this area, there are X number of car crashes with this particular type of car. When you actually gather the data, think through how you plan on using this thing, or how you plan on attackers using this thing or whatever, and you start moving through the scenarios that you're worried about, usually you end up realizing the things you are emotionally worried about are not the ones, and oftentimes the things you're not emotionally worried about actually are the ones. So this is a big exercise in the work of Kahneman, where he realized that bias and mental models and perception are actually far more important to how we see the world than actual reality. And that really needs to be applied when you're thinking about things like this in cyber or other types of security. Brute-forcing attackers are targeting thousands of ISP networks in the US and China to deploy stealers that grab crypto wallets and mine Monero on compromised systems. Meta terminated 20 employees for sharing internal info outside the company after recent stories about unannounced
products and meetings. Really cool Wired thing here. It's a Wired video that shows how Tesla vehicles record everything from driver behavior to biometric data, and it asks some pretty smart privacy questions about the whole system. It was a great video. I watched the whole thing, and a lot of it is not alarming. Some of the stuff toward the end, which is really where the meat of this thing is, talks about how there have been some cases of abuse, like recordings from inside of cars and stuff like that. I don't think it's widespread, otherwise we would have heard a lot more about it, but it's something to keep in mind. It's especially something to keep in mind when you're talking about BYD, which is the main Tesla competitor, and they're likely to be here in the US soon, I'm guessing. And it's a pure Chinese company. So if they have the same sort of monitoring capability and infrastructure that Tesla does, but it's a Chinese company, that's a whole different game for me. I've not been super worried about all this surveillance stuff inside of consumer tech, because of how much I trust or don't trust the products, where I have cameras, where I have microphones in my house or in my car, and which companies I'm allowing to actually be part of that network, right? So I trust Apple very highly, not only because I worked there for a few years in the actual security group, but because it's pretty clear how much importance they're putting on privacy and security, and that's confirmed for me personally because I've been on the inside. That's not the case for Google, for me. It's not the case for Amazon, for me. It's not the case for Meta, necessarily, for me. Not as much as Apple. And it's damn sure not the case for some Chinese company like TikTok, or ByteDance, or whoever it is, or BYD when those cars come. So you've got to think about who you're actually sending this stuff to, and what their long-term goals are. National security. China's research output on next-gen chipmaking tech is now double that of the US, which suggests that the bans are actually forcing them to get
smarter and more innovative. Innovative, that's a word; I was trying to say it the British way, and it didn't work. Anyway, we could be forcing innovation from China, where before they were relying on theft and on just kind of buying other people's good stuff. So we might be getting the exact opposite of what we planned on getting. We might be waking them up, which is really interesting to think about in terms of economics and stimulus and response, but also quite concerning. Tulsi Gabbard, the new Director of National Intelligence, is firing more than 100 intelligence officers for sharing explicit messages on an NSA internal platform. But I'm calling shenanigans
on this. A number of my trans friends who know a lot about intelligence and cyber are saying this is 100% an attempt to remove trans people from government, and that the explicit content thing is the cover story for it. It looks like the messages in question were about things like getting bottom surgery and similar topics, kind of like women talking about post-pregnancy body issues or whatever. So it could be considered explicit or personal or gross or whatever in certain lights, but that's all part of being human, right? So the question is, were these the right forums to talk about such things? Was it egregious, or only partially so? I didn't see the messages, and I don't know the forum they were doing this in. I actually did get a reach-out from someone in the IC community, kind of on the inside of this topic, who actually did see all this stuff, and they said it was pretty egregious and pretty bad. And I don't know. First of all, they could be lying. Second of all, I don't think they were, but they could be. It's also a single data point, a single opinion. The point is, there are kind of two obvious ways to look at this. One is that it was really bad and they deserved to be punished: a bunch of trans people ignoring their work and spending their time talking about personal trans issues in an inappropriate forum in an inappropriate way when they should have been working, so they got fired. The other way to think about it is that Tulsi Gabbard is part of an extremely anti-trans administration that is looking for any excuse to fire trans people, even if it harms our ability to do intelligence work, and this is a cover story for doing so. Again, I'm not close to the sources, so I can't operate with any certainty. But given what we know the administration is actually doing against trans people, and how aggressively they're going after this issue, I think it's more likely to be number two for me.
That's just logical. TSMC is planning to invest $100 billion in US chip plants over the next four years, according to President Trump's announcement yesterday, which was a few days ago now. And I think this is great. Get all these companies here, get Apple to spin things up in Arizona. We've got plenty of desert, we've got plenty of wide-open spaces. Let's build some giant solar farms, let's get chip production here, let's get Apple production here, let's get everything over here, because the world is crazy. All right, AI. Anthropic closed a $3.5 billion funding round that values the company at $61.5 billion. Revenue has grown 1,000% year over year. It really feels like they're quietly doing what OpenAI is loudly failing at. And I think this should be really encouraging for companies that enter spaces that seem like they're already won. You wouldn't have wanted to go up against OpenAI in, say, early 2023. They just owned the entire AI world and there was nobody around to compete. And here we are at the beginning of 2025, and we've got a few people competing and we've got open models competing. It's really encouraging, actually. If you're trying to get people excited about being creative and building, it's great to see somebody that was untouchable two years ago be very touchable now. OpenAI released GPT-4.5. Most people are not happy with it. A lot of people are saying it doesn't feel smarter at all, and to me it felt like a pushed PR release, because Sonnet 3.7 came out and other models have come out recently. Yeah, it just didn't go well. They should have paused and waited for a bigger release that was more baked, with more proper marketing around it and everything. The Chinese government is telling its AI experts to avoid US travel because they're worried they might
leak Chinese secrets. Yeah, leaking AI secrets, and China's worried about it. Cool. Roger Penrose explains why Gödel's theorem mathematically proves that artificial intelligence can't truly replicate human consciousness or understanding. I disagree with this, obviously, but it's Roger Penrose. I think he's a great thinker, or at least has been a great thinker, on a lot of different topics. I know there are a few areas, not counting AI, where I disagree with him, but it doesn't matter. He is a top-tier thinker and he disagrees with my position, therefore I am putting it in the newsletter. Marc Benioff says Salesforce won't hire engineers in 2025 because their AI agents are doing the work so effectively. That sounds awesome, and I kind of believe him. However, this is also direct marketing for Agentforce, which is their product, so it's hard to tell if this is marketing or real. A new study shows that large language models are just echoing logical patterns without real understanding, which explains why they struggle with complex reasoning. Again, I want to see the best possible arguments against my position. Technology. Waymo now handles over 200,000 paid robotaxi rides weekly across three cities. That's 20 times growth, or 20x growth, which is how you're supposed to say it, in just two years. BYD and DJI have created a $2,200 vehicle-mounted drone launch platform that lets your BYD vehicle deploy
and retrieve drones while driving at up to 15 mph. This is the kind of cool stuff I'm talking about that BYD is coming out with. They also had a video recently of their electric car doing a turbo boost, like over a pothole or something. Insane stuff. They're doing some pretty innovative things, and I'm worried about US carmakers competing against them, I really am. This is why I kind of like Tesla as a foil against them, right? I'm obviously very unhappy with Elon right now, so that's got me a bit torn. But I do not want American car companies to fall to BYD, because their cars are way cheaper, way cooler, and no one else can keep up with them. It seems like Tesla's got the only chance. All right, Humans. The housing theory of everything argues that our catastrophically restrictive housing policies create cascading problems across society, from inequality to
climate change. Mani Doraiswamy talks about how solopreneurs are becoming real competition to traditional companies thanks to AI coding. Top MBA programs are seeing way fewer grads landing jobs quickly, with Harvard's unemployment rate jumping from 4% to 15% since 2021. Think about what a Harvard MBA with potentially not too much experience in the real world of business is able to do in a world full of very, very smart AI. What exactly are we getting from this Harvard MBA?
I think, and this is kind of a meta-analysis here, that so many things are extremely expensive and elitist, and they are those things because they are supply constrained. You can't get into Harvard, therefore Harvard is worth a lot and a degree from there is worth a lot, right? It's hard to raise $200 million from a VC, therefore any company that makes a product backed by a VC who gave them $200 million, that product must be good. This concept is dying. That's why things like Harvard are going to massively struggle, because the value of what they're actually producing is not going to be as good as if you watched a whole lot of YouTube and read a whole lot of books and did a whole lot of writing, and moved ideas through your mind very quickly, and actually worked with companies and had experience on the ground. The difference between that education and a Harvard education is about to be pretty vast, benefiting the YouTube side, not the Harvard side. And the YouTube one is available to you if you want to study, right? This dynamic is going to change startups, it's going to change education, it's basically changing everything. And it's not just AI, right? This is also a content explosion, a YouTube trend, where all the best content is coming out in blogs and coming out on YouTube. By the way, YouTube is blogging, right? It used to be that there were very few bloggers; now there are very few YouTubers. I mean, a tiny fraction of 1% of people in the United States are actually putting out content on YouTube, just like it was with blogging. But that's where the content is, both in blogging and newsletters and also on YouTube. This is the front edge of things, right? This is where the content is. So just something to
keep in mind. Researchers mapped porn titles from 2008 to 2023 and showed a disturbing migration in themes, from "hot blonde gets you-know-what" to increasingly incestuous and violent concepts like "I can't resist my stepsister." I find this really disturbing, and I would love to get AI to look at it. Actually, I should go find this dataset; the link here is actually the GitHub repo with the data and the data analysis in it. But I imagine you could take all those titles from 2008 to 2023. So what is that, 15 years of title data? You could put that into, like, whatever, Google AI Studio. It would probably be pretty hard to actually do this on a mainstream model, because it's going to be like, what is this? These are all nasty words. But maybe if you told it that you were doing research, maybe that would work, like a prompt injection bypass technique.
But anyway, the point is, you could put this into something like Google AI Studio with Gemini 2.0 Flash, something with a large context, or maybe even o3-mini, o1 Pro, or Sonnet 3.7, one of the smarter ones, and just be like: I have a question. What the hell is happening to my society? Seeing these titles migrate from 2008 to 2023, what is going on? I'd be really excited to see what it actually gives as an answer to that.
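Just to make the idea concrete, here's a minimal sketch of what that kind of analysis could look like, assuming the dataset is a simple CSV of years and titles. The file name, the column names, the sampling size, and the model choice are all placeholder assumptions for illustration, not details from the actual GitHub repo linked in the newsletter.

# A minimal sketch, under the assumptions above: group the titles by year,
# sample a slice of each year, and ask a large-context model to describe
# how the themes shift from 2008 to 2023.

import csv
from collections import defaultdict

from openai import OpenAI  # any chat-capable model SDK would work similarly

# Hypothetical input: titles.csv with "year" and "title" columns.
titles_by_year = defaultdict(list)
with open("titles.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        titles_by_year[row["year"]].append(row["title"])

# Sample a few hundred titles per year so the prompt fits in the context window.
sample = "\n".join(
    f"{year}: " + " | ".join(titles[:200])
    for year, titles in sorted(titles_by_year.items())
)

client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="gpt-4o",  # stand-in; swap in any large-context model
    messages=[
        {"role": "system",
         "content": "You are a researcher studying how the themes in this "
                    "title dataset changed between 2008 and 2023."},
        {"role": "user",
         "content": "Here are sampled titles grouped by year. Which themes "
                    "grow or shrink over time, and what might explain the "
                    "shift?\n\n" + sample},
    ],
)
print(response.choices[0].message.content)

Whether the model actually answers or just refuses because of the content is exactly the open question above; framing it as research in the system prompt is the "maybe that would work" part.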
Well, I say excited; maybe depressed. Yeah, I don't know, it's not quite depressed. It's more like worried, concerned. Anyway, the number of young people not in education, employment or training (NEET) has hit an 11-year high. I didn't use an Oxford comma; I'm canceling everything and getting off the internet. This is highly, highly scary. Yeah, I wonder if I copied something here from a sentence, because this is a British story. Did I copy and paste this NEET part? I don't know why it doesn't have an Oxford comma. It seriously disturbs me. All right. Solarpunk is a subgenre of sci-fi that imagines a sustainable future where humans and nature thrive together using renewable technology. I've been a fan ever since the Chobani commercial, one of my favorite commercials ever. It is a yogurt commercial about solarpunk, and it is fantastic. Go to the newsletter and click on
this link. It is the best. Although, talking to my friend who I'm hoping to have on the podcast soon, he's like, well, you know, Nazis love the solarpunk stuff. And I'm like, why is it that every time I tell you something's cool, you're like, yeah, there's a subgroup of, like, crazy Catholics who are super into this, and they're all very trad-oriented, and they're trying to destroy technology or whatever. And I'm just like, oh my God. Every time I talk to him, I'm like, what is going on? But that should be a cool podcast if it happens within the next couple of weeks. We're going to talk about all the different sub-political movements going on in the Bay Area, where we live, but also in general: post-liberalism, ideas like that. It's going to be an insane episode. And Elon might go on Jon Stewart's show and discuss DOGE. That would be a cool debate slash discussion, hopefully a discussion. Hopefully it's the good Elon who shows up. All right, this is the end of the standard edition of the podcast, which includes just the news items for the week. To get the rest of the episode, which includes my analysis, the discovery section with all of the coolest tools and articles I found this week, the recommendation of the week, and the aphorism of the week,
please consider becoming a member. As a member, you'll get lots of different things, starting with access to our extraordinary community of over a thousand brilliant and kind people in industries like cybersecurity, AI, technology, and the humanities. You also get access to the UL Book Club, dedicated member content and events, and lots, lots more. Plus, you'll get a dedicated podcast feed you can put in your client that gets you the full member edition of this podcast. To become a member and get all of that, just head over to danielmiessler.com/upgrade. That's danielmiessler.com/upgrade. Unsupervised Learning is produced on Hindenburg Pro using an SM7B microphone. A video version of the podcast is available on the Unsupervised Learning YouTube channel, and the text version with full links and notes is available at danielmiessler.com/newsletter. We'll see you next time.