Whether you're starting or scaling your company's security program, demonstrating top-notch security practices and establishing trust is more important than ever. Vanta automates compliance for SOC 2, ISO 27001, and more, saving you time and money while helping you build customer trust. Plus, you can streamline security reviews by automating questionnaires and demonstrating your security posture with a customer-facing trust center, all powered by advanced AI. Over 7,000 global companies like Atlassian, Flow Health, and Quora use Vanta to manage risk and prove security in real time. Get $1,000 off Vanta when you go to vanta.com/unsupervised. That's vanta.com/unsupervised for $1,000 off. Welcome to Unsupervised Learning, a security, AI, and meaning-focused podcast that looks at how best to thrive as humans in a post-AI world. It combines original ideas, analysis, and mental models to bring not just the news,
but why it matters and how to respond. All right, welcome to Unsupervised Learning. This is Daniel Miessler. Okay, Leigh Honeywell had probably the best or coolest little quote here about the CrowdStrike incident: "Any sufficiently bad software update is indistinguishable from a cyberattack." Love that. Had a wonderful couple of days celebrating my buddy Jason's birthday. It's actually coming up on the 27th, and mine is on the 26th, so it's a little bit in the future. But we
celebrated last weekend and it was super fun. Did a presentation for a UN group on the future of AI and employability, and that should be coming out on YouTube soon, so look for that. Doing another UL dinner in Vegas; this is going to be cool. Mad props to all the people who had to hustle and grind this weekend as a result of Blue Friday, and let's get into it. I've been mostly heads-down on the class, so I haven't written any new essays. Security: the CrowdStrike outage. I think we all know the deal at this point. Banks, airlines, hospitals, media companies. It was probably the largest IT outage ever, though that's up for debate and people are probably still talking about it. So I'm trying to come up with the lessons learned here, and perhaps the biggest one is around PR. At first the CEO came out and basically said something like, don't worry, this isn't a security problem, and that's not the way to go when the internet's basically down as a result of you. So I don't think that was the right stance to take. He did much better later on; I think his subsequent posts were much better, and where he eventually landed was great. But I like what my buddy Chris Hoff said about it, which was: this was not a security attack against CrowdStrike or its customers, but an outage caused by a bad software update. He was just proposing this as a possible alternative to what was actually said.
I like this take by Chris, and Chris has been a CISO through some pretty rough times, so he knows the deal. Another thought I had is that this would probably be much less likely to happen if, say, everything were Microsoft, right? If the EDR were just part of Microsoft. Kind of similar to the way that Defender is getting so good that it's almost as good as any third-party option. And I think this is the natural flow, right? You start off with a cool app idea. It eventually becomes a big company, and eventually that functionality moves into the platform. You see that with all sorts of things, like location services, antivirus, and stuff like that, so I think that's likely to happen with EDR as well. And the whole point is, maybe Microsoft would have handled that better or avoided it if the testing and everything like that were more deeply integrated with the OS itself, because they are the OS and they are the EDR. Now with this, you actually get a different problem, which is more eggs in fewer baskets, right? So there's a trade-off there. But I feel like this is the natural trend: to basically have everything that's awesome eventually end up in the platform itself. Got a new threat actor called CRYSTALRAY using an open source tool called SSH-Snake to move laterally across networks, exfiltrate credentials,
and deploy crypto mining software. GitHub has warned developers about a social engineering campaign by Lazarus, which is a North Korean group, and they're targeting developers in cryptocurrency, gambling, and cybersecurity. Essentially what they do is kind of what happened with the Linux attack recently: they gain trust over time, and then they start submitting malware once you trust them to submit things. Special thanks to sponsor Dropzone. I wrote this copy myself, which I'm not going to read here, but definitely check out dropzone.ai and check out a demo there. Not only are they sponsoring here, but I'm actually an advisor for them as well; that's how much I believe in their stuff. Palmer Luckey, the guy who created Oculus, is now making AI weapons for Ukraine through his company Anduril. I think it's Anduril, which looks like a Lord of the Rings or some kind of
fantasy thing. See if this identifies it. Open source? Nope, can't identify it. All right. So he started Anduril to build AI-driven weapons like drones and submarines, and they're now being used by the Pentagon and sent to Ukraine. China is installing massive amounts of solar and wind energy, adding ten gigawatts of wind and solar capacity every two weeks, which is like building five large nuclear plants every week. And this really, really makes me mad, because the US needs to be doing this. Every time I fly, I fly over wide-open country that is getting hit by the sun, and it's windy down there, and it's like: we don't have enough jobs for people, we don't have enough energy, and we have way too much open land being hit by the sun and wind. Why can we not do this, especially when we know that AI is going to need all this energy? We need to put people to work, we need the energy, we have the wide-open space, and we need to build more housing. This is the perfect opportunity for a massive FDR-type event, and hopefully the next president gets on this, because I do not like the idea of China having all this extra energy to do whatever with and us not having it. Iran and China are increasing their foreign influence efforts, using social media to stoke discord and promote anti-U.S. narratives.
This is coming out of Google, which blocked over 10,000 instances of Chinese influence activity in Q1 alone. Thanks to Nudge Security for sponsoring. The US Department of Justice seized two domains and searched nearly 1,000 social media accounts used by Russian actors to spread pro-Kremlin disinformation. Cloudflare says nearly 7% of all internet traffic is malicious, with DDoS attacks making up over 37% of all mitigated traffic. And UK police arrested a 17-year-old suspected of being part of the Scattered Spider hacking group involved in the 2023 MGM Resorts ransomware attack. Also known as the group responsible for making DEF CON move way farther north this year: it's now in the convention center, pretty far north of the city, which looks like it's going to be pretty cool, especially for the different villages, because all the villages are going to be together. Looking forward to that. Real-time video transcription with timestamps: whisper-diarization. Diarization, yeah, that's right. Beijing's support has seen China make up ground in the AI race, but it has still handcuffed a bunch of AI companies with really tight restrictions, and a lot of these are purely political. So they're basically saying this is going to be good
for control and bad for speed. So unless they steal some sort of pinnacle AI tech, which they probably are going to, as per Leopold and a lot of other people talking about this, I think this closed nature of their approach to AI is going to hurt them, especially relative to the US, which is just running with scissors. You can see that because Meta just released Llama 3.1 405B today and is basically giving it out to the world. By the way, it's not fully open source, but it's much more open than other pinnacle models. Anyway, that stuff is coming out quickly, so the US is innovating very, very fast. And you can't do that when you're China and you have to be very careful what you put in the model, because if you train the model on open knowledge, the model is going to figure out how reality actually is, and China doesn't want to present the world as it actually is.
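As a toy illustration of that "be careful what you put in the model" approach, here's a minimal sketch of filtering a training corpus before training. The keyword set and documents are hypothetical illustrations for this example, not any real system's blocklist:

```python
# Toy sketch of pre-training data filtering: dropping any document
# that mentions a banned topic so the model never sees it.
# The keyword set and corpus below are hypothetical illustrations.

BANNED = {"tiananmen"}  # hypothetical censored topic

def censor_corpus(docs):
    """Return only the documents that mention no banned keyword."""
    return [d for d in docs if not any(b in d.lower() for b in BANNED)]

corpus = ["The 1989 Tiananmen protests were...", "A recipe for dumplings."]
print(censor_corpus(corpus))  # → ['A recipe for dumplings.']
```

Real filtering pipelines are obviously far more elaborate (classifiers, human review), but the effect is the same one described here: the model ends up with a worse map of reality.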
They want to present the filtered version of the world, which means they have to do so much more protection and post-training tricks, or nuking the knowledge inside the training data so it doesn't have the real knowledge of the world. That's another way to do it: train the model and have it just be worse. It'll be a worse representation of actual reality. It'll be like, yeah, Tiananmen never happened, or they're not oppressive, the US is bad, they are the true free society, or whatever kind of garbage they want to put in there. But that's extra work for them, extra friction that's going to slow them down. So that's one thing: it's going to be slower. The second thing is the people in China who want to go fast, the super smart, more Western-oriented ones: they're just going to leave. They're going to go to the US, to Canada, to the EU. Not all of them, of course, but a lot of them. So ultimately this makes you slower as China, the country, and it also produces more brain drain, because people don't want to be there. Kaiser Permanente is using AI, wearables, and other tech to bring health care directly to patients. A very AI-forward approach from them, which I really appreciate. It's all in the implementation, of course, so we'll see how that goes. But I like the
open mind. Sam Altman revealed that OpenAI's Voice Mode alpha release is coming later this month, and Llama 3.1 just came out today, so you know they really need to speed that up. And I love what my buddy Matthew Berman said about this. He's a YouTube AI guy, and he says: let's denormalize companies demoing products earlier than three months before release. Microsoft Recall did this. Apple Intelligence definitely did this, because it's not coming out until after September, maybe not until the beginning of 2025. And OpenAI did this big demo of Sora and GPT-4o voice, and it's still not out yet; that was a long time ago, ancient history at this point. So yeah, three months I think is the max. Don't even talk about it if it's not coming out. Andrej Karpathy is launching Eureka Labs to create AI teaching assistants for education, and yeah, this is going to be really cool. They're basically going to have AI teachers, which is just fantastic. Google has launched Project Oscar, an open source platform that enables development teams to create AI agents that monitor issues, manage bugs, and handle various aspects of the software lifecycle. Omega's AI will map how Olympic athletes win: they're using AI to analyze athletes' full performance, not just start and finish times, including motion sensors on athletes' clothing to capture every detail of their movements. The US is thinking about new trade restrictions that could stop Nvidia from selling its HGX H20 AI GPUs to China, and investors are worried that it could cost Nvidia like $12 billion in revenue, because China is big and they buy a lot of stuff if they're allowed to. But I don't think that'll ultimately matter for that long, so I'm not too worried as an Nvidia stockholder. Beijing scientists have developed the world's smallest and lightest solar-powered drone: just 4.21g with a 200-millimeter wingspan.
I want one of these. They can fly nonstop during the day because the motor is lighter and more efficient, and because the craft itself is so light. I would love to mess with that. A Florida... yeah, it's usually either a Florida man or DNS; they're responsible for all problems. This person got arrested for shooting down a Walmart delivery drone, claiming it was spying on him. Typical Florida man. Shooting at drones is treated as a felony, similar to firing at a passenger aircraft. That's probably because the bullets keep going and might hit other things in the air. Unlikely, but it turns out they don't launch off into orbit or move towards the sun; they just fall down, potentially on people's heads or their pets or whatever. Waymo wants to bring robotaxis to SFO. They already have approval in the city, but they need separate approval for the airport. And Microsoft has laid off its DEI team (probably one of them; I assume there are many), saying it's no longer business critical. I'm not sure if "no longer business critical" is an actual quote from Microsoft leadership; it must be, because this Business Insider article probably quoted it correctly. But okay, I think the overall sentiment is worth noting, which is that a lot of people are doing this. A lot of people are rolling back DEI as, like, a prime directive and realizing it should be about merit, which I think is a good thing. I just don't want them to go back to just hiring people that look like you and sound like you and respond to questions like you, because then you get rid of diversity, which is actually bad, because you end up with blinders. And you have to make sure that your meritocracy has a wide enough top of funnel, right? You want to make sure you're getting everyone into this meritocracy filter and not just looking for certain types of people because you think they come out of the bottom of the filter more often. So I think the concept of DEI is 100% correct, and we need to 100% keep doing it. The implementation that we've done over the last few years has been bad,
and it's good to see that rolling back. Andreessen Horowitz argues that bad government policies are now the biggest threat to tech startups, which they call little tech, and they basically launched a manifesto about it. I did an analysis of it in fabric, which I probably should have included here, but it's basically pro-startup stuff, bottom line, which makes sense because they're a VC. Google is shutting down its URL shortening service, so any links created with it will stop working. If you have any important links using the service, you want to migrate them. I'm pretty sure Google will soon sell YouTube to, like, Johnson and Johnson and Gmail to Luxottica, and then go full speed into the what-the-f-are-we-doing business. Whatever business that is, it's the single most perplexing company I've ever seen. They were first on GenAI. They wrote the paper for Transformers. And now they're completely lapped by not just OpenAI, but Anthropic as well. And now they're being lapped by Meta, by Meta, in an open source model. How are you in, like, fifth place when you have all the people and all the money and you actually invented the tech? They're like the opposite of Cloudflare, which does small things really well that add up over time, because Google is slowly dismantling everything. It's getting rid of all the best things that it has. The main thing Google is growing is the Google graveyard. Such a colossal waste of money and talent. These failures should be studied for centuries as an example of what happens when you don't lead with UX-focused product management: product management being in charge, thinking about the customer and the total package and the total experience of the product or service you're launching, as opposed to throw-shit-at-the-wall-focused engineering, with engineering in charge of product. It makes no sense whatsoever. It is the most inefficient large organization I can imagine, that I've ever seen. All right, end of Google rant. I need a section just for Google rants. Humans. Iran-backed Houthi rebels said they were behind a drone attack on Tel Aviv that killed one person and injured several others. US household income distribution by state: a Reddit user shared a detailed visualization of household income distribution across different states
in the US. A new meta-analysis shows that tooth brushing... "tooth brushing"? I've not heard it as that kind of verb. Is that tooth brushing? It's a gerund. Anyway, brushing your teeth can significantly reduce hospital-acquired pneumonia in ICU patients. This simple intervention could lead to 17,000 fewer deaths each year. This is a big thing I've realized recently, or just gotten religion on: all the bacteria in your teeth and gums is related to inflammation in the body. It's also related to the heart being in really bad shape. So it's really important that you treat that infection in your mouth, or the potential for infection in your mouth, like having a disease throughout your entire body. They're talking about it going into the brain as well, and just being this state of disease in your body, really focused on the mouth. So I upped my game around that significantly as a result, probably about a year ago. Young adulthood is no longer one of life's happiest times. Research shows that young adulthood is now one of the most unhappy times in life, with a significant rise in despair among people, especially women, from 18 to 25. If you've been following my stuff, I have lots of reasons for that, which we will not go into. Most of Gen Z is using TikTok for health advice: 56% are using TikTok for wellness, diet, and fitness, and 34% are relying on it as their main source of health information. About a third. Ask HN (which is Hacker News): every day feels like prison. A mid-thirties guy in tech feels trapped in a 9-to-5 job he no longer cares about and is struggling to build a business on the side. This is the reason I do Unsupervised Learning. The mission of Unsupervised Learning is giving people mental models and frames and ways to think about the world, and actual methodologies for getting true meaning in their lives. This problem right here is pretty much the whole game, and the way AI plays into that is I think AI is going to make it worse, as it removes jobs before we have the ability to have it increase jobs. There's going to be that chasm in the middle, and when that happens, when the jobs go away, this meaning crisis is going to get way worse. And that's what we are here for. All right, got a cool button here: read the full
newsletter online. Appreciate the team doing that. Ideas. Sam Altman is simultaneously building AGI and doing big studies on UBI. Super obvious what he's doing, in fact. Oh, this is kind of hot off the press, because I just did this analysis with fabric, and I saw a tweet thread about this as well: the UBI study that came out. This is not in the newsletter; this just happened this morning. This UBI study that OpenAI put out, which was funded essentially by Altman: it turns out, at least per one interpretation and some of the manual AI analysis I just did on the results, it did not increase a lot of the things they were hoping for. They're spinning it as a very positive outcome, and I'm sure there were some minor positive outcomes, but in general it did not encourage people to be more ambitious. It did not encourage people to go and get more work. It didn't do that. So it looks a bit bad now. I think we need to do more analysis to say for sure that UBI is not going to massively help, or that it's not going to help people be more ambitious or more creative or more productive, which I think is the hope. The hope, or the idea here, is that ambitious people are lucky people who have the time and energy to go and be ambitious, and that when you look at people who are struggling and not ambitious, it's likely because they have too many jobs, too much drama in their lives, too much going on. And if that were not the case, if we were able to reduce their stress, they would become just like the go-getters, just like the ambitious and smart people. This study, being so large, has the ability to undo that narrative a bit and basically say, and there's a chance here, we need to do a full analysis of this, that what it uncovers is that, no, ambitious people have something else. They have this ambition and drive that lets them thrive almost despite whatever is going on. So if they have lots of burdens on them, they're going to find a way through, unless the burdens are overwhelming, of course. But given the same amount of burden, people with this attribute of drive and self-discipline and ambition are just going to crush it. They're going to find a way to crush it. And the whole narrative for this UBI thing was that, no, everyone has that. And I think I'm old enough at this point to realize that not everyone
has special beans or special magic. It really is a small portion of society who is in the top 10% of a given thing. I mean, just by the numbers, only 10% can be in the 10%, right? So first we need to confirm that this is what the study is saying. But even if it is saying the worst possible thing, which is that no, this doesn't actually make people more ambitious, it doesn't really matter. We still have to keep looking for the thing that is the ambition. We still have to find the source of this thing that is so valuable to us, which is people who are driven and ambitious. Now, if it's genetic, well, then I don't care, right? We're not going to be changing genetics any time soon anyway. So I don't care if we find that there's more ambition in this group or less ambition in that group, because those numbers are likely to be so tiny compared to what we can control, even if they exist, and they might not even exist; it might be all environment. And this is a three-year study; it doesn't control for what went into people's upbringing and their childhood and everything. Because if you look at cultures that have this really, really strong drive and ambition, it's part of their actual culture. They pop out of the womb and it's like, pick your college, right? You were told from a very early age that you must be producing, that it's your responsibility to give back to the community, that all your family members are looking up to you, that you owe us, you owe society quality outputs, and you must work hard to create those. That cultural influence and power, plus all the peers that come with it, plus then you go to nice colleges where they all believe the same thing: that entire mechanism is what we need to give to the entire world. That is the thing that I believe is creating ambitious people. So you do a three-year study on UBI and you find out it doesn't make people into those people. Fine. Something does, and I believe it's that entire culture as a package. That's the thing we need to give to everyone; that's our responsibility. And if the world were more like that, the world would be better. First of all, that automatically fixes crime. If you're trying to build things and doing things and you're busy, that automatically fixes literacy, crime, so many different things that are a problem. So I don't see this as too negative. I see it as a little bit negative, but positive in the sense that it's going to uncover the real issue, which is how to propagate successful mental models and frames in a wider scope, across wider groups of people, across the entire planet. And I think that's the problem we
need to focus on and move towards solving. Okay, some standalone ideas here. I just put the tweets in here, because basically when I have ideas I post them on X and then come back to them to distill into this idea section. First one: what if cannabis is the soma from Brave New World? So it makes people comfortable with mediocrity, makes people more accepting of whatever they're handed, and makes people less likely to change their situation. And legalization is actually happening coincident with the rise of AI, which is pretty cool. I don't actually think that's a conspiracy, just something to keep in mind; I think they're just both happening at the same time. Conspiracy culture, speaking of that, is getting stupid at this point. Bad thing happens? Must be deep state. Biden pulls out of the race? Must be deep state. I saw a reply to my post here; somebody said the number of people believing the world is like that, where they see deep state in everything, is inversely proportional to the number of people who have worked in a very large organization. And I think that is so smart, because when you work in these very large organizations, or in the government or whatever, you just see constant stupidity, constant stupidity performed by really smart people. And the fact that they're in these large groups with lots of bureaucracy just makes the whole thing stupid and inefficient. So you see enough of that, and then when you see a botched Secret Service thing or whatever, you're like: yeah, the world is stupid, people are stupid, I'm surprised anything works, right? That's the eventual attitude you get after you work at
enough of these large organizations. And this next one I'm going to save, because I'm going to do a whole essay and video on it. Essentially, it's the use of AI for security against a whole bunch of intractable problems, and I believe this is the future of AI and security, which is why I'm going to turn it into its own thing. And the future of security and risk management is to have them disappear into SOPs. This one I'm also going to break out into a separate thing. And someone posted here: a great tribe of Silicon Valley is making a bid to take over the US. Vance is Thiel's man. Musk and Andreessen Horowitz are backing him. He will be the nominee in four years, in an admin steered by Thiel and Musk, right when AGI is due and Starship goes to Mars. Incredible timeline. And basically I just said: hmm, interesting, I'll be watching this closely. I love looking at movements like this. What is Silicon Valley thinking? What are the top-tier people's political views? Are they ambitious? Are they pessimistic? Are they optimistic? What are they doing? What are they thinking? Who are they backing? What are their arguments? I want to know all this stuff. And I've got a buddy who is so good at tracking this stuff and just knows everything about it. I go walking with him very often, and we have dinner and get into these great discussions about all these different zeitgeists happening. It's just fantastic. It doesn't mean I have to agree with any of them, but hopefully, if I capture enough of them and understand them enough, it helps me triangulate and build my own view. All right, Discovery. Lemma: a new recon security tool that runs via Lambda in your browser. A Responder honeypot that tricks attackers into revealing their presence. Exo: run your own AI cluster at home on everyday devices. Why Aren't We Using SSH for Everything?, by shazow. Grey Swan AI specializes in AI safety and security tools to assess and safeguard AI deployments. And Costco's Apocalypse Bucket: they're selling a 25-year shelf life emergency food kit called
the Apocalypse Bucket, for $80. It includes 150 freeze-dried and dehydrated meal servings, ranging from teriyaki rice to apple cinnamon cereal. I bought three of these, just in case things get crazy at the end of this year and into next year. Plus, I'm in California: lots of earthquakes, and we're waiting for the big one. All right, recommendation of the week. Don't ask what someone's politics are, because that's likely to just get gross. They'll respond with a bunch of emotional things, they will trigger you emotionally, and it will be a complete mess. Instead, ask them what their ideal world looks like, including questions like these. Are there multiple religions? Are there multiple ethnic groups? Are people free to love whoever they want? Do we all live together? Who are the most famous people in the world? Who gets paid the most? Who gets paid the least? What happens to someone if they're truly disabled and cannot work? What happens to someone if they're too lazy to work? What happens to someone who is addicted to drugs? I think many of our disagreements are about how and not what, because I know a lot of people who support Trump, for example, who would say: yeah, you can absolutely be gay; yeah, you can absolutely be transgender, no problem, love to everyone; yes, there absolutely can be other religions; yes, all the ethnic groups should live together; yes, there should be a safety net. I literally know multiple Trump supporters who voted for Trump twice who believe all of these: you can be gay, you can be transgender, you can be other religions, we should all live together and get along, and there should be a social safety net. Okay, so if you're on the left and you hear someone on the right say those things, or you're on the right and you hear the left say some other stuff, this is an opportunity for a real conversation. Both sides should describe what they view as an ideal version of the world. And I would argue that if you both start with that, it's going to look very similar. I think roughly the center 70 or 80% of the country is going to build that model, and it's mostly going to look the same, and it's mostly going to include those four or five things I just mentioned.
So if you had a smart moderator in between, you could look at these two people and be like: okay, let me get this straight. You think there should be multiple ethnic groups and religions and gender identities, and we should all work together and all live together and be happy together and not infringe on each other. Is that correct? And both of them say yes. Do you realize how much progress that is? Do you realize how much we need that conversation to take place right now, in the middle of 2024, going into this crazy election cycle? We absolutely need that. So that's where you should start the conversation. Don't start with, hey, what are your politics, because they're going to go to an emotional hot button that's going to trigger both of you. Instead, talk about the ideal. And when you realize you agree on the ideal, now we have a how problem. Now we have a transition problem. Cool. That's much more tractable and much more discussable than disagreeing on where we're supposed to go. All right. And the aphorism of the week: silence is a fence around wisdom. Silence is a fence around wisdom. A German proverb. Unsupervised Learning is produced and edited by Daniel Miessler on a Neumann U87 AI microphone using Hindenburg. Intro and outro music is by Zomby, with the Y, and to get the text and links from this episode, sign up for the newsletter version of the show at danielmiessler.com/newsletter. We'll see you next time.