UL NO. 444: Pizza Meter Intelligence, China Bypasses Bans, Securing AWS Secrets…

Aug 09, 2024 · 25 min · Ep. 444

Episode description

What to expect at Black Hat/DEF CON, Identifying Explosives, OpenAI's new models, Llama 4 Timeline, and more…

➡ Check out Vanta and get $1000 off:
vanta.com/unsupervised

Subscribe to the newsletter at: 
https://danielmiessler.com/subscribe

Join the UL community at:
https://danielmiessler.com/upgrade

Follow on X:
https://twitter.com/danielmiessler

Follow on LinkedIn:
https://www.linkedin.com/in/danielmiessler

See you in the next one!

Discussed in this episode:

Intro (00:00:00)
OSINT and the Pizza Index (00:01:08)
Agent Framework Development (00:02:12)
State of Cybersecurity (00:04:08)
Critical Security Vulnerabilities (00:05:27)
Ransomware Trends (00:06:25)
Data Breach Costs (00:07:29)
AI Developments (00:08:40)
California AI Regulation (00:09:42)
OpenAI's GPT-4 Launch (00:11:01)
Tech Company Updates (00:12:03)
Shifts in Workforce Dynamics (00:13:07)
Prisoner Swap News (00:17:06)
SharkEye Model (00:18:03)
Dementia Prevention Insights (00:19:03)
Genetics of Self-Control (00:20:12)
Name and Appearance Study (00:20:12)
Alzheimer's Disease Research (00:20:12)
Dungeons and Dragons Rulebooks (00:20:12)
Novelists Writing Bug Reports (00:21:22)
Recent UBI Study Analysis (00:21:22)
Free-Range Kids Initiative (00:21:22)
Discovery Farm Bot (00:22:13)
Super Memory AI (00:22:13)
Avi Schiffmann's AI Pendant (00:22:13)
Installing Fabric (00:22:13)
Fleet Open Source Tool (00:22:13)
SOC2 Policy Templates (00:22:13)
Clutch Security Platform (00:22:13)
Black Hat Reminder (00:23:48)
Aphorism of the Week (00:23:48)

Become a Member: https://danielmiessler.com/upgrade

See omnystudio.com/listener for privacy information.

Transcript

S1

Whether you're starting or scaling your company's security program, demonstrating top-notch security practices and establishing trust is more important than ever. Vanta automates compliance for SOC 2, ISO 27001, and more, saving you time and money while helping you build customer trust. Plus, you can streamline security reviews by automating questionnaires and demonstrating your security posture with a customer-facing trust center, all

powered by AI. Over 7,000 global companies like Atlassian, Flo Health, and Quora use Vanta to manage risk and prove security in real time. Get $1,000 off Vanta when you go to vanta.com/unsupervised. That's vanta.com/unsupervised for $1,000 off. Welcome to Unsupervised Learning, a security, AI, and meaning focused podcast that looks at how best to thrive as humans in a post-AI world. It combines original ideas, analysis, and mental models to bring not just the news, but

why it matters and how to respond. All right, welcome to Unsupervised Learning. This is Daniel Miessler. I am in a hotel room in Vegas recording because I have to. But it is Hacker Week and I'm here for DEF CON and Black Hat and all that good stuff, but I wanted to get the episode out. So first note here is that OSINT is one of my favorite hobbies, and

there's something called a pizza index. That's one of my favorite examples of this, which is how much pizza, essentially, the neighborhood around the Pentagon is ordering, which really means the Pentagon. And there's another index related to that, which is how many people are in the bars, and this person, Ben Geller, posted a tweet about this. And essentially it shows that the number of people in the bars is extremely low and the pizza meter is

off the charts. And I just love this so much, because it indicates pretty strongly that something is about to go down. And I've got a friend who used to be an analyst at the Pentagon, and he says this is absolutely true: when people are ordering in pizza and nobody's going home, it's obviously because something is going down. And in this case, we kind of know what's going down, which is Iran preparing to attack Israel and/or whoever else. So that's what that is. But definitely check

out this tweet. It's pretty interesting. So this is also why I can't wait to fully build out my agent framework, and for the agent framework to become more tightly integrated with models and platforms, because it's going to allow a lot more people to do things like this. What I love about it is you could track all the different experts, right?
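To make that concrete — and this is purely an illustrative sketch, not anything from the episode; the source names and the scoring scheme are hypothetical — one simple way to avoid counting correlated feeds as independent signals is to weight each source by how unique its reporting is:

```python
# Sketch: weight intel sources by how unique their reporting is,
# so eight feeds echoing the same upstream source don't count as
# eight independent signals. Similarity here is simple Jaccard
# overlap of word sets; a real system might use embeddings.

def jaccard(a: set, b: set) -> float:
    """Overlap between two token sets (0 = disjoint, 1 = identical)."""
    return len(a & b) / len(a | b) if a | b else 0.0

def uniqueness_weights(reports: dict[str, str]) -> dict[str, float]:
    """Weight each source by one minus its mean similarity to the others."""
    tokens = {src: set(text.lower().split()) for src, text in reports.items()}
    weights = {}
    for src, toks in tokens.items():
        others = [jaccard(toks, t) for s, t in tokens.items() if s != src]
        weights[src] = 1.0 - (sum(others) / len(others) if others else 0.0)
    return weights

reports = {
    "analyst_a": "pentagon pizza orders spiking tonight unusual activity",
    "analyst_b": "pentagon pizza orders spiking tonight unusual activity",
    "analyst_c": "flight tracking shows tanker aircraft repositioning east",
}
w = uniqueness_weights(reports)
# analyst_c reports something the other two don't, so it gets the highest weight
assert w["analyst_c"] > w["analyst_a"]
```

A real pipeline would compare claims or embeddings rather than raw word overlap, but the idea is the same: a source that merely echoes everyone else contributes little independent signal.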

I'm going to use a whole bunch of different stuff for this, but there will be some agent functionality in the middle to handle orchestration and summarization and create an intel report. But I love the idea of gathering all these individual, hopefully standalone intelligence sources, aggregating them together, but also keeping them separate, and then

triangulating on truth based on that. And I heard some pretty cool ideas from a friend of mine named John, who was talking about how you want to rate those different sources in different ways. One way is to rate them based on their difference and the uniqueness of their ideas relative to other people, because you don't know if they're actually just reading other people's intel and following along, and you don't want to use eight of those people who are all

following the same thing as eight different N's, right? Eight different sources of data or sources of signal. So there's a whole bunch of cool stuff that you could do once you have the discrete signal coming in from all these different places. And then you could factor in things like prediction markets, and just lots of different stuff you could do. But ultimately what I want is a daily intel report, which is as good or better than what you would get from, like,

Stratfor back in the day. Or you know what a lot of these paid platforms would do. Um, or even like, you know, a high level government thing, I think we could build something really, really good that leverages the intelligence of all these different, really smart people who are posting their stuff online. We're not talking about private stuff. We're talking about people on Twitter. We're talking about people on different platforms, blogs. They're writing their stuff out there. And

a lot of times nobody's reading it. But if you put the effort in, you could find all those signals and start triangulating. So really excited about that. Okay. The state of things. I wrote a long piece about this and posted it on X, a fairly long piece. I should probably turn it into a full blog, but it's a little bit long-winded and it's got some politics in it, so I think I'm going to skip it here, but I recommend going and checking it out

if you're into that kind of stuff. And I spoke with Christine Gadsby, the head of product security operations at BlackBerry, and we talked about the role of AI in cybersecurity and a whole bunch of different topics. The topic list for this episode is quite large and you should absolutely check it out. So go check that out on YouTube, or you can click it in the

newsletter as well. So for security: two critical ServiceNow vulnerabilities were reported by Assetnote. A company has reportedly paid a new record-high $75 million to a ransomware group. And that seems like a lot of money, but it's not a lot compared to not being able to do business at all. So a lot of people kind of beat up people for paying ransoms. And it really is kind of similar to, like, your kid getting taken; all the philosophy goes away when somebody has your kid,

and it's the same as a CEO or whoever. When you have the ability to pay some money and get business back online, sure, they might just ransom you again. Sure, they might do whatever. Sure, it might be bad for other people. But when business has stopped, things become quite clear to you in terms of what you need to do. So I'm not saying people should pay or anything like that. I'm not making any judgments. I'm just saying, essentially,

don't make judgments. Try to avoid making judgments, because it's really hard to be in that position. DigiCert is revoking 83,000 TLS certificates due to a domain validation bug. China is getting around US bans on advanced AI chips through smuggling, front companies, and loopholes, basically finding ways to get the chips that they're not supposed to be getting.

Ransomware attacks are rising, with an 18% year-on-year increase reported by Zscaler, and I've always considered ransomware attacks to be something that we'd have to invent as a government. It would have to be like a government service if it didn't exist in the marketplace, like a way to test for bad security. And maybe you give a fine or something if people keep making the mistake.

But my intuition was that after a number of years these attacks would get harder and harder to pull off because security would increase. So if they're still increasing, I wonder what the reason is. Is it because attackers are moving to more vulnerable targets, or are they just getting better at finding the holes, or something else? Or all of the above? Probably all of the above. But if somebody has more insight on why things aren't getting tighter... see, that's the trick: it doesn't mean things

aren't getting harder. Just because the number of attacks is going up doesn't mean things aren't getting harder; attackers might just be getting better faster. Got a great analysis here of securing secrets in AWS. Also a blog post discussing creating custom implants for evasion by building them in C; it details server setup, client functionality, and testing against security tools. The average cost of a data breach

jumped 10% to $4.88 million in 2024. China is tightening its civilian drone export rules starting September 1st to prevent use in military or terrorist activities. Yeah, I'm trying to figure out if this is the CCP trying to keep their stuff from being used against them, or if they're trying to make it easier to sell their products because they're playing nice and appearing to be good guys.

AI and tech: OpenAI has started rolling out its new ChatGPT voice feature for ChatGPT Plus users, and it's quite good. It's quite a bit different. You can basically interrupt it, and it sounds a lot more natural. I am getting a lot of voice artifacts, though; it'll sound choppy and broken, with a lot of weird pauses. Not in a human way, but I think the platform might be overwhelmed. Or maybe I need to restart the app. Maybe it was buggy.

Not sure. Lots of AI talk at Black Hat, which, yeah, I'm already here and it's already happening. Another thing to mention about the ChatGPT stuff is that Greg Brockman is taking a sabbatical. John Schulman, I think, is leaving the company. Is he the one that went to Anthropic? I can't remember. Another leader went to Anthropic and another one left as well. So three people left all at once. But it's not like a mass exodus all to one place; they're not mad at OpenAI. It seems to

be fairly benign. But it does look kind of weird to have an announcement where three people leave at the same time. The funniest joke I saw about this was that Sam Altman predicted that soon there would be a one-person unicorn company, and the joke was: yeah, it might be your company. You might be the only one left. I thought that was kind of clever. California's SB 1047, the Safe and Secure Innovation for Frontier Artificial

Intelligence Models Act. That's a long name. It's looking to regulate large AI models by mandating safety features to prevent catastrophic incidents. The EU's risk-based AI regulation began on August 1st, and it's got staggered deadlines based on low-or-no-risk versus high-risk and limited-risk tiers. So that's starting to roll out. And OpenAI has launched the GPT-4o Long Output model. I've already switched a lot of my stuff; I switched my Fabric

prompt over to this. It's got 64,000 output tokens, which is 16 times more than the previous one, and it's 50% cheaper for most things. And a lot of people are saying that on the benchmarks it's actually much better than the previous one. So I consider it a straight-across upgrade, plus being cheaper. So yeah,

I already made that change. Google's experimental Gemini 1.5 Pro has claimed the top spot on a bunch of leaderboards, surpassing GPT-4o and Sonnet 3.5 with a score of 1300. I've not used it yet, because every time I try to use a Google product I have to vomit. But I am going to try again soon to see if it's usable. Meta says it'll need ten times more computing power to train Llama 4 compared to Llama 3. Elliott Management is calling Nvidia a bubble and says AI

is overhyped. They argue that the market is overly optimistic about AI's potential and Nvidia's role in it. I think it's a bubble, but it's a bubble like the internet in 1995. In other words, there absolutely will be a burst: lots and lots of companies, right? Pets.com and companies like that, the AI equivalents, thousands of those companies are going to fail. Lots of investors are going to be very sad about this, but that's completely unrelated to what AI is about to do

to the world. Right? So I think people shouldn't be confused about those two things. One happening doesn't mean that the other one is not going to happen. Bellingcat has put together a guide on identifying explosive ordnance in social media imagery. CrowdStrike is facing a massive lawsuit after Blue Friday crashed over 8 million computers globally. Intel is laying off over 15% of its workforce as part of a $10 billion cost reduction plan. Apple just posted a record

breaking Q3 2024, with $86 billion in revenue. And one thing that's interesting about this is Berkshire Hathaway just sold a whole bunch of Apple, and they sold it right before this crash happened. The crash happened, there was a giant recession that hit the United States, and then it went away the next day; today a lot of that money came back. Yeah, strange. Who knows? It could happen again tomorrow. But very volatile,

very emotional sort of time. I feel like in lots of different ways, and I feel like the stock market is matching that. But, uh, the other thing to mention about Apple is that their services money is now almost equal to their devices money, which is a huge tipping point or a milestone in terms of their growth. Apple is ramping up spending to get Apple intelligence ready for launch in the fall. I'm already using the beta, and it's pretty impressive, even though a lot of the features

aren't rolled out yet. All right, human news. A lot of the world tried to push Huawei out of their infrastructure, but they're actually getting more successful, not less. A software company increased engagement by eight times by drastically shortening their emails. Netlify, is that it? Yeah, Netlify. Initial 150-word emails had a 1% reply rate, but by cutting the text to 37 words it went to 4%, and when they went to 14 words it went to 8%.

Last month, Shane Mack offered everyone at his company $25,000 to quit, and six people took it. Yeah, I think this is part of the Alaskan fishing boat thing that I wrote about a while back. Companies basically want fully dedicated murderers; that's all they want. They want people who eat, live, sleep, think, and are obsessed with the company. That's why they want return-to-office. That's their way of filtering for people who think of the company

as a religion. I mean, they can't say that, but they can say you have to come to the office, and that's an automatic filter for it. Right? So this is the way that management and the whole system can basically look for these obsessed people, which are likely to be in certain demographics, right? Certain ages, certain groups that are awfully likely to look

kind of similar to each other. Probably young, probably without kids, probably male, who just grind, grind, grind and don't care about anything else. Work-life balance? Don't care. I just want to code, or whatever it is. Right. So that's what these companies are looking for more and more. And that's why I think, and this is just my hypothesis here, we need more data for all this, but my pet hypothesis here is that

this is a factor in all of these layoffs. It's like this awakening across all of business: you know what, I want hardcore, crazy people, people who are religious about this company. And I want them to be A-players. And I want them to be really good at AI, and they're going to help us do even more with AI because they're going to bring the AI on, and blah, blah, blah.

So it's like, I'm going to hire a bunch of these crazy people, and a team of ten of them is going to be like having a team of 1,000 or 2,000 people sometime in the near future. Whereas if you get a bunch of people who are just straight out of college, they're entitled; they think they are owed something. Even worse, they think that they're about to receive training on the job, because they don't know how to do the job. And it's like, okay, well,

now train me. Now teach me how to do this job. And all these leaders at these companies are like: I do not want you. I don't care what degrees you have. If you can't do the job on day one, or you can't learn instantly, like just by seeing it once, and if you're not obsessed about it and don't want to sleep under your desk, we have no use for you. And unfortunately, that's like 80% of the workforce, I'm guessing, right? It's like 80, 90% of the workforce, let's call it that.

And what that means is they are looking for that 10%. They're looking for that 5%. They're looking for the A-players who are dedicated like religious people. And I believe this is what we're seeing more than anything. Now you add AI on top of that, and you see why there are so many layoffs. You see why there are so many open positions that nobody's hiring for, because they're kind of like fake positions. And this is multiple hypotheses

all rolled into one. But you get the vibe. This is the basic vibe of what I think is happening. Journalist Evan Gershkovich was among a group of Americans and Russian dissidents released from Russia in a seven-nation prisoner swap, the largest since the Cold War. Researchers at the University of California, Santa Barbara have developed an AI model called SharkEye to help prevent shark attacks. The model uses

drones to detect sharks with greater accuracy than humans. I love this. Every time I go to Maui, I'm stupid and I read the stats about shark attacks, and they're like, oh, actually, right next to you is the most dangerous place. And I'm like, cool, I didn't want to go in the water anyway. Why did I read that right before I went on vacation, where

I'm supposed to swim in the water? But anyway, maybe they're so high up you can't hear them, maybe it's not super annoying. If I know that there are ten of these drones sweeping back and forth, being recharged, going back on rotation, and looking down, they could see very clearly if there's a shark in the water. I assume it might not work if the water was muddy, but maybe you wouldn't be swimming anyway because

it would be dangerous water anyway. Usually, in a lot of places, you can see right through the water; it's very easy to see a shark from above. And they just call the lifeguard station and trigger an alert, and they blow the whistle and everyone gets out of the water. That's going to be amazing. Love it. Treating failing eyesight and high cholesterol are two new ways to lower the

risk of developing dementia, according to a major report. The Lancet Commission's latest findings suggest that addressing 14 health issues could theoretically prevent nearly half of all dementia cases worldwide. And I believe from reading this that essentially they're talking about things that just exacerbate it and make it worse. So, for example, if you can't really see things, maybe you generally don't go out a lot. If you can't

hear conversations, you're not involved in conversations. So I think a lot of this might be related to social interaction: once you start to get isolated, you're not consuming media, you're not reading, there are no new inputs. Again, this is my hypothesis, though I believe it's based on some solid science I've already read, which is basically that once you get isolated in that way, your brain starts shutting down, and it really accelerates

the dementia. So that would make sense, if that's what they were saying in this paper. Self-control is about 60% heritable, meaning genes explain roughly 60% of the differences in self-control among individuals. I think this could be devastating if it's supported in further studies. I worry about the narrative that both IQ and self-discipline are mostly genetic, thus giving people an easy ramp to write off individuals, or even groups, if they have lower

averages of these things. And I think even if it were true, the groups don't define the individuals, and the study mentioned individuals here; it's not talking about groups. But, you know, people are going to people, right? The other thing is there's likely a lot of slack in, say, the 40% which is environmental, assuming those numbers are correct. Like, we're probably getting whatever, 10 or 20% of the 40%

we're supposed to be doing. So if we were to increase the efforts of training and culture and all the environmental things we can control, I think that would raise the bar for, well, everyone, but especially the bottom, quite a bit. So I'm not sure this is really anything to despair about, other than making it easier for certain negative narratives. A new study reveals that people tend to alter their appearance

to match their names. Researchers found that adults' faces often align with the social stereotype associated with their name, while children's faces do not show this pattern. A key protein called reelin may help stave off Alzheimer's disease. A number of new studies suggest that reelin helps maintain thinking and memory in aging brains, and when its levels fall off, neurons become more vulnerable. People are, obviously, starting to

work on drugs for this. Wizards of the Coast will release the 2024 Dungeons and Dragons rulebooks under a Creative Commons license, fulfilling a promise they made after the backlash over attempts to change the Open Gaming License. If Novelists Wrote Your Bug Reports imagines how famous authors would describe software bugs in their unique styles: Ernest Cline likens a screen flicker to scenes from Back to the Future and Ghostbusters, while Ursula K. Le Guin philosophizes about

the existential pain of coding errors. Ideas: more analysis on how bad the results were of the recent UBI study funded by Sam Altman. It looks pretty bad, just like we talked about last week, and I've got a link here to go into that in depth. And a really cool idea

from Jonathan Haidt about free-range kids. A cool idea for giving them freedom is to create a play street once a month, where you close off a street for two hours to give kids time to play in the street safely, with the parents there the whole time, watching around the edges. But the neighbors are also meeting and talking, and he says it has transformative effects on the neighborhood, and just good times all around. I really love ideas like this.

Discovery: FarmBot is an open source farming machine for growing food in your own backyard. Supermemory is an AI-powered platform to organize, search, and utilize saved information, acting as a digital second brain. Friend is Avi Schiffmann's new AI pendant, and it's designed to combat loneliness by sending you reassuring or playful texts based on what it overhears. It doesn't have a speaker; it actually sends you notifications. Kind

of interesting way to do that. Daniel Kosman walks you through installing Fabric, an open source AI framework by Daniel Miessler. That's weird. I wonder if I wrote that, because I don't talk about myself in the third person. Fleet is FleetDM's open source tool built on osquery for vulnerability monitoring, MDM, detection engineering, and more. SOC 2 Policy Templates is a collection of templates for SOC 2 policies and procedures.

Clutch Security is a platform providing visibility into all non-human identities within an organization, helping them identify associated risks. And the recommendation of the week: if you're at Black Hat this week, remember that ten and twenty years from now, you will not remember the talks that you saw this year, but you will remember spending that time with your friends. So prioritize friend time over presentation time. Not only is the friend time more precious and valuable, but you can get

the talks later if you really want to. And the aphorism of the week: friends show their love in times of trouble, not in happiness. Friends show their love in times of trouble, not in happiness. Euripides. Unsupervised Learning is produced and edited by Daniel Miessler on a Neumann U87 Ai microphone using Hindenburg. Intro and outro music is by Zomby with the Y, and to get the text and links from this episode, sign up for the newsletter version

of the show at danielmiessler.com/newsletter. We'll see you next time.
