UL NO. 438: Confusion is a Muse

Jun 27, 2024 · 37 min · Ep. 438

Episode description

Sonnet 3.5 Support in Fabric, CISA AI Tabletop exercise, Kaspersky ban, China Invasion Scenario, LangChain disillusionment, more…

➡ Check out Vanta and get $1000 off:
vanta.com/unsupervised

Subscribe to the newsletter at: 
https://danielmiessler.com/subscribe

Join the UL community at:
https://danielmiessler.com/upgrade

Follow on X:
https://twitter.com/danielmiessler

Follow on LinkedIn:
https://www.linkedin.com/in/danielmiessler

See you in the next one!

Discussed on this episode:

Introduction (00:00:00)

Augmented V2 Registration (00:01:37)

Fabric Update and Sonnet 3.5 (00:01:56)

Personal Development and Authenticity (00:03:17)

Failures and Authentic Pursuits (00:04:32)

Chasing Personal Goals (00:05:30)

Articulating and Emoting (00:07:35)

Security Updates (00:08:33)

Tech Industry Developments (00:09:44)

Results as a Service (00:13:11)

Intelligence as a Service (00:14:21)

Structured Output from LLMs (00:16:38)

Innovative Projects (00:17:42)

Kludgy first generation AI frameworks (00:18:47)

Building a dependable AI stack (00:20:06)

Results as a service and trust in AI (00:21:34)

Solar energy vs. nuclear power (00:22:36)

Smartphone-free schools and societal impact (00:25:02)

Sun exposure and its health effects (00:26:04)

Intelligence collection and analysis (00:29:26)

Future of AI and automation (00:31:32)

Tech tools and discoveries (00:33:33)

Stoicism and gratitude (00:34:53)

Become a Member: https://danielmiessler.com/upgrade

See omnystudio.com/listener for privacy information.

Transcript

S1

Whether you're starting or scaling your company's security program, demonstrating top-notch security practices and establishing trust is more important than ever. Vanta automates compliance for SOC 2, ISO 27001, and more, saving you time and money while helping you build customer trust. Plus, you can streamline security reviews by automating questionnaires and demonstrating your security posture with a customer-facing trust center, all

powered by advanced AI. Over 7,000 global companies like Atlassian, Flo Health, and Quora use Vanta to manage risk and improve security in real time. Get $1,000 off Vanta when you go to vanta.com/unsupervised. That's vanta.com/unsupervised for $1,000 off. Welcome to Unsupervised Learning, a security, AI, and meaning-focused podcast that looks at how best to thrive as humans in a post-AI world. It combines original ideas, analysis, and mental models to bring not just the news, but

why it matters and how to respond. All right. Welcome to episode 438. This is Daniel Miessler. Okay, what do we have here? So, registration for Augmented v2, July 26th, is now open. About half the slots are gone. So sign up by clicking down below in the newsletter, or you can just go to augmented.unsupervisedlearning.com and sign up. And if you become a UL member, you actually get $250 off. Inside the chat, you can get the discount code, which is $250 off,

and becoming a member is only $99 for the first year. So you can do the math there. Let's see here. Fabric now supports Anthropic's Sonnet 3.5, which is really, really good. So if you update and then you do fabric list models, it will show you that Claude 3.5 is in there. And this is the actual tag for it: claude-3-5-sonnet-20240620. Uh, one thing to note, though,

is Sonnet and Haiku: they throw a lot of copyright failures, like it won't actually read podcast transcripts, which is super annoying, because one of my favorite things to do is put long-form conversations through these things and pull out lists of references that they mentioned, if I want to order the books or whatever. So that's kind of annoying that it doesn't allow me to

do that with one of the lower models. Opus does allow me to do that, and GPT-4 does allow me to do that, but I can't do it in Sonnet or Haiku. So I'm really waiting for them to either fix that with the lower models or to give me Opus 3.5. I cannot wait to use that. Definitely gearing up for Vegas, Black Hat, DEF CON. Hope to see you there! I'm going to have stickers, but, uh, don't

really have anything else other than stickers. Might eventually do a t-shirt or something with, like, a superhero logo of just the UL logo, but no text or anything. So, I've been feeling quite grateful for my life lately, and somehow, like, super-powered. Not really because things are going well, which, uh, they're definitely going pretty well in terms of, like, the business and everything like that. It's more so that I feel like I'm living an

authentic life. I feel like the stuff that I'm doing is all oriented in the same direction, and I'm working on essays and a course around this, but it's essentially like: do you know what you're trying to accomplish? And are you heading in that direction? And what I've been trying to do is hack together these systems where I say no to things that are not in the direction of these goals. And one thing that came up in a conversation with my

buddy recently is, um, the cool thing about this. Oh, and I also got this from a David Perell analysis of a bunch of Seinfeld stuff. It's like: fail at things that you wanted to do anyway. And then it really doesn't feel like failure, right? Because I'm doing a lot of different stuff. Maybe too much sometimes, sometimes not enough. But in general, I'm doing lots of different things. And the question is: how many do you say no to? And what are your criteria for saying yes or no?

And what I've come to basically figure out is that if you look at that Seinfeld thing, he basically said, fail doing it, doing what you want the way that you want. And this really resonated with me, because I think about people who are kind of chasing other people.

They're chasing, like, a high school bully, or they're chasing their boss or whatever, some competitor for some reason. And they're chasing them, and they might actually not be thinking about what really drives them, what purpose they are chasing, the thing that they actually want to do deeply, that's part of their own personal identity. Instead, they might be going: well, the thing I'm doing isn't working; the thing they're doing is working, so I'm going to copy them. Or if

someone gives them advice and they're like, hey! You know, this thing that I'm doing works really, really well. You should do that. And one thing that comes to mind around this is Alex Hormozi. He is, like, obsessed with making money. Him and his wife are obsessed with making money, which I respect. They're also way younger than me. So they're like in their late 20s, early 30s. Or maybe he's like mid 30s or whatever, but he wants to

crush business. And guess what? He is crushing business. So what I'm going to do is take some of that DNA from crushing business and try to sprinkle it into my stuff. But what I won't be doing is making the mistake of trying to be Alex Hormozi and switching my focus and my telescope to point towards this distant thing of like, oh, I want to make $100 million, and he doesn't want to make $100 million. He wants

to be a billionaire. I guarantee you that's his goal is like $1 billion in revenue or wealth or something. So the thing is, you have to know what direction your stuff is pointing in, and then you can more easily say no to things because are they going to

get you there sort of accidentally? And that's the way I've sort of oriented things, is like, I want to do things so that I can accidentally make money from them, like do stuff that you would do anyway and hook it up to a stripe page and hope it works out. And I know that's not the way to make a lot of money. And I know you should, like, go all in on a thing that's like, got a giant market and, you know, you spam the hell out of the ads or whatever, and you just try to make

all your money from that. But I feel like it's a bad game. And another one of my favorite things, which I've mentioned here before, is: the worst way to lose is to win playing the wrong game. And so this is all sort of flowing into that same thing. And I've been happy with the newsletter lately, because I've just been more of my authentic self, and I feel like that's something

I just want for everybody. I want everyone to be in tune with who they are and what they want, and to actually chase those things and talk about those things. Talking about it is super important. I'm really worried about people becoming invisible after AI, because AI is going to be finding the coolest stuff and the best stuff. Well, if you're not saying anything, AI

is not going to find you. And also, if you're not saying anything, you might not be thinking the things enough, or you might not be in practice enough with coming up with the ideas, crystallizing them, and writing them down. Because I'm not advocating that people try to become successful authors, or they try to become successful streamers or YouTubers or

anything like that for the purpose of being successful. I'm advocating that they do it because it will make them feel good to know themselves and to articulate themselves and to emote themselves. And that's why I'm obsessed with this whole thing. That's why I'm obsessed with this human 3.0 and all the stuff that I'm talking about. So anyway, I hope you've noticed the difference in terms of me.

But really, the thing I'm trying to do, the whole purpose of doing this for myself, is to battle-test it so I can help other people do it. All right. Stream of consciousness collection of thoughts on US politics. Just skip it if you don't like politics. But I basically lay down a whole bunch of stuff there. Oh, this is just become a member. Yeah, become a member if you can. I would appreciate it. All right. CISA held its first. This is, uh, security stuff here

or it's the news stories section. But yeah, I always start with security. CISA held its first AI security tabletop exercise, with over 50 experts, and they're looking to simulate responses to AI security incidents. I love that they're thinking about it. Extremists in the US are using AI to spread hate speech, recruit, and radicalize faster than ever. And they've got a couple of examples here: President Biden using racial slurs

and Emma Watson reading Mein Kampf. I kind of want to go watch these, but I also don't want to Google for them, so yeah, I do not need that garbage in my YouTube feed. The US has banned the sale of Kaspersky antivirus software, citing security risks from Russia. And they're basically using strong language here. They're like: do not use this stuff, stop using it immediately, get off of it. And I think they have a deadline in July. And there's a new set of high-severity vulnerabilities in ASUS.

Routers let you get complete control without any user interaction, and you want to patch these immediately; the CVSS score is 9.8. Thank you to Dropzone for sponsoring. Dmitri Alperovitch. Yeah, Dmitri Alperovitch. Why is that difficult? That's not difficult. He imagined a detailed scenario of China invading Taiwan in 2028, focusing

on an amphibious assault strategy through the rough Taiwan Strait waters. This one is fascinating to me because I don't have strong intuitions myself. I don't have strong expertise myself in this whole naval warfare, island attack type situation. It's just not something I've ever studied. I don't know the stuff. But what's really fascinating to me is that the super, super experts on this, they disagree with each other, which reminds me a lot of the AI

situation. There I do have strong intuitions, and a lot of expertise in some areas and some expertise in others, but it's like I feel way more grounded and solid and confident in at least my intuitions on the AI side. But here I have no idea what's going on. And in both cases, the absolute best people, which we're supposed to be looking up to, they do not agree with each other, and they disagree massively with

each other. Tyler Cowen had someone on his podcast recently, a Bulgarian woman, and it was so good, and her view was completely different from so many other people's. So I'm watching, like, Peter Zeihan. I'm listening to her. I'm listening to GoodFellows, which is a podcast with a bunch of security and, like, really smart people.

And I'm trying to triangulate between all these different people and basically come up with my own thoughts and opinions based on, you know, not being an expert on it. I just find it so interesting that they can't agree. Oh, same with Ukraine, right? Vastly different predictions on whether or not it was going to happen in the first place, when it's going to end. Vastly different predictions. And that massively excites me, because

I'm like, there is signal in here somewhere. The question is: is it available? Second, are people not getting the signal, or are they misinterpreting it? How are the best experts wrong? And it raises the question of, like, what else are

the best experts wrong on? And I just like to have that constant tension and questioning in my mind as I think about everything, including the things that I believe the strongest. The US is moving to limit investments in China's AI, semiconductor, and quantum computing sectors to curb China's tech advancements, and now they're looking at private equity and venture capital funds. Great stuff. And then I say: yeah, cool, but we need to be building a lot more energy production.

Look at this. Look at this graphic from Leopold. This is from Situational Awareness, which I just printed out, by the way. It's, like, I don't know, two inches thick, front and back. It's a beast. And I'm just going to keep it in my car and read it whenever I'm parked or whatever, listening to podcasts. I won't do them both at the same time. But anyway, I'm going to keep it in the car and read through it. I've already read a whole bunch online and, of course,

watched the video, but yeah. All right. Nationwide armed militia from this guy named Jake Lang, who is a January 6th rioter, and they're using Telegram for coordination; this is a Wired article. A study says staring directly at the camera during online job interviews can significantly boost your evaluation scores, and this is from the University of Hiroshima. AI Agents and the RaaS Revolution, Results as a Service: discussing the evolution from SaaS to RaaS, emphasizing the role

of AI agents. Yeah, I like this a lot. It's, like, abstract, but this is exactly what I'm building with a project called Substrate, which you're going to hear a lot about here before too long. You make a query, and agents and a whole bunch of data work together in a whole bunch of steps, and the output that you get is the result of their work. So, one idea I have here, which is super exciting, is I'm going

to build intelligence reports this way. So on one hand, you've got a whole bunch of sources. Maybe they're hard to find, maybe they're not, but there are many of them, and it takes some effort and curation ability to pick the best ones. Right. You bring that together, you turn that into a giant summary or a giant context file, and you basically start producing intelligence artifacts out of this thing. And that's what I like about results as a service. Okay, you ask the question:

what is going on in Ukraine? What are the latest global trends? Where should I invest my money? Or whatever. And there could be hundreds of operations that happen, multiple different AIs doing multiple different things. But really, it's not even so much the AI until the end, where it's putting it together and telling the story. It starts

with more legacy tech. It starts with all this collection, all this correlation and unification of this data, and getting it into a thing that's ready for the actual AI. And having brushed up against intelligence a little bit when I was in the Army, I was attached to an intelligence group and got to see a little

bit of how things work. Very little. I don't want to oversell it here. But I've also read a whole bunch about it and seen a whole bunch of this stuff on the intel side and also on the military side, and it comes down to this. Oh, I also built a program around this concept at Apple for the security org, and this is a product that's still being used today inside the company, which I'm very proud of. But I

use these concepts there. It's like data, information, intelligence. And the intelligence piece is the one where there's human intelligence going into that effort to turn this into a narrative, into a story, for decision support. So above intelligence, you have decision, right? Or maybe it's not above, maybe it's to the right or whatever. But the purpose of that intelligence is to distill down, so the general is not looking at log files. Right.

Which is, like, data streaming in. Okay, then you have information, which is like a summary of the log files. And this might not be a perfect analogy, but it moves up and up in layers of abstraction, with the entire purpose of a human making a decision. Right? So if you look at results as a service, I'm thinking intelligence as a service. That's the first place my mind goes. There are a whole bunch of different implementations of this. Like, it could be a

business decision. Do I cancel this customer? Do I refund this customer? Do I do whatever? But either way, it's a whole bunch of legacy things happening with collection and stuff like that, and IT systems and data sources and everything. And then there's this thing that traditionally a human has done. Deborah or Chris or Raj or whoever it is, they look at all the data and they say: you know what, we are going to issue a payout for

this insurance policy. And that is the decision part, and the result is a yes or no, thumbs up, thumbs down. And that gets us to this results as a service, right? So ideally you could just take all your different data, all your different decision points, throw that into context, throw it up to one of these services, which is one of the things I'm building as well. And in, like, three or four years, everyone's going to know this, and this is just going to be the way all software

is built. But I'm already building this, and I've already got pieces of this that work. And it's really exciting. I mean, this is essentially the future of software: you ask a question, you include context, and you get back a result, a very high-quality result. So, very excited about this concept. And a very good piece here on Medium, which I wish nothing was on Medium,

but whatever. All right. Sam Logan has a new post on how to get structured output from LLMs, covering various frameworks and their pros and cons. Yeah, really enjoyed this one. Talked about converting English to JSON and a bunch of other stuff. But yeah, enjoyed that. All right. Someone got invited to a project called Rebind, where AI versions of authors interact with readers about classic books. These are the

really cool ideas. I just love ideas like this coming to life. Fabian Both from Octomind explains why they stopped using LangChain for their agents, highlighting its rigid, high-level abstractions. So I'm going to say something extra spicy right now. A lot of these low-level libraries and frameworks for doing really difficult things like RAG, uh, or agents, a lot of these agent frameworks, things like

LangChain, they are extremely kludgy. If anything was overhyped, I would say these first-generation efforts at these things are massively overhyped. And it kind of goes back to this results as a service thing. Results as a service is kind of the future of this, because it's not asking you to stack together these Jenga blocks of really kludgy code. Because here's the problem, and it kind of applies to, um, what are those called, um, notebooks.

It kind of applies to notebooks as well, because notebooks and agents and LangChain, they make great demos. They make great demos. But when you try to take this stuff and build a production app on it, you throw real scenarios into it, you throw real data at it, and it just falls over. And what happens universally, with almost everyone I've seen, and this is definitely what I've done.
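What surviving production tends to require is small components that each validate what they accept and what they emit, so a bad model response fails loudly at a component boundary instead of silently corrupting everything downstream. A minimal sketch of that pattern in Python; every function name and schema here is hypothetical, not from Fabric, LangChain, or Substrate:

```python
import json
import re


def extract_json(raw: str) -> dict:
    """Pull the first JSON object out of raw model output, which often
    arrives wrapped in prose or markdown fences."""
    match = re.search(r"\{.*\}", raw, re.DOTALL)
    if match is None:
        raise ValueError("no JSON object found in model output")
    return json.loads(match.group(0))


def validate_summary(obj: dict) -> dict:
    """Reject output that doesn't match the expected shape before it is
    allowed to flow to the next component."""
    required = {"title": str, "points": list}
    for field, kind in required.items():
        if not isinstance(obj.get(field), kind):
            raise ValueError(f"bad or missing field: {field}")
    return obj


def pipeline(raw_model_output: str) -> dict:
    # Compose small steps; each one either returns clean data or fails loudly.
    return validate_summary(extract_json(raw_model_output))


if __name__ == "__main__":
    messy = 'Sure! Here is your summary:\n{"title": "Ep. 438", "points": ["solar", "RaaS"]}'
    print(pipeline(messy))
```

Each piece is trivially testable on its own, which is exactly what the big monolithic framework abstractions make hard.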

You have to rebuild the entire thing from scratch using smaller parts, which I've always been obsessed with: smaller parts connected together. It's necessary for AI as well, which is why I'm building my very own complete stack on this, which is called Substrate. And I'm basically using those components to build apps. And each one of the components is highly trusted and highly rigorous in what it takes in and processes and what

it outputs. So it's dependable all the way through the pipeline. And if you don't do this, and you just start piecing together LangChain and agent frameworks. And don't get me wrong, I am not being negative about these things. LangChain has done amazing things for AI, and these agent frameworks, AutoGen, CrewAI, they're doing amazing

stuff too, like, no question. But as someone who's actually building this stuff, the reality is completely different when you start throwing production data at it, because you need consistency, you need trust in these systems. And pretty much everyone that I know, they started building with these bigger blocks, and then they got a few months in and they had to tear it all down and start over, building their own individual components. And so I think that's

where this is all going. And eventually that's going to merge into results as a service, which is kind of like the opposite, actually. It's more like somebody did build that dependable back-end infrastructure, and now you're just getting the results from it. And the way to know if it works is: does it work? Do you keep getting the right answer? And if you do, we will, as a society, build trust in that

over time. And ideally, the thing will also be publishing information about its components and the sources that it used, so there'll be some transparency there. That will be part of the quality of the service: whether you can actually inspect it, like, okay, which models did you use, which data sources did you use? Right. Because if it's just, no, trust me, because I'm awesome, and you like the results, so don't worry about it,

for a lot of people, that won't be enough. All right. My buddy Evan Özcelik, a long-time ULer, argues that our obsession with convenience is making us less human, and he wrote this on his blog, Snake Eye Software. Solar generated a fifth of global electricity at midday on the summer solstice. Solstice? Yep. A fifth of global energy

at midday on the summer solstice. That is interesting. I feel like, I don't know, I'm not an expert on energy either, but I really just wish we would go way heavier, way heavier on solar. Build 100 giant mega solar farms, or 500 of them, and just go head-to-head against China on solar. Because nuclear reactors, I understand that they're really good energy, and I understand they're awesome, and

I understand all that, but they're very complex. They are fundamentally dangerous, even though I know they're way safer than they used to be, and it's possible to make them safe. Here's another reason: they freak people out. So there's going to be so much more friction in trying to get one or two or ten of them built. And we are in a race where speed matters and time matters. So China can just build a whole bunch of those, because they have nobody to ask. They're just like: move, move.

We're building stuff here, and everyone's like, yay, China! They have just this massive advantage. Well, we can move fast too if we do solar, and it's not freaking people out. So let's just go build 500 mega solar plants, like Elon has been talking about forever, and just jump way ahead of the game. And still do nuclear, right? Where that makes sense and where we can. But don't let it

slow us down. So, one item talking about using SSH as a sudo replacement: basically, you can use it to execute commands as another user, so why not use that permission model instead of sudo? Interesting. Apple is renaming Apple ID to Apple Account. I like it. A group of 17 secondary schools in Southwark, London, are going smartphone-free to combat the negative effects of phone use on students, and I really love that Jonathan Haidt's work here is

spreading so fast. He put out his new book, he's been blasting it all over podcasts, and I just love that his work is catching on. And Gavin Newsom is now going to ban, uh, smartphones. Or he's trying to ban them, and likely this is coming from Haidt's impact as well. House prices are surging again, with the global index up over 3% year on year. Insufficient sun exposure is now a serious public health issue, potentially causing hundreds of thousands of deaths annually in the US

and Europe. Isn't this like cheese and eggs and milk and, I don't know, so many things? It's like: that'll kill you. Well, then everyone stops, and it's like: yeah, if you don't do that, that'll kill you. So I can't wait till Huberman figures this out for us, reads all the papers, invites all the experts on. And I know a lot of people are starting to doubt Huberman and stuff like that, and I'm getting a little more cautious with him. I'm not sure why,

but I'm getting a little more cautious with him. But I don't know anyone else who is looking at these papers and doing this great analysis. And I don't fault people for being wrong sometimes. I don't. I fault people for being overconfident, never looking at themselves, never going after their own errors, never trying to self-correct, and not being analytical and pointing that lens at themselves. And I feel

like Huberman is good at that. I would like him to come back and do some episodes on, like, here's stuff I got wrong and why, and why I'm going to fix that in the future. But anyway, he's the best there is right now. And what I would like to know about the sun thing is: I feel better when I go out in the sun in the morning, and I'm not sure it's only because of the light going into my eyes. I feel like there's something about walking on the ground, absorbing sunlight, and a lot of

this feels very emotional. So that's why I actually want to see data. Like, I don't want this to be based on vibes, but meanwhile I'm going to do it based on vibes. Seems like a net good. Oh, one caveat there: between 10 and 2 is, like, the worst time, because the UV index is the worst. But super early sun is supposedly fairly safe because of the angle of incidence coming into the atmosphere. So evidently the UV is pretty low, but the visible light

I think is still much higher. I'm not sure if those are correlated, but visible light is going to set your clock, your circadian rhythm, and I feel like there are just other health benefits there too. And anyway, what that study was basically saying is that a lot of people aren't getting any sun. They're not going outside, they're not doing anything. And I wonder how mixed that is with other things, like exercise or whatever, but obviously they try to filter that out. But the idea

is basically people aren't going outside, and they're depressed. And guess what? There's probably a connection. Having more positive experiences in life is linked to lower odds of brain disorders like Alzheimer's and lower cognitive decline. And someone got these secret Ted Cruz funding documents, slightly redacted. This guy over here called Capital Press, he's basically like a watchdog, like a one-person journalist group monitoring

the goings-on of Congresspeople. And it's not in, like, a stalkery, nasty way, or an illegal way. I think it's more like OSINT: what things are being dropped out there in the public, you know, little clues that can be hoovered up and checked out. And this is one thing I'm really excited about. This is another thing I'm building in Substrate: an intelligence collector, which will allow me to do intelligence reports like the one that I did before. And like this,

this guy is doing. So, yeah. Before, basically, this was spy agencies, right? Journalists and spies were able to do this because they had the money and they had the training. Well, AI is cheap. It has a lot of knowledge about how to do things. It also has knowledge about how to turn those things into a story and see patterns between them. So I'm going to build something really nasty here. Really, really cool. Nasty.

I mean awesome. So I can have, like, a daily intelligence report of: here's what the smartest people in the world think about this, here's where their ideas diverge from each other, here's how their opinions diverge from one another, and here's where their ideas point at the exact same thing. And I can do, like, a heat map of opinions. I can't wait to mess with this thing. All right, on to our solar eclipse photo competition. Look at this thing. This thing is insane.

Look at that. Look at this. That is so beautiful. Purple. It's got the solar halo, and you've got a plane. Oh, second place. Yeah, the plane one's way better. Sorry. Yeah, look at this. Purple. Oh man, got some vapor trails there. Yeah, very cool. All right. Ideas. Toddler intelligence to PhD in around four years: that's what Mira Murati just talked about on a podcast. Very interesting. I've got this thing: just don't call it AI. If you're tired of hearing the word AI, I have a solution

for you. Forget AI. Forget the word. Forget everything about it. And now just imagine the addition of hundreds of billions of little helpers that people and companies can use to do things like white-collar work: writing and editing documents, responding to emails, doing customer service, making sales calls, etc., and taking ideas and turning those into full presentations, books, or films or whatever, and millions of other tasks that otherwise would not have been done at all, or would

have been done slowly or poorly by humans. And we're about to have that. That's what we're about to have. We're about to have billions of those things, doing tons of stuff. I mean, just mountain loads of stuff that has never been done before or was being done worse

by humans. Maybe we should call these things helpers, or maybe we should call them whatever, but they're definitely going to be very helpful, especially to employees who want to do 10x the amount of work, or 100x, or 1,000x the amount of work that they were doing before, and have it be done in a more consistent way or with higher quality. And remember, the bar for human work is low. It is very low. If you've ever worked anywhere, if

you've ever worked with anyone: we are fallible, we make mistakes, we are inconsistent. And if you ask a whole bunch of experts how to do something, or you have them actually do something, the quality level will be vastly different. And the average worker is just not that good at their job, and they don't really care that much. I mean, that's just a fact. Or, I would say, maybe not the average one, but, like, right below the average. And

there are millions upon millions of those workers. So we're talking about replacing that with better versions: more scalable, more consistent, and more upgradable, like, instantly upgradable. So that's what we're about to have, billions of those. But don't call it AI. Nope, don't call it AI, because AI is an annoying word. So just don't use it and you'll be safe. All right. Discovery: Web Check, a free and open-source tool. Oh, this thing is so cool.

Yeah, I was playing with this earlier. Watch this. All right, so first of all, you can download it and run it yourself, but they also have a web version. So look at the website. Cool, right? I mean, is it valuable? I don't know, but I love it. I mean, it's got so much value in here. I'm just not sure how I'm going to use that value.
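If I were going to use it, one approach is to parse the findings and prioritize them into a task list. A rough sketch of that, where the field names and the severity scale are hypothetical, not Web Check's actual output schema:

```python
from typing import Dict, List

# Hypothetical severity ordering; lower rank sorts first.
SEVERITY_RANK = {"critical": 0, "high": 1, "medium": 2, "low": 3, "info": 4}


def to_task_list(findings: List[Dict]) -> List[str]:
    """Sort scan findings by severity and render them as a numbered task list."""
    ordered = sorted(
        findings,
        key=lambda f: SEVERITY_RANK.get(f.get("severity", "info"), 99),
    )
    return [
        f"{i}. [{f['severity'].upper()}] {f['issue']}"
        for i, f in enumerate(ordered, start=1)
    ]


if __name__ == "__main__":
    sample = [
        {"severity": "low", "issue": "Missing security.txt"},
        {"severity": "critical", "issue": "Expired TLS certificate"},
        {"severity": "medium", "issue": "No HSTS header"},
    ]
    for line in to_task_list(sample):
        print(line)
```

The sorting is plain deterministic code; an LLM would only come in afterward, to explain each item or suggest fixes.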

Probably parse the whole thing and ask AI to tell me what I need to pay attention to and prioritize it for me, turn it into a task list. That's what I'd probably do anyway. It's really cool. Haize Labs: automated red teaming against LLMs. Agentic LLM Vulnerability Scanner: open-source tool for fuzzing and stress-testing LLMs with customizable rule sets. Incogni (I use another one, I think DeleteMe, which is very similar): a service that removes your personal info from the web to block

spam calls and protect privacy. Confusion Is a Muse: this is a really cool little article talking about how you don't want to get rid of confusion, because it speaks to you, makes you smarter. Complexity expands to fill the space available. Mhm. Love it. The Winter of Content: how Game of Thrones changed media, driving traffic and homogenizing journalism. A new salary database created from viral videos. The internet is now for bots, not humans. And an impossibly thin

fabric can cool you down by over 16 degrees. I hesitate to show these things unless I can buy them, but this one was cool enough that I made an exception. And the recommendation of the week: get a copy of Meditations by Marcus Aurelius and keep it by your bed, and read some before you go to sleep. Two major benefits: one, you'll fall asleep faster; two, you will fall asleep feeling grateful for your life. I think that's what Stoicism does for me, and I hope it does it for you

as well. And the aphorism of the week: inspired by love, guided by knowledge. Inspired by love, guided by knowledge. Bertrand Russell. Unsupervised Learning is produced and edited by Daniel Miessler on a Neumann U87 Ai microphone using Hindenburg. Intro and outro music is by Zomby, with a Y. And to get the text and links from this episode, sign up for the newsletter version of the show at danielmiessler.com/newsletter. We'll see you next time.

Transcript source: provided by creator in RSS feed.