Welcome to Unsupervised Learning, a security, AI, and meaning focused podcast that looks at how best to thrive as humans in a post-AI world. It combines original ideas, analysis, and mental models to bring not just the news, but why it matters and how to respond. All right, welcome to Unsupervised Learning. This is Daniel Miessler. All right, in this one, what have we got here? This is episode 429. So Llama 3 came out last week and it was
super impressive. I'm getting more and more use out of it, actually. I'm finding it's doing a lot of stuff as well as GPT-4, and sometimes it just completely fails and is worse than, like, GPT-3. I think that's the case with all these models: you never know how good one is going to be at certain things and how bad it's going to be at other things. What I really hope we get before too long is a really robust testing framework, so we know what things models are good at and what they're not good at, and we could just have this continuous testing. Because a lot of the benchmarks that are used to show the quality of models, I feel like people are actually building their models to score well on them, and they're pretty useless. There's also this cool BoltAI application. This link is actually bad, I think; I think this is the wrong thing, but if we search for BoltAI,
this should actually pull us up into it. Yeah, this is the one. So this is a pretty cool interface for actually working with LLMs. It looks almost like ChatGPT, but it's a standalone app, and you can use all your different models, right? You could use GPT-4, you could use local models, you could put in your own model. This is just really good. And see here, you've got different AI commands, inline AI. Yeah, it's just really powerful. I like this little app here. So that's one nice app to take a look at. Again, it's called BoltAI. That link I have here, like I said, it's not the correct link, but whatever. Let's see here. Yeah, Llama 3, really impressive. There's also an 8B version, which I don't really mess with. I
only messed with the 70B version. By the way, all the different versions of Llama 3 are also in Fabric, so you can use Fabric with local models, of course, and that includes Llama 3. And one thing I've noticed is that it's way less restricted. It actually will tell you a lot of stuff that Llama 2 would not, and that a lot of the other pinnacle models will not talk to you about at all.
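Since Fabric is a command-line tool, one quick way to script it against a local model is just shelling out to it. This is only a sketch: the `--pattern` and `--model` flags and the `llama3:70b` tag are assumptions on my part, so check `fabric --help` and your local model list before relying on it.

```python
# Sketch: piping text through the Fabric CLI with a local Llama 3 model.
# Flag names and the model tag are assumptions -- verify with `fabric --help`.

import subprocess

def fabric_cmd(pattern: str, model: str = "llama3:70b") -> list[str]:
    """Build the Fabric command line without executing it."""
    return ["fabric", "--pattern", pattern, "--model", model]

def run_fabric(text: str, pattern: str, model: str = "llama3:70b") -> str:
    """Send text through a Fabric pattern using a local model."""
    result = subprocess.run(
        fabric_cmd(pattern, model),
        input=text, capture_output=True, text=True,
    )
    if result.returncode != 0:
        raise RuntimeError(result.stderr.strip())
    return result.stdout.strip()

# Example (requires Fabric installed and the model pulled locally):
# summary = run_fabric(open("article.txt").read(), "summarize")
```

The nice part of wrapping it like this is that the same function works whether the model behind it is local or a cloud one; you just swap the tag.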
It's not quite uncensored, but it's a lot closer to an uncensored model, I would say 50% of the way there or something. And of course, there are already Dolphin versions of Llama 3 that are uncensored. So, not just Dolphin, but other versions too. So definitely take a look at that if you need that. And one thing that's really interesting to me is how good Llama 3 is getting, or local models in general, because Apple is now releasing open source models. Snowflake just
released an open source model. And here's one thing I really think is interesting about this. Think about the daily tasks that humans go through every day: find me a car, make me a grocery list, give me a really good workout, play the best song for right now, get a present for my partner because it's our anniversary, or whatever. All these everyday tasks have, and this is a hypothesis, a limit to how good they need to be. Right? So say you exceed that limit: the model gets the most amazing flowers every single time, picks the best song every single time, and it's really good. If it was GPT-5 that made it that good, then GPT-7 isn't really needed. And local models will exceed what GPT-5 can do, I think, within a year or two or whatever, however long that takes. The point is the models keep getting better, including the local models, and a task that a human wants to have done hits a level of diminishing returns. And I think for a lot of human tasks, we're about to hit that level with our pinnacle models. And then you have your local models lagging behind them. Once those exceed it, you have local models everywhere that can do most tasks. So for example, home automation, home management, automation of tasks around the home. Like I said, groceries, everyday things. Those things can all be done basically instantly and locally on the device, because the local chipset will be good enough, or there'll be a local chipset in the house somewhere that you outsource to or whatever. But you don't need pinnacle models for those. Now, it could be that some of those tasks would actually benefit from a much bigger model, but I think so many of them actually won't. You won't need them. And that's really exciting, because when local models can do most of what we need, that's an insane world. And my buddy Joseph Thacker keeps talking about this thing. It's like,
why not just have it in everything? And he keeps talking about this cool idea, which I want to touch on too, which is this concept of an oracle, which I've always wondered about as well. It's like, if you went back in time, how useful would you actually be? Could you actually describe, you know, chemistry or satellites or combustion engines or any of this stuff? It'd be amazing to have, like, a little soccer ball that talks to you. And inside the soccer ball is this model, and you could somehow get power into it or whatever. But this one tiny little device has something like Llama 3 on it, which is a large percentage of all human knowledge ever. And it's local; it doesn't need to call out to any cloud. And you could put that in a soccer ball or a park bench or anything like that. It's insane to me that you could basically bootstrap humanity to some degree with something that's completely local on a chip. All right, more stuff going on: prepping for RSA. Definitely come by and say hi. I might be crazy busy or distracted or socially awkward at the time. Whatever, just come say hi. What have we got here? We've got hugs, waves, finger guns, always appropriate, fist bumps, whatever. Whatever you've got, I should be comfortable with. The last few talks have gone really well. There's something to be said for speaking to share an idea rather than
trying to, like, do a presentation. I've crossed over for most of my talks, at least; I'm trying to cross over completely to this thing where I just have this idea that I want to share, and the goal of the talk has nothing to do with the talk or the slides. It has to do with how many people actually get converted over to this way of thinking at the end. And when you're trying to do that, it makes the slides not matter. It makes whether you stumble over your words not matter. Nervousness kind of goes away. And I've got a couple of talks, or essays, about how to make that happen. Anyway, there's a breathing technique for the tactical version, and then there's this concept for getting rid of the overall problem of stage fright. If you simply have something to share and you are sharing it, that is not a stage fright situation. It's very strange. It's like, what are you actually looking at? If you go up on stage and you're thinking about your talk in terms of the slides, the slide notes, where the cameras are, where the monitor is, where the people are, who they're looking at, if you pull into yourself like that, you're going to get stressed. You're going to get nervous. The way to fix that is to go outside of yourself. What's bigger than yourself? What is outside of yourself? The idea is the thing that matters. You focus on that, and suddenly you're very calm, because what you want to do is convert your excitement away from anxiousness and into excitement around sharing the idea. And you could view this as a hack, or you could view it as just a better way of thinking about presenting, period. And I think it's both, but I frame it as the second one. I updated the intro to the newsletter, focusing on Human 2.0 transitioning to 3.0. Let me
know what you think of that. I put discovery into each section this time instead of having a dedicated Discovery section. And if you have time, you've got to go listen to this conversation between Tyler Cowen and Peter Thiel. I'm not a Peter Thiel fan in general; I didn't like a lot of his political views a while back. They had this wide-ranging conversation, from Star Wars to the Antichrist to the Bible to Shakespeare to economics, political theory, theology. It was an insane conversation. What I like most about it, because I've shown it to a few people and they were like, yeah, I don't know why you liked it so much, and a few people said, look, they just talked about a whole bunch of stuff I didn't understand. Here's my trick for you, and this is why I was so excited about this talk. I don't often find anyone lately, because I read so much, who says a string of words, or really a string of answers to questions, where in those answers I'm like, what does that mean? Like Strauss, for example. Strauss was one of the people he was talking about in this thing, and I'm like, I don't know Strauss yet. What does that mean? So I ended up having to go into Google and look up all these different things. Well, not Google exactly; more like talking to my DA and having her explain the stuff to me. And then I also put a set of books into Audible as well. So now I'm going to go read these canonical books that came from these thinkers. I don't accept it when somebody talks over my head, because I essentially say, well, you can only do that once, because now I'm going to research all that stuff and, first of all, figure out what you were actually saying, to see whether it just appears smart when it's actually not. But what I came away with after deciphering all this and getting a quick primer on it is the realization that Peter Thiel knows a lot of stuff. And what I'm really attracted to, and I don't know if this is something where he changed, or where I changed, or maybe both, is that he doesn't seem bound to any sort of side. He seems bound to principles, to what he wants to see happen. And I think that is really powerful. And I talked about that, I think, in a previous show; I can't remember where I talked about it recently. Oh, it's actually in this newsletter, actually. So I would say definitely check this out no matter what. Check it out. You might not like it, but listen to what I was saying before and see if that helps you like it more. All right. So I wrote a new essay on the old paradigm of planning a career no longer working, basically saying you should plan your career around problems instead. And I actually released a standalone video on this as well, which is also going to come out as a podcast, so you can check that out on YouTube.
It should already be live. All right, security. The House just passed a bill making it illegal for the government to buy your data without a warrant, calling it the Fourth Amendment Is Not For Sale Act. This is really cool, and it's bipartisan; it seems like it would have had to have been. Not sure about that, but I'm pretty sure that's right. Sandworm, the notorious Russian hacking group, has been linked to a cyberattack on a water facility in Texas. The ban on TikTok looks like it's actually going forward. That is pretty cool. Biden signed it, so that's bipartisan as well. A little bit of light in the darkness there. Another organization might have got compromised, among a bunch of other companies, by another China-based attacker. And a flaw in PuTTY lets people who can see a bunch of your cryptographic signatures basically figure out your private keys, which is no bueno. FBI Director Christopher Wray says that China is basically shifting to figuring out how to attack US critical infrastructure, possibly to go in concert with a move in 2027. So it would be like, you know, do these cyber or infrastructure things at the same time to keep the enemy off balance. Moxie Marlinspike basically says he's no longer affiliated with Signal. I've not been a huge fan of Signal. I don't like the fact that you need to restart it every 45 minutes. I don't like being outside of Apple Messages. I prefer to have as few messaging apps as possible, and because I'm on Apple, that tends to be my favorite. Not a fan of WhatsApp, not a fan of Telegram, so I tend to just prefer Messages. I was okay with Signal a little bit because it was Moxie, but now Moxie is out, so I'm going to try to get people to talk to me on Messages again. Sacramento International Airport had to stop flights because somebody deliberately cut the AT&T internet cable that provided
internet to the airport. Tailscale now has SSH. DeepMind's boss says Google is going to outspend everyone in AI. You know, it's not about outspending, guys. You've been able to outspend, and have been outspending, people for a long time. It's that you've lost the ability to ship good products, because you don't have vision as your primary guide. You have, basically, engineers making stuff and throwing it at the wall, and then, like, three years later, if it launches, you put it in the graveyard. It's just a complete joke. I think you need new leadership, honestly, is what you need. And Stanford released a quality report on AI models, and we did a micro summary from Fabric. So basically: AI surpasses humans in specific tasks, but not in complex reasoning. I think that'll probably change soon. The US is ahead in model development. And yeah, pretty good breakdown here. Interesting argument about how search engines, especially Google, can actually move elections. Google fired 28 employees for protesting, and they actually fired some more people recently as well. Google has also merged its Android and hardware teams. Netflix is using FreeBSD-CURRENT for its edge network. That is surprising to me; I would have thought those packages would be old, not well maintained, and definitely behind. So that's surprising in a good way. Reddit is showing up a lot more in Google results; they actually have some sort of contract together. I think Google sent Reddit like $30 million or something, so something around there. Apple's AirPlay is starting to come to hotel rooms, hopefully Marriotts soon. TinySA is a budget-friendly spectrum analyzer. Programming is mostly thinking. It is; this was a good essay, I enjoyed it. A broad introduction to AWS log sources and events. And Gen Z is outperforming previous generations
at their age. This was also surprising to me. This article says that societal decline mirrors the death spiral seen in ants. Why Everything Is Becoming a Game: this blog, by the way, is really cool. I love this blog. Gurwinder, yeah, this guy is awesome. Definitely want to subscribe. There's a study that found that jobs that require you to think a lot are protective against Alzheimer's. Bayer is doing an experiment where they remove most of middle management and let basically 100,000 employees mostly self-organize, and they're hoping to save something like $2 billion. I think this is really cool. I love this idea, because 100,000 people with all these layers of middle management is just a giant waste machine.
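As a toy illustration of the kind of org analysis that could be automated here, and to be clear, the names, numbers, and heuristics below are all made up for illustration, not anything Bayer is actually doing, here's a sketch that flags management layers and thin middle managers in a toy org chart:

```python
# Toy sketch: flag suspicious management layers in a made-up org chart.
# All names and thresholds are hypothetical, purely for illustration.

from collections import defaultdict

# employee -> direct manager (None = top of the org)
ORG = {
    "ceo": None,
    "vp_eng": "ceo",
    "dir_eng": "vp_eng",
    "mgr_eng": "dir_eng",
    "dev_1": "mgr_eng",
    "dev_2": "mgr_eng",
    "mgr_misc": "dir_eng",
    "analyst": "mgr_misc",
}

def direct_reports(org):
    """Invert the employee->manager map into manager->reports."""
    reports = defaultdict(list)
    for emp, mgr in org.items():
        if mgr is not None:
            reports[mgr].append(emp)
    return reports

def depth(org, emp):
    """Number of management layers above an employee."""
    d = 0
    while org[emp] is not None:
        emp = org[emp]
        d += 1
    return d

def thin_managers(org, min_span=2):
    """Middle managers (they have a boss) whose span of control is
    below min_span -- candidates for collapsing a layer."""
    reports = direct_reports(org)
    return sorted(m for m, r in reports.items()
                  if org[m] is not None and len(r) < min_span)

print(max(depth(ORG, e) for e in ORG))  # deepest employee is 4 layers down
print(thin_managers(ORG))               # ['mgr_misc', 'vp_eng']
```

A real version would overlay communication patterns, work products, and budget on top of this graph, which is exactly the overlay idea discussed here, but even the toy version shows how mechanical "find the wasted layer" can be.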
And who better to figure that out than AI? It figures out who's doing what, what functions actually need to get done, whose teams are structured in what way, how they communicate with each other, what work products are actually being produced, and what those pipelines for producing work look like. You overlay that on the org structure, you look at the budget and everything, you look at the outcomes and how fast things actually move. And AI is going to look at it and just be like: this is a horrible mess. Why do you have all these managers? What are they actually doing? Now, humans have already known we didn't need all those layers of middle management. It's so common it's just a meme; it's common knowledge. But AI is going to be able to look at any given company and say, here's why this is a giant waste. You know, Chris isn't doing anything, and he hands it to John, who's not doing anything either, and what is Mary even doing? This makes no sense whatsoever. And it's going to recommend: look, you need your experts, you need your high-level thinkers, and you need very little middle management, which really is probably going to be AI agents very soon. And of course, at some point you can even use AI for the top and bottom pieces as well. But first they're going to come for those middle managers, and I think that's a good thing. The term brainwashing morphed into a blanket term for unconventional behavior, and this came from MKUltra. I'm seeing a lot of stuff talking about MKUltra lately; I wonder what the reason for that is. Okay, ideas and analysis. Like I said, I talked about Peter Thiel. I'm not going to read this whole thing; it's a bit of an essay here. I think I might turn it into a standalone essay. Okay.
Let's see here. Okay, recommendation of the week: establish your ground truth in terms of morality and the society that you want to live in. This goes with the essay above. Establish that ground truth and lock it in. You say, look, this is how I think the world should work, and that is your morality, okay? And it's got no labels on it: left, right, progressive, conservative. Nobody cares. You don't put anything on that. You just say, here's how I think the world should work. Then you say, okay, I'm going to listen to any ideas, right? I'm curious how those ideas or those structures or those religions or whatever would affect this world that I want to see come to pass. And then you don't discard ideas just because you disagree with something else they say somewhere else. So if people have this idea constellation, or belief constellation, you are free to pick and choose: where are they smart about things, and where are they dumb about things, at least in your mind. I get so many comments where people are like, hey, listen, I absolutely love your stuff. I don't agree with everything, but I agree with you on most things. And I'm kind of happy when I hear that
because I feel like they're parsing everything. They're deciding, because they've already done number one: they've decided what they believe. And I've had other people say that over time they came to believe something that I also believed, which is cool. And sometimes I'll switch and, you know, believe what they believe. The point is, you don't have to accept everything a person says to pull wisdom out of what they say, right? Maybe they're only good at making the best, you know, Reuben sandwiches, and maybe that's all I get from them, because the person's name is Tucker Carlson and they don't know anything other than sandwiches. That's fine, I will listen. I probably won't spend time trying to seek out that content, but it doesn't matter; that's the point of this. Feel free to label people as bad overall, or stupid, but realize it doesn't mean they're wrong about everything. So again, same concept: be willing to take truth from anywhere. And basically, you want to regularly revisit your number one, which is your basis for how you think the world should work, and then regularly re-evaluate, right? Pull in lots of different opinions and see if it changes number one. And the aphorism of the week: the best way to have good ideas is to have lots of ideas. Linus Pauling. Unsupervised Learning is produced and edited by Daniel Miessler on a Neumann U 87 Ai microphone using Hindenburg. Intro and outro music is by Zombie with a Y, and to get the text and links from this episode, sign up for the newsletter version of the show at danielmiessler.com/newsletter. We'll see you next time.