Welcome to Unsupervised Learning, a security, AI, and meaning focused podcast that looks at how best to thrive as humans in a post-AI world. It combines original ideas, analysis, and mental models to bring you not just the news, but why it matters and how to respond. All right, welcome to Unsupervised Learning. This is Daniel Miessler. So we added a ton of new patterns to fabric this week. We added create threat model, where you basically pipe in the kind of situation that you're worried about and it outputs a logical threat model, based on an essay that I did a while back. Find hidden message, this thing is so cool. You basically send in any essay or article or whatever where you're not sure if you're being lied to, or if the person is being disingenuous or whatever, and it'll actually tell you what they claim to be trying to say versus what they're actually saying. So that's a really cool piece of functionality. I put a bunch of stuff through it, and it's pretty accurate, and I haven't even fully tweaked it yet.
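If you want a rough idea of what that piping looks like on the command line, here's a minimal sketch; the pattern names and flags are the ones mentioned in this episode, so treat them as assumptions and check them against the repo:

```bash
# Sketch only: pattern names/flags are as described in the episode and may differ in the repo.
# Pipe a situation you're worried about into the threat model pattern:
cat my_situation.md | fabric --pattern create_threat_model

# Run an article through find_hidden_message to compare claimed vs. actual intent:
pbpaste | fabric --pattern find_hidden_message
```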
Create ASCII visualization: so this thing will actually create a visualization, a very rudimentary ASCII visualization, of basically any concept. So you can send in the meaning of life or the Dunning-Kruger effect or really conceptual things, and it'll just draw a picture for you, so you can explain something to a kid or something. I thought that was pretty cool. You can do the same thing with Markmap, which I didn't know about. It's basically like a markdown version of a mind map. So you do the same thing: you take any concept, you send it into this, and it builds you a mind map of those things. It actually has links and everything, and you can expand and contract the mind map. It's really cool. And same for Mermaid visualization. This is JavaScript based, also markdown-like, but JavaScript. You put the output into one of the renderers and you can basically see a mind map or a visualization of the thing. It's richer than some of the other ones, though, so that was really cool.
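As a rough illustration of that workflow (again, the pattern names are from the episode, and the renderer tools here are just illustrative), you might do something like:

```bash
# Sketch only: assumes the pattern names mentioned above; the renderer tools are illustrative.
# Build a markdown mind map of a concept, then open it with markmap:
echo "The Dunning-Kruger effect" | fabric --pattern create_markmap_visualization > mindmap.md
npx markmap-cli mindmap.md    # renders an expandable/collapsible mind map in the browser

# Same idea for Mermaid; paste the output into a renderer like mermaid.live to view it:
echo "The Dunning-Kruger effect" | fabric --pattern create_mermaid_visualization > diagram.mmd
```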
We also added early CrewAI integration. Many, many thanks to Jonathan Dunn, because he's been working on so many of these pieces of client functionality, and we now have our first one working. You run fabric agents with trip_planner and it will actually run a trip planner for you. It'll ask you questions, like where are you going and what do you want to do while you're there, and it spins up a team of agents and actually does that for you. So that's now integrated. And brand new just this morning, it's not even in the newsletter yet, is local model support. So you can now run local models. You do a capital L, and if you do something like llama2 it'll run. You have to install Ollama and get it running, but that's like one command. Once that's done and you have the model downloaded, it'll give you the result from a local model, so you're not even being charged any tokens. And this thing totally works; just got that running, and the documentation is going in as well. So thanks again to Jonathan Dunn for that work. You just want to rerun setup and restart your shell, and then it'll work.
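Here's a hedged sketch of what those two features look like on the command line; the agent name and the capital-L flag are as described in this episode, not verified syntax, so double-check them against the fabric docs:

```bash
# Sketch only: agent name and flags are as described in the episode; verify against the docs.
# Run the CrewAI-backed trip planner agents:
fabric --agents trip_planner

# Local model support: install Ollama, pull a model once, then point fabric at it with -L.
ollama pull llama2
echo "Summarize this article ..." | fabric --pattern summarize -L llama2
```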
All right. So for my work this week, I wrote an essay called To Survive AI, We Must Become Creators. I highly recommend this one, especially for anyone who is in school or has a loved one in school. It was kind of spawned by a post that I saw from somebody on Hacker News. In fact, I'm just going to click into this thing. Yeah, this post from Hacker News, click into that as well. So they're basically saying: I'm about to finish my CS degree in May, and I'm already sick of the tech industry. I'm at a loss. I know the job market could get better soon, but there's no guarantee that it will, and things are looking bad for us with less experience.
And basically, it's kind of sad. It's kind of a real situation that's going on. So I wanted to address that, and I think this is the way that people should be thinking about it. This is just my opinion, but I think that coders and lots of people like that are simply not going to have jobs, right? And some things will go faster than others. But I'll do a quick little walkthrough here. You'll want to read the whole thing, but basically: what is a creator versus a worker? On one hand it's pretty simple, right? Creators come up with new solutions to human problems. Workers are the people who execute on building those solutions. In a corporate setting, creators are the people who determine what to build, how to build it if it's a new thing, and then how to sell it in the current market. And workers are the people who make that happen. Now, there are situations, like we talk about with "draw the rest of the owl," where it's not quite clear if you're doing a creator job or an execution job, right? Because if you're making something that's never been built, technically you're executing, but you're making something that's brand new, so you're kind of creating, especially if you're doing that and you have your own company; then you're definitely creating. But the vice versa is also true. If you used to be a creator, but now you're creating things that are kind of a commodity at this point, well, now you're more of an executor than a creator. So it goes both ways. And here's the way to look at it. Basically, I say that as the amount of creation goes up, the amount of execution will balloon massively, but it's going to be
AI that is actually filling those jobs, right? It's not going to be humans doing all that execution once AI comes online that's good enough to do a lot of this stuff, which I don't think is going to be long. Humans are basically hard to train and retrain, and human competence is fixed, right? Our IQs are not going to go up that fast. I mean, we might get some improvement in our IQs from, you know, AI or some enhancements or nutrition; there are lots of different things that can improve our IQs as humans, but they're not going to go up nearly as fast as AI is. And ultimately, humans are expensive, sensitive, and need constant replacement. AI won't have these problems, right? It's going to scale with creation. We can make as many AIs as we need to do a particular job. When you upgrade an AI, it's pretty easy to roll it out, right?
And they aren't conscious, they don't get tired, they don't complain, they don't go to HR, you know, to start an investigation, and they don't quit. And they're getting smarter at an insane rate. So what do you tangibly do, right? What fields do you go into? How do I steer my kids towards something or away from something? And should
I still go to college? So my advice is to focus on creation, on ideas, on making new things. Focus on problems that exist in the world that need to be solved, and start thinking about what you could do to solve those problems, what you can build that solves those problems, right? Basically, creating new things to solve human problems is like the immortal job skill. It's the best job skill that you can have. And then you want to think of tech skills, and specifically programming and AI, as the new reading and writing. They're essential, meaning if you're not good at them, then you're probably not going to succeed; you're probably going to be an executor. And even as a creator who's actually handing this work to AIs, you still need to be decently fluent in programming and AI, because they are the language of creation. You need to be able to explain yourself to AI, and that largely is going to be in the language of prompting and speaking and thinking and writing. And some of that is going to involve programming, right? It'll be less and less as time goes on, but you should just learn that stuff. The other thing is you want to get trained on this stuff, and I think you want to get trained in two primary areas.
And I do think college actually still matters, because it's a way of filtering the quality of a person when someone doesn't know anything else about you. So I think college is still going to matter for a little while longer, maybe a decade or something. But when you do study, whether it's in college or self-study, you want to be studying two branches simultaneously. One is a hard skill that's valuable in the market, like AI or development or something like that. And number two is training on how to think. This one is even more important, but you don't want to do it at the expense of the first one; you want to have a hard skill to go along with it. But as time goes on, being able to think, especially about problems and solutions, is the most important thing. So liberal arts education, philosophy, logic, dialectic, all these things become way, way more important. They become the things that separate the creators, the owners, the top people, from everyone else. So you really want to focus on both of those things in your education. You don't want to think so much about fields or companies. You want to think more about problems and problem spaces, right? What are the problems that will go away versus what are the problems that are going to stay with us all the time? You want to go to where the problems are that interest you and that you'd be good at solving. And most of all, you want to get out of the mindset of being a worker and enter the mindset of being a creator or a builder. You don't want to be the person building other people's ideas. You want to be the person creating ideas, finding the problems, and building things that solve those problems. And that's the summary there,
and that's essentially the post. I didn't think we were going to do the whole thing, but that was essentially it. All right, so that was that one. We also posted our sponsored interview with BlackBerry, the second one, BlackBerry Cylance, with Ismael Valenzuela, who is the Threat Research and Intelligence VP at BlackBerry Cylance. We talked about a whole bunch of cool stuff there, so definitely go
check that one out. Going into security: researchers created a worm that exploits generative AI and spreads via prompt injection, and they named it Morris II, which I thought was pretty cool. GitHub is now blocking commits with secrets in public repos. That's really cool. Biden is viewing Chinese connected cars as a national security threat, which I'm very happy about. Got a sponsor here, Kolide. Thank you to Kolide for sponsoring us. The military's Project Maven is now actively using AI to identify and strike targets, marking a significant shift from skepticism around doing this to actually relying on it, and they've already done this in Yemen, the Red Sea, Iraq, and Syria. ShotSpotter, who renamed themselves to SoundThinking, probably to get around some PR drama, uses hidden sensors for gunfire detection, and the exact locations of those sensors have now been revealed; this data was previously unknown. Researchers found over 200 hacking services using AI, things like bots like BadGPT. Cryptocurrency enthusiasts are being targeted with Mac malware. A group of hackers, including rez0, my buddy Joseph Thacker, found some vulnerabilities in Google's AI and cloud systems, and they got 50 grand for that. And a new vulnerability in Hugging Face's SafeTensors conversion service could lead to supply chain attacks on AI models. All right, technology. Nvidia's CEO thinks AI will soon make coding obsolete. Yeah, like we talked about above, I think that's true, and we just talked about the creators and executors essay here. Waymo is expanding to more highways in LA and the Bay Area. I can't wait till they come to my area. I absolutely loved my
experience with Waymo. And Apple canceled the car project. I think everyone knows about this at this point, but they moved over 2,000 employees to AI initiatives, which I'm super excited about. I really cannot wait until this June when they do the new announcements. I'm so hyped for those announcements. Basically, good AI that's close to you and actually on your phone is so much better than amazing AI that's stuck in an IDE somewhere. In 2023, public tech companies added $2.4 trillion to their market caps. Musk is suing OpenAI, saying that they basically got away from their altruistic nature and are just going after profits, something like that. I'm not sure if that's his exact claim, but it's something like that. OpenAI says the New York Times paid someone to hack its products to produce content matching the newspaper's articles. This
is wild. I haven't fully dug into their exact claims in the lawsuit or whatever they're doing, but interesting plot point there. DocuSign has been using customer data to train their AI, and people are freaking out about it. I think this is kind of the wrong question. It's not about whether someone is using customer data to train their AI. Of course they are; everyone should be doing that. The question is, are you training on personal data, on sensitive data, on privacy-related data in a way that customers would object to? Customers are not going to object to you understanding their behavior and stuff. It's going to come down to: are you targeting somebody, or are you doing something that's unexpected and gross? That's the distinction. SpaceX just hit 17 Mbps downloading directly to an Android phone. Now, this is like a new cell tower, essentially, but it's a cell tower that's actually a satellite. 17 megs download to a stock Android phone. Wendy's is looking to test dynamic surge pricing for food in 2025. Seems a little early to announce a strategy that you haven't implemented yet. Like, isn't that giving competitors too long to adjust? But what do I know about fast food strategy, though? February and
January saw a resurgence in tech job cuts. Yes, they are continuing. I didn't think that was going to happen. And the 5090 by Nvidia is going to be like 70% faster than the 4090. Makes me sad, I've got two 4090s, but whatever. 192 streaming multiprocessors and 24,000 CUDA cores, that is insane. Going into the human section, a new study of 13,000 people showed that those with long Covid lost about six IQ points, according to the study. Political extremism is now one of America's top concerns, edging out the economy and immigration. Oregon is reversing its drug decriminalization policy due to overdose deaths and public concern. I feel like 2024 is the year of the pendulum swinging back against a whole bunch of hyper-liberal policies
and attitudes. I just hope it doesn't swing too far and then just keep swinging violently back and forth, because that's no good for anybody. California is proposing a bill to ban homeless encampments near public spaces. Florida is experiencing a number of outbreaks of already-beaten diseases. Basically, you've got people on the left and the right who are vaccine skeptics; they're not vaccinating their kids, and we're not hitting the vaccination numbers that are required for herd immunity. And people are getting diseases like the measles that have been knocked out for decades or hundreds of years or whatever. Alcohol-related deaths in the US jumped by nearly 30%, to almost 500 daily. And a neurosurgeon is using ultrasound to tackle Alzheimer's and addiction, with some really exciting results. Had a crazy idea in the idea section here: what if we get AGI-powered robots before we get autonomous cars? What if it's actually easier to do AGI that is capable of evaluating a situation? Maybe not. Maybe they are the same thing, because the AGI that we will have built wouldn't be trained on car stuff, and Tesla's stuff would still be the best anyway. It's just a possibility, a random thought. In other words, maybe it's easier to put a local GPT-6-level AI in a car and have it learn a whole bunch of stuff than to actually try to make the car autonomous. I don't know, like I said, it's a random thought. And Sunday was 30 years with my love two piece Combee, and Dune 2 was insanely good. I've already seen it twice, going to see it again. All right, discovery. My buddy Jason Haddix put out a sick episode of his newsletter called Executive Offense, with lots of prompt injection and other resources about hacking. You definitely want to go check that out and subscribe to his newsletter. Do Literally Anything,
great essay here. Caltrans has public CCTV data in CSV, JSON, text, and XML formats. Got a deep dive into Git configuration here. How to get Nmap to detect new services. How this person decides if your website is worth a revisit. The internet feels fake now. Tyler Cowen shares his personal strategy around listening to music; I didn't agree with it, but I actually really enjoyed reading it. Apple is releasing Neuromancer as a ten-episode series on Apple TV+. I definitely agree with somebody who said this, I can't remember if it was Scott Galloway, but Apple TV has basically become the new HBO. HBO got knocked out because they're calling it Max now and nobody cares. And so now Apple TV is like fire, it's doing so well, and it's basically become the new HBO. It's like if a show comes onto Apple TV, it's probably pretty good. Bad Therapy argues modern therapeutic parenting is failing, leaving kids anxious and unprepared for life. Algorithmic Thinking: I've ordered this thing, I've got the physical copy coming. I read the first edition and absolutely loved it; I actually remember talking to someone about it while taking a walk when I was working at Apple. Absolutely love this book, and now there's a second edition with new chapters. Can't wait to
read it. It's on the way. And spending ten minutes on something is 1% of your day; I think that's assuming something like 16 waking hours in a day, they have some assumptions in there. And the recommendation of the week: ask yourself if you're primarily a nurturer, a creator, or a worker. This goes back to the essay. Nurturers are basically people that help creators or other nurturers grow up and become fruitful. So parents are basically nurturers, artists are creators, entrepreneurs are creators, therapists are nurturers, etc. And I think that creators and nurturers are the roles that will be most resilient to AI, and they're also the most human roles; they're what we should be doing anyway. We shouldn't be moving paperwork around and tilling fields and all this crap. So this is my recommendation: try
to get out of the worker mentality. My family is Lutheran, right? Hard work. My dad always instilled this into me: your reputation is based on the work that you do. So I get this and I feel it deeply, and I think hard work is noble and honorable, 100%. But we were handed arbitrary jobs just because of the escalation of tech through our lives, right? When there's no clean water, a lot of the work is trying to get clean water. When there's no food, a lot of the work is trying to get food. All those things get phased out as the tech gets better, and that's what we're about to go through again. AI is going to be doing most of the old worker-style jobs better, so we have to start planning our migration to a creator-and-nurturer-style economy. We're all going to be hybrids for a long time, but you want to try to migrate to
that as soon as possible, and tell the people that you care about to try to do the same. And the aphorism of the week, along those lines: the end of labor is to gain leisure. Aristotle. Unsupervised Learning is produced and edited by Daniel Miessler on a Neumann U87 AI microphone using Hindenburg. Intro and outro music is by Zomby, with a Y, and to get the text and links from this episode, sign up for the newsletter version of the show at danielmiessler.com/newsletter. We'll see you next time.