Welcome to Unsupervised Learning, a security, AI, and meaning-focused podcast that looks at how best to thrive as humans in a post-AI world. It combines original ideas, analysis, and mental models to bring not just the news, but why it matters and how to respond. All right, welcome to Unsupervised Learning. This is Daniel Miessler. All right. What have we got here? Lots of content this week. Super excited for this episode. Did an all-text episode this time, which is kind of a callback to how
I used to do it. Got upcoming speaking at the Snyk conference in October, Cyberstorm in Switzerland, and Black Hat in Riyadh. One tool in AI that you should be trying out, that everyone is talking about, including Karpathy and a bunch of other people, is Cursor. So cursor.com is the domain. I thought it was cursor.ai, but it is not. The big feature appears to be that it basically looks and feels like VS Code, but what you do is you upload your entire repository into it.
I guess technically VS Code could also see all your code if it's all in there as well. But I think what Cursor is supposedly doing well is taking all of that content and kind of using it in context to understand it better, not just the current file that you're in. So I believe that's the big feature. If I'm wrong about that, somebody correct me. Okay.
My work: a couple of massive episodes or essays that I put out this week, and I actually sent them out directly, which I only do for things that I think are really decent and also evergreen. So one of them is called We Were Lied To About Work, or The Real Problem with the Job Market. I had two different titles, actually, but it's basically why layoffs, hiring, the job market, and work in general just really suck right now. And I would say it's probably one of my top
20 essays ever. So highly, highly recommend that one. And I've got a new way to explain AI, and specifically LLMs, to people, and I think this one is short enough that I'm just going to give the highlights of it. So let me take you into this one. Here is the basic concept: five levels of LLM understanding. The first level at the bottom is that it's just predicting text, like the next text token in a text sequence. That's all it's doing. It's not magical. Right.
And this is kind of the most common argument for why LLMs and AI are not all that special, or specifically AI based on LLMs. People are just like, look, it's just next token prediction. No big deal. If you pull away from that one level, it's like, look, it's just predicting the next item in a sequence, okay? So it's just next token prediction, okay. One abstraction away from there.
And this all comes from this thing from Eliezer Yudkowsky, who had this great kind of little statement on X that made me think about this. But essentially, for this level, level three, it's predicting the next token in the description of an answer. Okay. So what Yudkowsky said was that any well-posed problem is isomorphic with predicting the next token of an answer. Okay. And that is really, really powerful. And it wasn't the
exact quote, but that's essentially it. If you go one level above that, okay, it's predicting answers to insanely difficult questions. So here are the levels so far: it's predicting the next token in a piece of text; it's predicting the next item in a sequence; third level, it's predicting the next token of an answer; second level, it's predicting answers; and the top level, it seems to know everything. Okay. And
this all comes from this post. Here it is, right here: "It just predicts the next token." Literally any well-posed problem is isomorphic to predicting the next token of the answer, and literally anyone with a grasp of undergraduate comp sci is supposed to see that without being told. I don't agree with that last part. That part I don't agree with; I don't think that's accurate. In fact, for the vast majority of people, that's very much not the case. Very much not the case. I would
say 95%. So I would say I disagree on multiple levels with that second piece. The part that I like is the part I highlighted and separated on purpose: literally any well-posed problem is isomorphic to predicting the next token of the answer. That is extraordinary. That is brilliant. That is the answer to the "it's just next token prediction" argument. That is the immediate counter to it: a well-posed problem equals prediction of the next token of the answer. So that's that one.
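To make that concrete, here's a tiny sketch of the idea. This is my own toy example, using GPT-2 only because it's small and public, so the actual answer quality will be bad, but it shows the mechanism: you pose a well-formed question, and the "answer" is nothing more than the model predicting the next token over and over.

```python
# Toy illustration: "answering a question" is literally repeated next-token prediction.
# GPT-2 is used only because it's small and public; the prompt and the ten-step loop
# are arbitrary choices of mine.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Q: What is the capital of France?\nA:"
ids = tokenizer(prompt, return_tensors="pt").input_ids

for _ in range(10):                                # predict the next token, ten times
    with torch.no_grad():
        logits = model(ids).logits
    next_id = logits[0, -1].argmax()               # single most likely next token
    ids = torch.cat([ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(ids[0]))                    # the "answer" falls out of pure prediction
```

The loop never does anything but next-token prediction, yet what comes out is an attempt at an answer to the question. That's the isomorphism.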
And basically this essay is a full breakdown of that. So it's a standalone, evergreen essay about why this is a powerful concept and why you shouldn't trust that next-token argument. Okay. Security. CrowdStrike did their 2024 report talking about how North Koreans have infiltrated over 100 US-based companies in all sorts of important places like aerospace, defense, retail, and tech. They didn't mention much
about Blue Friday. Not sure why that was. State-linked Chinese entities are using cloud services from Amazon and other competitors of Amazon to access advanced US chips and AI capabilities. So basically they can't get the chips themselves, but you can go to AWS. Anybody can, or most people can. You go to AWS, you sign up, and you just attach Nvidia GPUs to an instance and start doing workloads. And so this is what China is doing. They're like, well, we can't get the chips, but we can use a
service that leverages the chips. So I'm sure Amazon and the government are in conversations about that, trying to get it fixed. Cisco has patched multiple vulnerabilities, including a high-severity bug in its Unified Communications Manager product. Thanks to ThreatLocker for sponsoring. Two US lawmakers are urging the Commerce Department to investigate cybersecurity risks associated with TP-Link routers, citing vulnerabilities and potential data sharing with the Chinese government. So
kind of a Huawei situation, a mini Huawei. Quarkslab found a major backdoor in RFID cards made by Shanghai Fudan Microelectronics, one of China's top chip manufacturers. I don't think this is a China influence story. I think this is just vulnerabilities in chips, and the impact here is the fact that it's smart cards for things like office doors and hotel rooms and whatever. And it's a big company that does this, and lots of RFID chips and cards come from China. So just an impact due to
the size of the market type of situation. What is this bouncing now? Now you're seeing my Diablo 4 chat messages. It's really important stuff I'm working on in Diablo 4. All right. What are we talking about here? Yeah. Thanks to Defender Five for sponsoring. Next one. Researchers found a way to exfiltrate data from Slack's AI by using indirect prompt injection. The US Navy is rolling out Starlink on its warships to provide high-speed, reliable internet connections, improving operations
and crew morale. AI and Tech. Anthropic has published the system prompts for its latest AI models, including Opus, Sonnet, and Haiku. AgiBot is a Chinese company, and they just unveiled a fleet of advanced humanoid robots to compete directly with Optimus, which is the one from Tesla. They're designed for tasks ranging from household chores to industrial operations, and they're going to start shipping supposedly by the end of this year. So, like, immediately. And Optimus
is nowhere near ready for that kind of timeline. So I'm basically anti-Chinese imports for both robotaxis and humanoid robots, because China is too far ahead and they're too cheap and I would say just too good. So I don't want to give them a head start. And I don't
like being anti-competitive against any sort of country. I don't like slowing pressure from the outside, but if this were India or Ireland, I would actually be okay with them applying pressure to the US. Not China, though, because they're too obviously a malicious actor that actually just wants to crush the United States in all aspects, including the United States going away as an economic power. So
it's not like friendly competition. So I think we should actually just either tax the hell out of them or not allow them to function until we have proper footing and can compete properly. And speaking of that, Tesla is hiring people to train its Optimus humanoid robot by wearing a motion capture suit and mimicking the different actions. You get like $48 an hour, but you have to walk over seven hours a day carrying 30 pounds while wearing
a VR headset. That's a tough job. Waymo is looking to launch a subscription service called Waymo Teen, so this is basically to help parents not have to shuttle kids around. Although, depending on the age of the teen, should they have their own car? I'm not sure. But anyway, cool idea. An AI scientist developed by the University of British Columbia, Oxford, and Sakana AI is creating its own machine learning experiments. Okay, let's back up. An AI scientist is creating its own
machine learning experiments and running them autonomously. I think this is where most innovation will come from: AI not just implementing tasks, but doing new research. And I talked about that in a post. Victor Miller, a mayoral candidate in Wyoming's capital city, has vowed to let his customized ChatGPT, named VIC (Virtual Integrated Citizen), help run the local government. I'm actually working on how to articulate a
political platform for any level of office using Substrate. Basically, define exactly what you want to do and how it branches out into problems and strategies, and most importantly KPIs and promises. So you could literally say, like I talked about in the Substrate video, look, here's how I'm measuring myself. Here are my assumptions. Here's what I believe the problems are. Here are my strategies for attacking those problems. And here are my specific projects that I'm going to
do to implement those strategies. And here's how much it's going to cost. Here's how I'm measuring myself. And here's the promise: if I don't move these numbers by X amount, you should fire me in two years or four years or whatever the term is. So I think this is where leadership is heading: transparent descriptions of vision, strategy, KPIs, and promises.
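Just to show the shape of what I mean, here's a rough structural sketch. This is mine, not the actual Substrate format, which is its own markdown-based thing, and every value is a made-up placeholder:

```python
# Structural sketch only: problems feed strategies, strategies feed projects,
# and everything rolls up to KPIs and an explicit promise you can be fired against.
# Every value here is a placeholder.
platform = {
    "vision": "A safer, cheaper-to-live-in city",
    "problems": ["Housing costs rising faster than wages"],
    "strategies": ["Streamline permitting for new housing"],
    "projects": [
        {"name": "Permit fast-track office", "cost_usd": 2_000_000},
    ],
    "kpis": [
        {"metric": "median rent as a share of median income", "target_change_pct": -10},
    ],
    "promise": "If these KPIs don't move by the stated amounts in four years, vote me out.",
}

# The transparency is the point: anyone can read the chain from problem to promise
# and check the numbers later.
for kpi in platform["kpis"]:
    print(f"Measured on: {kpi['metric']} (target: {kpi['target_change_pct']}%)")
```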
Sean Ammirati, a professor at Carnegie Mellon, noticed a massive up-leveling of progress in his entrepreneurship class this year, thanks to generative AI tools like ChatGPT, GitHub Copilot, and Flowise AI. The students basically use these tools for marketing, coding, product development, and recruiting early customers. This is what I've been talking about
with AI automation. If you were competing with a 95-out-of-100 person before, and they were a 95 because they went to CMU, well, now you're competing with a 130 out of 100, because they went to CMU and they're using AI for everything. And for me, in my own life, I read better articles because of AI. I get better ideas because of AI. Therefore I build better stuff because of AI. And this all feeds on itself and just makes that whole top of funnel even better. And I believe your options are basically to upgrade or lose. GM is cutting over 1,000 software engineers to streamline its software services organization. Streamlining by cutting 1,000 devs. The way I see this is they're actually just taking everyone out because they're like, this is a huge waste of time. We hired a bunch of duds. Let's start from scratch and only hire the best possible people, who are probably
also massively augmented with AI. Yeah, so I got a whole bunch of other content. Killer cult members. This is my new way of framing this whole thing: killer cult members are what companies are looking for. It's kind of obvious that this is what startups are looking for, but I think corporations are going to look for this as well. So people talk about, oh, toxic work culture. Guess what? Companies only want employees who embrace that toxic
work culture. They want people who show up and are like, I am religiously dedicated to this. I will sleep under my desk. I will work as much time as I need to. I am fully dedicated to this mission. I wear all the swag in public. I talk about this. I think about this all day long. That is what people want. That is what hiring managers want. That is
what corporations want. Because that behavior compounds, especially if you're on site, and everyone is pulling everyone back on site now, when you are surrounded by other cult members with that sort of energy. And keep in mind, this also has toxic aspects to it. There are massive downsides to this type of culture, but the downsides are not really to the company.
The downsides are mostly to the employees themselves, as a trade-off for taking this much risk to get a lot of reward, potentially in equity or pay or whatever. So this culture is good for companies, and that's why they are fostering it. That's why they're saying you must be on site. So for example, OpenAI requires you to be on site. You have to work on site. There are some exceptions, very few. But in general, they make a cool office in a cool city,
they require the best. They hire the best talent. They basically tell you, or it's naturally implied, that you must be one of these killer cult members. And then that energy, which is synergistic with itself, builds and builds and builds, and it produces the best products that people want to buy, and they get massive valuations. And that's capitalism, and that's what works. What doesn't work is, okay,
let's not do that. Let's have a company culture that's good for employees, and let's make sure there's work-life balance, and let's make sure of all these different policies. And it ends up with a lot of people not working, not doing good work. It ends up with A-players hiring B-players, B-players hiring C-players. You end up with a bunch of C-players, a few B's, and some D-players. And after a while, the corporation looks at their entire headcount spend, their entire human resources spend,
and they're like, I'm not getting value from this. This is not worth it. And they just fire everyone. And so when I see something like firing a thousand software engineers to streamline the software, I don't know exactly what's happening here, but what I'm describing might be happening here, and it is definitely happening all over the place. And what they'll do is they'll go to zero, then they'll find A-players who are killer cult members and only hire those from now on. And here's the crazy part:
100 of those people might be worth 1,000 or 10,000 C-players who are interested in work-life balance. Again, I'm not saying anything about the total value to society of work-life balance or all of that. I've got completely separate ideas about that. Actually, in the essay We Were Lied To About Work, you can see what I really feel about that whole thing. The point is, this is what companies are looking for. If you want
to get hired, you have the answer. Meta is using AI to streamline system reliability investigations with a new root cause analysis system. The system combines heuristic-based retrieval and large language model-based ranking, achieving 42% accuracy in identifying root causes at the start of an investigation. I didn't look to see, or it wasn't in there, how that compares to humans and how fast, because that's the trick: accuracy and speed, and of course, cost. AI companies are shifting focus from
creating godlike AI to building practical products. Who knew? So I don't think this is a bubble pop. I think it's the natural maturing of brand-new tech that just came out, because people are still figuring this stuff out and it's basically day one. Like, AI hasn't even gotten good yet. It hasn't even started to get good. Canada is slapping 100% import tariffs on Chinese electric vehicles starting
October 1st. We were just talking about that. Former Google CEO Eric Schmidt predicts rapid advancements in AI, with the potential to create significant apps, like TikTok competitors, in minutes within the next few years. I know what he's saying, but there's a difference between being able to run an app at scale versus creating it. Of course, he knows that better than I do or most people do. But it's an important point to make. Claude 3.5 can now create iCalendar files from images, and Greg's Ramblings shows how you can use this feature to generate calendar entries by snapping a photo of a schedule or event flyer.
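If you want to try it yourself, the call looks roughly like this with the Anthropic Python SDK. This is my own minimal sketch, not Greg's code, and the prompt wording and filenames are just placeholders:

```python
# Minimal sketch: send a photo of a flyer to Claude and ask for an .ics file back.
# Prompt wording and filenames are placeholders.
import base64
import anthropic

client = anthropic.Anthropic()  # expects ANTHROPIC_API_KEY in the environment

image_b64 = base64.standard_b64encode(open("flyer.jpg", "rb").read()).decode()

msg = client.messages.create(
    model="claude-3-5-sonnet-20240620",
    max_tokens=1024,
    messages=[{
        "role": "user",
        "content": [
            {"type": "image",
             "source": {"type": "base64", "media_type": "image/jpeg", "data": image_b64}},
            {"type": "text",
             "text": "Extract every event from this image and return a valid "
                     "iCalendar (.ics) file and nothing else."},
        ],
    }],
)

open("events.ics", "w").write(msg.content[0].text)  # save the generated calendar
```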
AWS CEO Adam Selipsky predicts that within the next 24 months, most developers might not be coding anymore due to AI advancements. He says the real skill shift will be towards innovation and understanding customer needs rather than writing code. 100% agree. Although most
developers in 24 months? This is the type of thing where it happens way slower than you think and then way faster than you think, at the same time. Most developers not coding anymore in 24 months is wildly off, wildly off. But people who are like, oh, it's going to take forever, it's going to take three years, five years, seven years, ten years, fifteen years, they are also wildly off. Chinese companies have ramped up imports of chip production equipment: $26 billion in the first
seven months of 2024 on chip production equipment. They need to equip 18 new fabs expected to start operations in 2024. They are all in on this because they see the world turning against them and shutting them down. The question is, even with all the fabs and the equipment and the factories, do they have the know-how to actually do what TSMC is doing? My current understanding, from current
knowledge combined with, like, the book Chip War, is no, they don't have the knowledge. So even with all that stuff, it's not going to be enough. Humans. Cisco's laying off 7% of its workforce, around 6,000 employees, as it pivots towards AI and cybersecurity. McKinsey's new study reveals that business leaders are missing the mark on why employees are quitting. They say companies are focusing on transactional perks like compensation and flexibility,
but employees are actually seeking meaning, belonging, holistic care, and appreciation at work. Couldn't have been better timed with this week's essay. 24 brain samples collected in early 2024 measured an average of 0.5% plastic by weight. So a brain is multiple pounds. Is it 3 pounds or 2? I can't remember. 3 or 4 pounds, something like that. It's heavy. You can feel this thing, right? You can feel your head. It's heavy. Half a percent. Okay, let's just type this in:
what's half a percent of the weight of an average brain? An average brain is about 1,400 grams, and half a percent of that would be seven grams. Okay, I put 25 grams of coffee in when I brew coffee, so I roughly have a feel for that. Seven grams is about a third of the amount of coffee that I brew in the morning, and when you brew that much, I mean, it's significant. This is an extremely non-trivial amount of plastic sitting inside
the brain. And there's a lot of speculation right now, which I don't put too much weight into, because you know how we have these health scares: oh, this is dangerous, and that's dangerous, or whatever. But seven grams of plastic, seven grams of plastic sitting inside of a brain, I feel like that can't be good. Unless it's, like, alien plastic that's nanobots or something. But no, this is regular plastic. The question is, where is it coming from? How is it getting in there?
Is it in all our foods? Is it from drinking bottles? I'm actually super concerned about this, because I drink energy drinks. That's why I switched to these, because I think they have less. But I saw a crazy report about my favorite, not energy drinks, protein drinks, my favorite protein drink. It was the Core Power one, and it was, like, off the charts in the amount of plastic, according to this one lab that ran it.
Now, who knows, maybe that lab was run by the competing product, which I went out and bought, by the way, which is Muscle Milk. Anyway, I'm concerned about this, but I'm also cautious, because you don't know if it's just a scare. That plastic could be completely inert and not even matter, and the thing that's causing all this cancer and the drop in testosterone and all this stuff could actually be something else. It could be our sun exposure, which I think is a later story. Yeah, it's two stories down. So anyway,
lots of plastic in our brains. Not sure what that means yet. Gallup has released its 2023 Global Emotions Report, which measures the world's emotional temperature through a Positive Experience Index. I'm opening this one because it is cool. Look at this: experienced anger. You've got these country breakdowns. This is really cool. Then you've got map views. And look at the map view. You can click on sadness, stress, worry, pain. Look at that, enjoyment. Okay. So
dark is better, right? Um, China. Yes. Yeah. Dark. Yep. Okay, so let's look at a light one. Yes, 100%, dark is better, because that's Afghanistan. Two-thirds no. Oh, man, that's so depressing. Afghanistan has two-thirds answering no to enjoyment, however the question was asked. Another one with a similar color: Turkey. I'm pretty sure that's Turkey. Yeah, that's Turkey. Uh, Ukraine, about half, for obvious reasons. What's this one? Morocco, half. What's this one? Tunisia, roughly half. Super happy over here:
Somalia. Why is Somalia so happy? See, I love these visualizations. Uzbekistan, very happy. What have we got over here? Norway is happy. I keep forgetting Norway is the far-left one. What is this one over here? Estonia, very happy. Iceland, could have guessed that one. Mexico, kicking total ass over here. Ireland seems, yeah, really happy. UK, not so much. What are these over here? Indonesia. Indonesia, very happy. Malaysia, very happy. Anyway, really cool stuff. The Russian Federation:
pretty low score, actually. I think the actual worst one is Afghanistan, which is totally explainable. I mean, one experience I've been having: I take a decent number of Ubers, and I always talk to the driver. In the Bay Area, my chances of getting someone from Afghanistan are like 80, 90%, and I talk to every single one of them for the entire duration of the trip. Usually they are interpreters. Don't say translator,
they are interpreters. And oftentimes they worked for the US government, which is how they got their visa to come over here. And I ask what they have back home and whether they brought their family. Most likely they didn't bring most of their family, and their family is in danger because they are here, and it is going from bad to worse every single week. It is just so, so depressing. Anyway, I'm on a tangent here, but talk to your Uber drivers. Experience life
through other people's eyes. Okay. Um. Data from surveys conducted in 142 countries. Mix of telephone, face to face, and some web stuff. About a thousand respondents per country. So that's not great, but I'm sure they're doing it scientifically, so it's a decent sample. Non-smokers who avoided the sun had a life expectancy similar to smokers who got the most sun. And this is nearly 30,000 Swedish women over
20 years. By the way, Scandinavia likes to smoke, and Germany too. This is the worst thing about Europe, and there are many things competing for that right now, but one of the worst things about Europe is smoking. I go there and I'm just like, what is going on? Yeah, Switzerland. I'm about to go back to Switzerland, and everyone's going to be smoking inside the restaurant. They're smoking inside the restaurant. I'm just like, what is this? I thought you guys were
the advanced group. Okay, the research suggests that avoiding sun is as risky as smoking. So this needs more research, obviously. But like I said, damn. For me, I get sun in the morning. I got a decent amount of sun this morning. Massive boost for me. I put on the Waking Up app, listen to Sam do a ten-minute thing, I do a ten-minute walk out there, I do some breathing. That's how I start my day when I'm on a routine, which I should be, and often I'm not. But I
did today and yesterday and the day before. All right. Stanford researchers have found that blocking a particular pathway in the brain can reverse the metabolic disruptions caused by Alzheimer's disease, improving cognitive function in mice. I'm starting to feel like we're about to make massive progress on both Alzheimer's and cancer. And honestly, it's making me want to invest in, like, the top three drug companies. I think I'm already in one of them,
the one that does Wegovy. I can't remember the name of that one. I'll think of it. It's right there. It's right there. Anyway, I'm in that one, but I'm not in the Lilly one. I think they're called Eli Lilly, but they're rebranding to Lilly. So I figure if I get into the two top competitors, one or both of them is going to do something with Alzheimer's and cancer, and we're already doing it with obesity. Like, that's a trifecta. All we need now is, like, balding and aging. Aging
is the big one. Aging and cancer, I would say, are the big ones. And then, like, obesity and hair loss. That's amazing. So, not investment advice, but I'm damn sure getting in. All right. Air purifiers in two Helsinki daycare centers reduced sick-kid days by 30%. And I don't think this is Covid or flu or anything. I think they're just talking about all causes, based on the study parts that I saw. University of Missouri scientists have developed a liquid-based solution that removes over 98%
of nanoplastics from water. It uses water-repelling solvents to absorb the particles, which are then easily separated and removed. I assume you just run it through a filter and those things end up as big globs that get caught by the filter. I expect to see lots more of this. Can't wait for the Huberman episode on this, because I want to be able to do this cheaply at home.
Eli Lilly's weight loss drug tirzepatide, found in Zepbound and Mounjaro, reduced the risk of developing type 2 diabetes by 94% in obese or overweight adults with pre-diabetes. 94%. And Apple Podcasts is losing ground to YouTube and Spotify: a recent study put YouTube at 31%, Spotify at 21%, and Apple Podcasts at 12%. I don't do Spotify; I do YouTube and Apple Podcasts. But all right. Ideas. I thought
of a cool idea for Fabric, Telos, and Substrate: maintain a list of everything I've been really, really wrong about, which I'm already building, and then write a Fabric pattern that looks at that list. By the way, I'm just going to have it all in my same Telos file, as I talked about in the Augmented course, but basically just have this list in there as all my
biggest mistakes, cognitive errors, or whatever, listed in there. Then I also have another section inside that same Telos file, which is all my current beliefs, my model of the world. And then the Fabric pattern basically says: evaluate all my current beliefs, look at all the previous mistakes that I've made, and look for patterns. Look for what in my current beliefs might be broken in a similar way to the way I was wrong about those other things. First of all, it's going to help me diagnose why
I was wrong. Is it because I have a bias towards this one thing? It's like, oh, you're so pro-AI that you were wrong about this thing because you thought technology was the solution. So it turns out your bias is thinking that tech can solve too many things in the world. Good to know. And I'm already actively defending against that bias because I know it's there, but that doesn't mean I'm properly or adequately defending against it. The point is, I want to see the bias. I want it to call me out on it, and look for other evidence that my current beliefs are broken because of that same problem.
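That pattern doesn't exist yet, but here's a rough sketch of the idea. The section names, file name, and prompt wording are all placeholders of mine, and the output is just a prompt you'd pipe into Fabric or whatever model you use:

```python
# Sketch only: pull the "mistakes" and "current beliefs" sections out of a Telos
# file and build the belief-audit prompt described above. Section names, file
# name, and prompt wording are placeholders.
from pathlib import Path

def section(text: str, header: str) -> str:
    """Return the body of one '## Header' section from a markdown file."""
    for chunk in text.split("## ")[1:]:
        if chunk.lower().startswith(header.lower()):
            return chunk[len(header):].strip()
    return ""

telos = Path("telos.md").read_text()
mistakes = section(telos, "Biggest Mistakes")
beliefs = section(telos, "Current Beliefs")

prompt = f"""Here are things I've been badly wrong about in the past:

{mistakes}

Here are my current beliefs about the world:

{beliefs}

Identify the recurring biases behind the past mistakes, then flag which current
beliefs might be broken in the same way, with a one-line reason for each."""

print(prompt)  # pipe this into Fabric or any model you like
```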
Discovery. ffufai uses ffuf and AI, yeah, that's why it's called that, to find more web-hacking targets, by Joseph Thacker. It goes fuzzing recursively, looks at JavaScript files, and finds endpoints that can be tested. Analyze interviewer techniques is a new Fabric pattern that will capture the je ne
sais quoi. By the way, this is spelled wrong in the pattern. I don't know why I didn't just use AI and spell it correctly. I always spell this wrong, but I think the quote is right. Anyway, someone French gave me the proper spelling. Again, more AI should be part of this. Um, back to my pro-AI bias being the problem. But I've been using it on Dwarkesh and Tyler Cowen content.
It basically figures out and tells you why they're such good interviewers. Harness is a quick tool I put together to test the efficacy of one prompt versus another. It runs both against an input and then scores the outputs according to a third, objective prompt that rates how well each one followed the plot and actually executed the instructions the prompts were trying to carry out in the first place. So, super useful.
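This isn't the actual Harness code, but the flow is roughly this. The candidate prompts, the judging rubric, and the use of the OpenAI SDK as the stand-in model are all placeholder choices of mine:

```python
# Rough sketch of a prompt A/B harness: run two candidate prompts against the same
# input, then have a third "judge" prompt score how well each output followed its
# instructions. Prompts, model, and rubric are placeholders.
from openai import OpenAI

client = OpenAI()          # expects OPENAI_API_KEY in the environment
MODEL = "gpt-4o-mini"

def run(system_prompt: str, user_text: str) -> str:
    """Run one prompt against one input and return the model's text output."""
    r = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "system", "content": system_prompt},
                  {"role": "user", "content": user_text}],
    )
    return r.choices[0].message.content

prompt_a = "Summarize the input in exactly three bullet points."   # candidate A
prompt_b = "Summarize the input in one short paragraph."           # candidate B
input_text = open("input.txt").read()

out_a, out_b = run(prompt_a, input_text), run(prompt_b, input_text)

judge = (
    "You are grading two outputs against the instructions that produced them. "
    "Score each 1-10 on how well it followed its instructions, then explain briefly.\n\n"
    f"Instructions A: {prompt_a}\nOutput A: {out_a}\n\n"
    f"Instructions B: {prompt_b}\nOutput B: {out_b}"
)
print(run("You are a strict, objective grader.", judge))
```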
State and time are the same thing, by Hillel Wayne. Don't force yourself to become a bug bounty hunter, by Sam Curry. 67 years of RadioShack catalogs have been scanned and are now available online. MD RSS is a Go-based tool that converts markdown files to RSS feeds. You can write articles in a local folder, and it automatically formats them into an RSS-compliant XML file. Super cool.
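The core of that idea is small enough to sketch. The real tool is written in Go; this is just my own rough Python version of the same concept, with placeholder folder and site names:

```python
# Rough sketch of the idea (not the actual tool, which is Go-based): walk a folder
# of markdown posts and emit a minimal RSS 2.0 feed. Folder and site URL are placeholders.
from pathlib import Path
from email.utils import formatdate
from xml.sax.saxutils import escape

SITE = "https://example.com"   # placeholder site URL

items = []
for md in sorted(Path("posts").glob("*.md")):
    title = md.read_text().splitlines()[0].lstrip("# ").strip()  # first line as the title
    items.append(
        f"<item><title>{escape(title)}</title>"
        f"<link>{SITE}/{md.stem}</link>"
        f"<pubDate>{formatdate(md.stat().st_mtime)}</pubDate></item>"
    )

feed = (
    '<?xml version="1.0" encoding="UTF-8"?>'
    '<rss version="2.0"><channel>'
    f"<title>My Articles</title><link>{SITE}</link>"
    f"<description>Posts from a local folder</description>{''.join(items)}"
    "</channel></rss>"
)
Path("feed.xml").write_text(feed)   # point your feed reader at this file
```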
No hello, no quick call, no meetings without an agenda. You already know that's good. Roger Penrose's book The Emperor's New Mind explores the relationship between the human mind and computers, arguing that human consciousness cannot be replicated by machines. I have the opposite view, which is why I'm going to read this book. A collection of free public APIs that are tested daily. And the recommendation of the week: take the time to read this week's main essay, We Were Lied To About Work. But more than just reading it, think
about what it means. If I am right, think about what that means for you and your career, but also for all the young people you know and care about. I didn't talk about the solution in the piece, but it's essentially Human 3.0, and I'm going to be talking a lot more about that. Start thinking about it now. Definitely recommend that you read this piece. And the aphorism of the week: "To fear love is to fear life, and those who fear life are already three parts dead."
To fear love is to fear life, and those who fear life are already three parts dead. Bertrand Russell. Unsupervised Learning is produced and edited by Daniel Miessler on a Neumann U87 AI microphone using Hindenburg. Intro and outro music is by Zombie with a Y. And to get the text and links from this episode, sign up for the newsletter version of the show at danielmiessler.com/newsletter. We'll see you next time.