
UL NO. 436: Thoughts on the Future of AI & Societal Stability

Jun 14, 2024 · 54 min · Ep. 436

Episode description

When SuperIntelligence? Apple's WWDC updates, new Fabric pattern, GPT-4 Hacking Paper, China/Russia Using OpenAI for Misinformation, and more…


➡ Check out Kolide:
kolide.com/unsupervisedlearning

Subscribe to the newsletter at:
https://danielmiessler.com/subscribe

Join the UL community at:
https://danielmiessler.com/upgrade

Follow on X:
https://twitter.com/danielmiessler

Follow on LinkedIn:
https://www.linkedin.com/in/danielmiessler

See you in the next one!

Become a Member: https://danielmiessler.com/upgrade

See omnystudio.com/listener for privacy information.

Transcript

S1

What do you call an endpoint security product that works perfectly but makes users miserable? A failure. The old approach to endpoint security is to lock down employee devices and roll out changes through forced restarts, but it just doesn't work. IT is miserable because they've got a mountain of support tickets, employees start using personal devices just to get their work done, and executives opt out the first time it makes them late for a meeting. You can't have a successful security implementation unless you work with end users, and that's where Kolide comes in. Their user-first device trust solution notifies users as soon as it detects an issue on their device and teaches them how to solve it without needing help from IT. That way, untrusted devices are blocked from authenticating, but users don't stay blocked. Kolide is designed for companies with Okta, and it works on macOS, Windows, Linux, and mobile devices. So if you have Okta and you're looking for a device trust solution that respects your team, visit kolide.com/unsupervisedlearning to watch a demo and see how it works. That's kolide.com/unsupervisedlearning.

Welcome to Unsupervised Learning, a security, AI, and meaning-focused podcast that looks at how best to thrive as humans in a post-AI world. It combines original ideas, analysis, and mental models to bring you not just the news, but why it matters and how to respond.

All right, welcome to Unsupervised Learning. This is Daniel Miessler, episode 436, and we're into a bunch of AI stuff here. So, big religious day for me on Monday: it was WWDC by Apple. And what I said after it was, Apple just won. They caught up and passed everyone, basically in one shot. And I think this is because the thing they get that most people don't is that it's all about the platform. It's all about the integrations. It's all about the ecosystem. They're essentially the only one who understands that and is also good at tech, and they also have this massive platform in the iPhone. So they're just winning. They're so far ahead of everyone in terms of long-term thinking; it's not even close.

Now, this one has since been cleared up, but at the time it wasn't clear what part of what they were doing was actually Apple stuff and what part was OpenAI. I feel like they could have done a better job explaining that, because I was paying attention and I still kind of didn't get it. Looking back, it was pretty clear; somehow I missed it. And I mean, I used to work there, I've worked on AI quite a bit, I love AI and I love Apple, and I still didn't get it. So I feel like they could have been more clear. The answer to this question is that most of what they showed was actually all Apple stuff.

It was not pure OpenAI, and that's a huge difference. So essentially the situation is, I would say, probably 90% of what they showed. I don't know, you can figure it out for yourself if you do the math or whatever; it doesn't really matter. The point is, roughly 90% of everything they talked about: the on-device stuff, the cloud stuff, the whole new cloud infrastructure, which is absolutely insane. They basically built a new AI cloud infrastructure that's completely secure, where they don't even have access to the content that's on there, and they don't give that access to any third parties. It's essentially like a cloud version of the local Secure Enclave. I'm not sure it's quite that level, but it's pretty much that level. It is better than anything that exists in the industry right now in terms of secure cloud compute, and it's just extraordinary. And they built it just so they could do these AI features.

So that is essentially their AI story: they're going to have deep context about you because they're Apple, and this is essentially Life OS we're talking about. They have all that information about you, and they have the ability to bring that to AI. And what they've done is make it so that if something can be done locally, it's done locally, and if it can't be done locally, it goes up to the cloud using AI models by Apple. So these are Apple models running on Apple silicon; it's not even Nvidia stuff. They're using their own chips in the cloud as well, and obviously on device. So that's the whole ecosystem. Now, that is for all the context data.

Okay, that's basically all of the personal data you have, which they don't want to share with other people and which they want to keep very, very private. Now, in the case of looking things up, like, oh, get the current weather and stuff like that, that's where they partnered the new Siri with OpenAI. So essentially what they said was: okay, for live searches, OpenAI is really good at that. You know, fact checking, stuff like that, just calling out to the internet. And maybe they're not great at fact checking, but you know what I mean. So they're leveraging third-party AI for a very specific thing, but they're not uploading all this stuff to OpenAI. That's the exact opposite of what they're doing: they built this whole new infrastructure just to keep it all very secure and very private. So it's very much two different things. There's Apple AI, which is like 90% of what they talked about, which has the on-device and the cloud component. And then there's this other third-party integration, which is with ChatGPT. And they even said they're looking at possibly doing Gemini too. But the point is, the third-party services are for those kinds of integrations. Don't confuse it: that is not what they're doing with our personal data. A whole bunch of people were like, well, now I'm going to go find an Android, I can't believe they're sending all my data to OpenAI. And that is not what they said. It's not what they said; it's the opposite of what they said.

I'm already running the betas for iOS 18. Remarkably stable, but largely because they didn't really change that much yet, and they definitely haven't added the AI services yet. Siri is still the same. That stuff is coming out later this year, they said.

So I didn't really get the AI features I was looking for immediately in the beta, which I was hoping for. But whatever, it happens when it happens.

Went and trained kickboxing. I was bad at it. It was quite sad. I was like five times worse than a beginner, I think, because it's hard for me to do things slowly. I'm imagining things in my brain, but my body won't do the thing that's in my brain; I'm just very confused. So I think I'm going to do some more learning of the basics of body coordination, like on a bag, and then do it with live people after that. That's my current plan. Plus, I'm going to be doing jits (jiu-jitsu) as well.

And, one of the most important conversations ever on AI safety. Absolutely got to watch this thing. It's Leopold Aschenbrenner and Dwarkesh Patel. I've talked about this a million times, but it is really, really good.

Wrote a new fabric pattern called capture_thinkers_work. You basically send in, like, Hayek or somebody with fabric -sp capture_thinkers_work, and it's extraordinary. In fact, I can show you. Okay, so if we do a capture_thinkers_work here, let's do this. First let's blow it up some, and then we'll do: who's a philosopher? Hume. We'll send that to capture_thinkers_work. It comes up with a single-line summary up here, most impactful ideas, primary advice or teachings, their main works (so you can go build a reading list based on this), and top quotes. Something is Humean if it emphasizes the role of experience, custom, and sentiment over abstract reason and innate ideas. And there's advice on how to implement Humean concepts in your life. So yeah, love this pattern. It's a great way of thinking about someone's work. For example, if you do Esther Perel, it doesn't matter if you spell it wrong; it looks like maybe I spelled it wrong, because it corrected it to one R and one L. But yeah: background, where they were born, their school of psychotherapy. Just really, really powerful.
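If you want to try it yourself, the invocation looks roughly like this (a sketch, assuming a recent fabric install with patterns synced; flags can vary by version, so check fabric --help):

    # Pipe a thinker's name into the capture_thinkers_work pattern
    # (-s streams the output as it generates, -p selects the pattern)
    echo "David Hume" | fabric -sp capture_thinkers_work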

Got a thread here on how to find a good mentor. I put this out a very long time ago, but I think it's worth sharing, so I'm going to go ahead and open it up. Being mentored by someone ahead of you can really change your life. Don't overuse flattery. Ask something very specific. Behave like a future peer; don't grovel. Basically, show you've already done a bunch of work, and don't ask basic questions you could have answered by reading their blog or their book. I mean, to some extent you can, but you don't want to overdo that. And ask for an opinion on something you've created. So it's like: look, I've done work, I've created something, and I'm curious about a very specific question about what you think of this thing. You don't want to ask open-ended things. Kind of a general rule here: don't give them work. You're being respectful of their time by asking a pretty small thing and presenting yourself as a peer. You're like, hey, look, I'm looking for feedback on this; I know you've done it a lot. That's the vibe. Or you can offer an improvement, a constructive comment, or feedback on something they've done. And most importantly, produce something really cool and show it to them. Then they're likely to treat you like a future peer and interact with you. That's the summary, and it's got a blog post around it.

Okay, let's see. Yeah, adding source names to stories now, which we'll see here in a second.

So basically you'll know what the source is before you click it.

New member piece on the future of AI. Okay, I'm going to go into this one; this one's pretty major. Let's dive in. All right, so the first thing is my definitions of AGI and ASI. My definition of AGI is: can functionally replace an average white-collar worker in the US making $80K in 2023. And I say 2023 because it's going to get weird; this definition will get more weird if you don't lock it into a time before AGI. We're basically saying: look, a white-collar worker making an average salary, like $80K in the US in 2023. And ASI I'm defining as an AGI that's smarter and more capable than any human that's ever lived.

And what this is essentially getting at, being able to functionally replace an average white-collar worker in the US, is generality. What I'm talking about is the ability for the boss to say: you know what, you're actually not on that project anymore. Go get with Julie and Lisa; you're being completely retasked to this other project, which is kind of unrelated to that. There is some issue of specialization, but we're assuming it's still within the specialization of that person, and it's general enough that you could say: look, it's a completely different task. It's a completely different project. It's got a different time frame, different goals, different metrics. And what a human does is basically say: okay, cool, I will erase what I was thinking before, I will take the new requirements, I will come up with a new plan, and I will start working on it. Which means you've got to break it into tasks, you've got to have a timeline, a schedule, all of that. So in other words, it's not just narrowly executing small tasks; it's also the planning stage, the constant readjustment, the constant evolution of how do I best do this. And obviously there are different tiers of quality of people who can do that as well. But that's why I say an average white-collar worker making $80,000: not exceptional, and not the worst. So given that as a definition of AGI, it's this general intelligence that can be retasked as well as do specific tasks underneath, plus answer emails, plus be communicative, and basically function as a good employee.

That's essentially AGI. Given that, ASI is something that can do all of that and also be a thinker and a producer of new ideas, basically smarter and more capable than any human that's ever lived. And a good way of thinking about this, which Aschenbrenner talked about as well in the Dwarkesh podcast conversation (to use their first names, Leopold talked about it with Dwarkesh), is: what if you could just make, like, 10,000 Einsteins or John von Neumanns? That would be insane. So just imagine John von Neumann here, or Einstein, or whoever you want to put as the smartest person, and imagine you could have a whole bunch of them go and do these tasks. AGI tasks, but most importantly: come up with new science, come up with new innovations, come up with new medicines. But also, yeah, go do all this work, figure out how to do it better, figure out better ways to do all the work we have to do, and then go do it. So that's ASI.

And I'm saying essentially there's a big divide here between conscious and unconscious, and it actually doesn't matter all that much. A lot of people think consciousness is on this path: we get artificial narrow intelligence, then we get AGI, then after that it becomes conscious, and then it becomes superintelligent, or some combination thereof. But I think the better way to think about this is that we could stay completely on this side and never get consciousness, but still have artificial superintelligence. One does not require the other. And yeah, I've got a prediction for a date later on.

We are now inside the post itself. So, how I see the paths of development. The thing that got me talking about this, or thinking about this a lot more, is this whole Leopold and Dwarkesh thing. I talked about that path, and that's the conversation.

Okay, so this is basically my argument for how we're going to get to AGI and ASI. I think of this as a hypothesis; it's an idea, a thought. There are only so many components needed to understand the world. Some of these are easily attainable and some are basically impossible. An impossible one: you can't calculate all the combinations of things; numbers get crazy very quickly with combinations. So unless you can brute-force everything, like God or something, it's really hard. But I think there are realistic and approachable abstractions of that complexity, and my intuition is that there are likely abstractions we haven't yet attained. I think that's pretty obvious. An analogy I use here is fluid dynamics representing all the molecules in a liquid, or Newtonian physics being an abstraction over quantum mechanics or whatever the bottom layer is.

Then there's my friend's argument; I was walking with him recently, and I think it's really smart. He basically said there are limitations to what we can do, even with superintelligence, because of the one-way nature of calculation. So basically P and NP: you can try lots of things, but it's a one-way function. You can check whether something works, but you can't immediately see all the options. That's kind of the P and NP thing. And he came up with a great definition of intelligence, which I've since seen other people talking about as well: the ability to quickly reduce a search space for a problem, like pruning trees. So there are, say, 100 trillion options; what can we do to get that down to 10 or 100 or 1,000 as fast as possible? The faster you can do that, and the more you can do that, the more intelligent you are.
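Here's a toy sketch of that idea in Python, just to make it concrete (everything here is a hypothetical illustration of search-space pruning, not anyone's actual method):

    import random

    random.seed(0)
    # The full search space: a million candidate answers.
    candidates = [random.random() for _ in range(1_000_000)]

    def cheap_heuristic(x: float) -> float:
        # Fast, approximate scoring: prunes without full evaluation.
        return -abs(x - 0.5)

    def expensive_check(x: float) -> bool:
        # The one-way function: easy to verify a single candidate,
        # but you can't invert it to see all the good answers at once.
        return abs(x - 0.5) < 1e-5

    # Intelligence-as-pruning: cut 1,000,000 options down to the 1,000
    # most promising before paying for expensive verification.
    shortlist = sorted(candidates, key=cheap_heuristic, reverse=True)[:1000]
    hits = [x for x in shortlist if expensive_check(x)]
    print(f"Verified {len(shortlist)} of {len(candidates)}; found {len(hits)} hits")

The better the cheap heuristic, the fewer expensive checks you burn; that ratio is the "intelligence" in this framing.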

And it actually reminds me of quantum computing; I'd never seen an intersection there before, but imagine you could test multiple options all at once, which is kind of the whole thing with quantum. Anyway, here are the components I see leading to superintelligence. One is a super-complex model of the world: how the cell works, how medicines interact with cells, molecular interactions with different body parts. So it's essentially like testing drugs, but doing it with models instead of having to actually try it in labs and people and mice. Think of that at all levels of physics: quantum, atomic, molecules, cells, and bigger and bigger, until eventually you're at psychology, sociology, whatever. In some cases it's just feeding in our current understanding as text. But it's even better if you have actual recordings of these things happening, similar to how Tesla taught its new Full Self-Driving: here are the actual interactions, learn from that. The use case I kept using for this was solving human aging. Let's say it's the telomeres, or the length of the telomeres, and there's a whole bunch of different possible mechanisms for how aging could be happening, and therefore how we might be able to turn it off or reverse it. The most important thing is to have a deep understanding of how everything works, so you know where you could actually start to look. The next big thing for this, in my opinion, is patterns: analogies and metaphors, links between things.

And we're already seeing models get really good at that. I think the next one is the size of the working memory. I'm not exactly sure what this means, but it's like your RAM, your functional memory: how much of this map can you fit in your brain at once without having to slide it over, where parts of the map fall off the table and you can't see them? Maybe that's a combination of multiple tiers. So you have RAM, then you have L1 and L2 cache inside the CPU (which is really the brain; not exactly memory, but kind of all the same thing), and then you've got disk, which is much slower and has to get loaded into memory. And when there's too much on disk and not enough RAM, like I said, you've got to swap. So these are the types of things, and I think the overall size of that memory, which can run the patterns over the world model, is essentially what gets us there. That's my theory. And essentially the theory is that with scale inside of neural networks, those three things just keep getting better and better, plus some post-processing, which we'll talk about, and a whole bunch of hacks and tricks that are basically just going to come with the industry. Because my friend pointed out that the size of the model alone is not going to be enough, and that post-training is super critical. And once I got what he was saying, we definitely agreed on that; it was just something I wasn't aware of, and that got us in line. So this is my way of capturing it.

My hypothesis is that there are very few fundamental components that allow AI to scale up from narrow AI to AGI to ASI: a world model; sufficient training examples to allow a deep enough ability to find patterns and similarities between phenomena within that world model; sufficient working memory; and then ultimately the ability to model the scientific method and test things, ideally not having to actually run the test, but being able to test things in an abstraction. And I'm not sure how smart or how good that is, because if you're not testing, you're not really testing. I think this might be possible, but like I said, I'm least confident that number four is required, or that it's possible. However, I would say the first three are amazing and possibly enough, and number four, if it were possible to whatever degree, would massively enhance the other three. I think these are the main components. I mean, I've spent relatively little time thinking about this; I could revise it in the future. The other thing to say is that number four, the testing piece, might just be implied somehow if number one is sufficiently advanced: if the world model is good enough, maybe we start getting it for free, I don't know. And maybe it requires the patterns and similarities, and obviously I think a lot of this requires the memory. So I guess what I'm saying is the whole thing might be two big things: the complexity of the world model and the size of the active memory it's able to use, and maybe the patterns and the testing stuff come along with those two.

And I guess the most controversial thing, and this kind of undergirds everything I'm saying, is the hypothesis that the universe has an actual complexity that is approachable or finite, and that models only need to cross a certain threshold of depth of understanding, or quality of abstractions, to become very similar to Laplace's demon in the sense of full knowledge. Now, I don't think they can have full knowledge; I think that's God-level stuff, that's supernatural, at least according to our current understanding and my belief. But if you hit a certain threshold of abstraction quality, it's functionally very similar. That is what I'm basing my intuition, my hypothesis, of AGI and ASI on.

I've got another concept called human 3.0. It's essentially about: how many people believe they have ideas that are useful to the world? How many people believe they could write a short book? What percentage of a person's creativity are they actively using in their life? What percentage of a person's total capabilities do they actually broadcast to the world as some sort of value? And to what degree does someone live as their full-spectrum self? In human 2.0, we're essentially living and working for capitalism; capitalism is the most important thing. So we're putting maybe 10% of our value, of what we can offer to the world, into our CV, I would argue, because it's like, oh, I'm really good with Microsoft Excel and I can send emails or whatever. And is that really you? I don't think so.

The other aspect of human 2.0, the current model, is most people believing that they're not special; most people believing that only special people write books, write blogs, give presentations, do things like that, and regular people don't. So, two main things: bifurcated lives, broken into personal and professional; and the percentage of people who think they have something to offer the world, which I would argue is extremely low. That's 2.0. And human 3.0 is the transition to a world where people understand that everyone has something to offer, and they live as full-spectrum people. It's living as humans, for humans, as opposed to living to work instead of working to live. And human 3.0 wouldn't even be working to live; it would be more advanced than that. It means your public profile is everything about you: how caring you are, how smart you are, how funny you are, your favorite things in life, your best ideas, the projects you like to work on, the problems you think are important and meaningful in the world, plus your technical skills, of course. And people know they're valuable because they are all these things, not just what they can do for a corporation.

So my thoughts on this: you have to start wondering, when you listen to the Leopold and Dwarkesh conversation, is human 3.0 in danger? Are we in danger of losing a world where humans exist and do what they do for other humans, because ASI is going to take over and turn us into whatever? Maybe. Hopefully not. I'm pushing for a path where AGI or ASI allows us to do human 3.0. It would be kind of scary if it just took over and implemented it; that would be scary. Hopefully it would arrive at something like human 3.0 and keep us alive and thriving. Or it doesn't take over, it's still used as a tool, but it's in the hands of someone somewhat benign, like the US, which hopefully remains benign. And at that point we basically give it the criteria: help us maintain a society in which we move toward and maintain something like human 3.0.

Another thing I'm really worried about is how immigration will interact with UBI. So say the jobs are basically gone, and very few people can do something that AI can't. Maybe that's 10% of the world, or 1%, or 25%; different times, different numbers, different ways of measuring. But let's say it's 10%, and the other 90% just aren't employable because, like Harari said, they're a useless class.

They have nothing they can offer the economy. And this is human 2.0 still; this hasn't crossed over into the hopeful land of human 3.0. In the meantime, some people, like Leopold, are worried about ASI happening within a number of years and just completely redoing society, and the government's obviously going to have to be involved there. They're going to have to pay people to basically not riot, to not kill themselves, and to provide education, provide gaming, provide entertainment, provide some sort of basis for meaning; that's going to have to be included. But what does that look like? I mean, there's a welfare state, there's the government giving out too much money, and then there's a whole other level when it's actually UBI, because no one's even able to work. It's not that they can't find a job when people are actually hiring; it's that nobody's hiring. All the work, or whatever, 90% of it, is being done by AI, and AI is producing trillions of dollars in value. And that raises another question. Okay, you're making all this new stuff, you have all these new creators, all this new AI-generated productivity and products and services. For who? Who's buying the stuff?

Who's buying the stuff? If the top 10% are the only ones who can make things and sell things, who's buying the stuff? And the natural answer, which is quite gross, looks like: the government taxes all the income that's being made. But again, where's the income coming from? It's almost like there's an exchange between the government and the producers. The producers are making all this awesome stuff, and the government is using all that awesome stuff to generate all these awesome things. But then the government has to take this money, which is a very strange word for it now, because it's not being generated by the people and taxed by the government. It's really just a money idea. The money idea goes to the government; the government then gives the population money, the ability to purchase these goods and services, which are being generated by a small percentage of the population to the tune of trillions upon trillions of dollars. And the people can then use that to buy the services being made by this 10%. That feels, not quite Ponzi, but like a shell game: oh, it's fake money, we're moving it around. It just feels very strange when the human population is not producing, not participating in that creation. Now, in human 3.0, I think they will be. I'm not sure exactly what that economy will look like; I'm sure much smarter people can help me think about that. It's essentially more of a human-to-human exchange: a barter system, or obviously a currency, maybe like a buy-me-a-coffee type of currency or something. You can also buy regular goods and services, but you could also pay people for making you happy or whatever; a lot of people will be creating art and doing lots of stuff like that. So I'm hoping there's a large economy for sharing on this full spectrum that we talked about with human 3.0, and that will involve money and stuff like that. But what I'm talking about here is really human 2.0 stuff.

This is this year, next year, the next five years, the next ten years: what do we do in the meantime with all these people who are going to be needing money? What happens with an economy with millions of undocumented immigrants who have families they need to feed and house, if the jobs go away? Maybe they leave. Maybe they don't leave, though, and they're very angry, because they can no longer get work and they're also not being paid like the local citizens. I don't know what the solution here is, but I'm saying we need to think about it.

So my thoughts on conscious AI are fairly simple, actually. I think the way humans got consciousness is that it was adaptive for helping us accelerate evolution. Basically, winning and losing is what powers evolution, and blame and praise are smaller versions of winning and losing. Plants and insects win and lose as well, evolutionarily, but it seems like only humans and a few other species (maybe it's a lot, but it's a very small subset) have subjective experiences, and it's not clear that anyone other than humans has blame and praise. Well, maybe dogs do; I feel like they understand blame and praise. Good boy, bad girl, right? I feel like that works, based on their facial expressions. Who knows if that's right. But bottom line, blame and praise might power evolution better if we experience winning and losing, being blamed and praised, feeling that at a visceral level, versus having no immediate repercussions for not doing well. Maybe evolution still gets you there without blame and praise and subjective experience, but I think this whole container is how we ended up being conscious: because it enables this, which makes evolution better. That's my guess. If that's right, then consciousness might have simply emerged from evolution as our brains got bigger and we became better at getting better, which means it could emerge naturally through self-similar loops, definitely some RL-related stuff. Or we could just hack the system and add it on: hey, develop this so you can have this advantage. We can try to shortcut what evolution did over millions of years. And then one question is, well, if we hacked it that way, would it be real consciousness? And me, as a materialist, I'm like: what is real consciousness? I think at some level it's all an illusion anyway. But practically, if you feel yourself feeling, and it hurts, that's real, right? It doesn't matter how mechanistic the substrate below actually is, because at some point you can pick along the way and say: this is mechanical, this is mechanical, this is mechanical, therefore it's all mechanical. Doesn't matter. Friendship is still awesome. Love is awesome. Ice cream is awesome. It doesn't make it any less awesome that it's made out of atoms, right?

So, my ASI prediction. I'm firm on my AGI prediction of 2025 to 2028, somewhere in there; 2026 or 2027 is the sweet spot for me. I'm going to give a soft prediction of ASI in 2027 to 2031, so the middle of that is roughly 2029. I don't feel great about it; that's like a 30 to 40% confidence level. On the AGI one, I'm saying 90% by end of 2028, and like 60% by end of 2025, so I feel a lot more confident about that one. But who knows. Predictions are hard, especially when they're about the future; Niels Bohr said that.

All right, unifying thoughts. Leopold: you want to watch him on AI safety. Completely, absolutely impressive guy. And you want to go read the essays he released on this as well. Dwarkesh Patel is becoming one of my favorite podcasters ever right now.

He is basically an autodidact, a crazy reader. When Dwarkesh talks to Cowen ("Cowan" is how he pronounces it), Tyler Cowen, they jump around. They're like: oh yeah, I was knitting this thing the other day, and I was making wood figurines on my lathe in the backyard, and I was thinking about, you know, Roman history, and then I was thinking about Adam Smith's economics, and that reminds me of this one poem by Rilke. And they just jump from here to there: oh, that reminds me of this; hey, what about World War Two and this thing that happened. They're just jumping from place to place, and I feel like I'm pretty good at that, I feel like that's one of my superpowers, but seeing Cowen do this is insane. And Dwarkesh is like a mini Cowen; he's coming up so fast. It's the reason his interviews are so good. He's fiercely curious and an autodidact, reading, like, whatever, 12 books a second, whatever his current metric is. It's insane. Anyway, follow them. It's really, really good stuff.

And human 3.0: my best optimism here is that we get AGI but we don't get ASI for a while, which gives us a chance to hopefully move toward 3.0. Or we get ASI, but it's controlled by the US and/or benign US actors, and we move toward human 3.0. Or ASI comes online, it's benign, and it kind of takes over and builds human 3.0, because that's the best future for humanity, or something that looks similar. And this is what I was saying before: I don't actually care how unlikely these scenarios are, because it's kind of the only good path I can see coming out of all this AI stuff. We eventually get to human 3.0: a human world for humans, and I would say future versions of humans, which might be hybrids with AI or whatever. But you've got to maintain the humanity. We've got to be doing what we're doing for humans, not for the sake of tech. So I'm optimistic about that, and I am trying to help build it. That's my whole purpose in doing all of this; that's what I'm shooting for. And I don't care if it's a 20% chance of us getting there. First of all, nobody can know what this number is, so it doesn't really matter. But second of all, if it's a 10% chance, guess what? I'm shooting for the 10%. I'm trying to make it 11. Try to make it 50, whatever; try to make it 90. If it's a 60% chance we make it and a 40% chance we all die, whatever; I'm not going to sit around thinking about the 40%. That's my vibe. I am shooting for human 3.0. And if someone's like, yeah, well, you don't know, China can take it over, and blah blah blah ASI, and it can kill us all, and whatever, paperclips... I just don't see the value of going hard down the path of the doomers. Because let's say they're right. Their answer is: turn it off, don't build it. That's not going to happen. Next answer. They're like, I don't have a next answer; we should stop talking about this; we should stop building it. You have exited yourself from the adult table. You are no longer in the conversation, because of Moloch. We are building this. It is happening. We are running full speed with scissors in both hands, and there's nothing we can do to stop that. My argument is that the only thing we can do is try to aim it in a good direction, try to point it toward human 3.0, and that's what I'm trying to do. All right, huge diversion, but worth talking about.

That was the AI piece: a collection of thoughts and predictions about AI, June of 2024.

All right, security. Got a whole bunch of misinformation sneaking into, like, little manosphere news. So this is like manosphere podcasts or whatever, or news sites or whatever, actually republishing Russian propaganda network content. It can seep in and go lots of different places. This is why you actually need AI to be watching everything. Basically, when a new disinformation or misinformation propaganda campaign comes out, you use AI to categorize what the core concept is, the core idea it's trying to spread, and then you monitor all the different stories coming out. It might be, like, Saskatchewan news, or something from South Parish, Wisconsin, or whatever, and they're spreading the same story. Because what the attackers will do is figure out: where can we get it in? Where can we push this narrative? Maybe the front door is closed, but maybe there are a thousand different backdoors. That was that one.

TikTok had an issue with DMs. There was a zero-day in DMs where, if you clicked it and opened it, you basically got your TikTok account compromised. Pretty bad. And a new tactic TikTok is using to avoid the lawsuit is basically to say they're going to build out a separate algorithm and separate data just for the US. We'll see if that works.

New paper claims that GPT-4 can autonomously hack zero-day vulnerabilities, with a 53% success rate. There are going to be a lot of these papers. A lot of them are not very good; a lot of them are decent. I will keep reading and posting these when the claims are interesting and they're from decent outfits, but you want to watch the quality of the paper very closely. Look especially at how much of their testing methodology they publish. Did they show exactly what they tried against these different things? Did they show all the different examples? Did they give you all the stuff to do the replication? If not, then they're kind of hiding the ball and you can't really trust their score. And we've got an analyze_paper fabric pattern that you can use for this.
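Running it looks roughly like this (a sketch; paper.txt is just a placeholder for wherever you've saved the paper's text, and flags may vary by fabric version):

    # Feed the paper's text through the analyze_paper pattern to grade its rigor
    cat paper.txt | fabric -sp analyze_paper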

When I ran it against this one, it got a six on rigor: benchmarks and ablations look good, but some details are missing and stats are lacking. That's why we have fabric.

Okay, a Chinese drone photographer got snagged by the US Espionage Act for taking pics of military shipyards. And there's another thing being talked about right now: China buying up a whole bunch of land right next to military bases. They're buying massive amounts of farmland and setting up all this gear, like, oh, don't pay any attention to us, we're just building some stuff over here. So I hope Biden or Trump or whoever just finds a list of these and, search and seizure, goodbye, do not pass go. And maybe some of those are benign, right? But I have to basically be like: well, based on the, you know, 9,566 fucking billion times you've actually tried to steal from us and actually hacked us, you're going to lose some for the home team when you're actually doing something benign and just setting up a farm next to a military base on accident, and you didn't actually mean anything by it. Too bad; you ruined your reputation, therefore get out.

Okay. OpenAI revealed that its AI tech was used by Russia and China for sneaky influence ops. They've said this a few times, and I like the transparency. They're basically like: look, this keeps happening, we keep catching it, and we're going to try to do better. All right, the WWDC keynote: got an analysis here.

This is a fabric analysis of a TechCrunch article. As I talked about before, Apple basically killed the AI thing. They also did a bunch of really cool stuff. One of the things: you bring your phone next to someone else's and you can transfer them cash. I've been waiting for that forever. Problem is, a lot of people won't have that feature turned on, or they'll have an Android and you won't be able to do it. So a lot of people are sticking with, like, Zelle or Venmo. I don't like Venmo, and Zelle is slow, whatever. I wish more people had the ability to do these swipey Apple things, but it's kind of rude to require that everyone has an iPhone, right? That's just not realistic, and also not cool. All right: all the different startups that Apple killed. Always fun after a WWDC or an OpenAI event.

US needs 225,000 more cybersecurity workers. I never trust numbers like this, because it's hard to know.

Like, what are they asking? Are they asking it right? What are the answers they accept? I just feel like the numbers are bad. I feel like that number is going to go down because of AI, and I feel like it's going to go up because of AI. It's going to go down because the jobs are going to get easier to do without people, but so much more stuff is going to get made that we're just going to be injecting insecurity into the world, especially when it's getting made with AI. So it's AI going in and building a bunch of stuff, and that stuff is going to be really nasty in terms of security vulnerabilities. So we're still going to need people for a while. Even assuming all this AI replacement stuff goes really, really fast, we're still talking about years and years. I mean, super fast for, like, an 80% replacement of all jobs (and these are crazy numbers I'm just throwing out), but even nuclear-fast is like ten years. Now, as far as starting and actually having major impact: I think it's already starting, and a year or two from now we're going to start feeling it. So I think that's going to be really fast. But there's going to be a regulatory response, and it's slow to make any change in any organization; the bigger and more bureaucratic it is, the slower it will be. So just because something could happen in a year or six months doesn't mean it will; it's more likely to happen in 2 or 3 or 4 or 10 years. Got to keep that in mind. It's going to go faster than you think and slower than you think, both at the same time.

Apple just hit the $3 trillion mark again, but Nvidia jumped ahead, and now Nvidia is even more valuable than Apple. And Microsoft is still number one. Insane.

Satya's killing it over there. Tools: Cartwheel; an Ollama CLI; a wireframing JS tool for UI design; and ShadeMap, a project that maps every mountain, building, and tree shadow for any date and time. How to Think Like a Computer Scientist; pretty cool interactive edition here. Starship just landed its first one. All the other ones either blew up on the way out or blew up on the way in, but they finally landed one. Very impressive. I think it's 5,000 tons. So that's what, 2,000 pounds per ton, times 5,000... that's 10 million pounds. Holy crap, that's a lot of pounds. World's largest solar farm just went online: 3.5 gigawatts, just went live in China. Okay, here's my question: why can't we build, like, ten of these in the US? We know we need all this power for AI. So I say Biden or Trump does this: you basically go to each state and say, look, I need you to build 3.5 gigawatts in your state within five years.

And they're like, yeah, that's impossible, we'd have to hire all these people. And the government's like, whatever; we have money printers, so we're just going to print all this money and inject it into the economy. Most importantly, giving it to training programs, which will train people how to build the wind farms and the solar farms, whatever the best renewable energy is for that particular state. Like, Nevada should build a hundred of these things for sun. I fly over it all the time; there's nothing there. In fact, you fly over the entire United States and there's nothing there, it's all empty. And meanwhile the sun just hits it every day, like nine times out of ten, every day of the year the sun comes up. In fact, I think it might be higher than 90%. It might be 100%: the sun always comes up. And it is hitting us with so much energy, and we just let it soak in and go off into the atmosphere. Capture it!

Okay, look: we've got people who don't have jobs and don't have meaning in their lives, and we know that AI is going to require all this power. Millions of people have no meaning in their lives because they don't have work; we talk about the loss of manufacturing jobs. Go build all these solar panels, go build all these solar plants, have the people working there. Eventually they'll be replaced by robots and AI; that's fine, that'll take time, like we talked about before. In the meantime, let's go build a hundred of these 3.5-gigawatt facilities, or ten-gigawatt facilities, whatever: giant solar farms, giant wind farms. We put millions of people to work, which gives them meaning, and they're building toward a renewable future. So even if the AI thing didn't happen, whatever, we got renewable energy. We can export that energy, we can sell it other places, we just have it for whatever purpose. And you have to build the pipelines as well, right? You've got to get that energy to where it needs to go, so that's pipes, cables, wires, all the infrastructure for moving energy. Where's the downside here? It's win-win: jobs, and tons of energy, and no reliance on external energy, and we get to move on AI faster. So it really bothers me when I see China understanding this and building so quickly, and we're like, nah, let's fight with each other instead. Oh, and nuclear; nuclear is the other one, although I prefer solar, but someone probably knows better than me.

USA's solar panel manufacturing capacity jumped by 71%. I like it, I like it. Florida and Texas are leading. Let's go. What about Nevada, where are you at? Florida, forget about it: it rains all the time and it's sinking; it's not going to work out. I guess Florida's energy generation would have to be based on being flooded, or wind. No, you can't do wind farms in hurricanes; that would be a bit rough.

All right, humans. David Brooks argues that progressive energy has shifted from the working class to elite universities, making them more left-leaning than ever; he points to protests and progressive opinions. Highly recommend you read David Brooks's stuff: The Road to Character, and my favorite one, The Second Mountain. Absolutely unbelievable book. Unbelievable. And I love his stuff in The New York Times; he's like a centrist kind of conservative, but logical. I like him.

Okay, this one turned out to be not great; a couple of people flagged it, and I looked at it again. People don't like me linking to Substack. I'm going to link to Substack. The question isn't whether it's Substack or whether it's the New York Times, because all these different sources can be bad. What I care about is being factually wrong, or making claims and not backing them up in any sort of way, or just being an ideologue and being hateful or rambling or whatever. Well, I'm rambling, so not quite the same. But the point is, some of the best people in the world at thinking, being rigorous, and having good ideas are going to Substack, and beehiiv, and all these other platforms, and YouTube and TikTok even. So it's not the source that I'm linking to that's the problem; the quality of the content is the problem. Before too long it'll be CNBC and, like, Fox... first of all, Fox News, I don't usually link to that; I'm sure they have good news sometimes, but you get the point. It's not about the type of source. Blogs are just as authoritative as YouTube and as a lot of the media networks at this point; it's more about the individual and the actual article. I am making these links available so that people can go to the source. My click numbers are probably going to go down, because people will be like, well, I don't want to read from this guy, or I don't want to read Hacker News, or the Financial Times, I hate that place, or whatever. Hopefully you don't need to click anyway; hopefully the one sentence is good enough. That's the whole point of doing the summary. All right: New York Times deep dive into the devastation of Ukraine.

This story was absolutely nuts, and yeah, New York Times, good stuff. Staying at a company more than two years could massively hurt your earning potential. US job openings hit a three-year low; layoffs are surprisingly low, says CNN (see above for commentary on sources). Viagra improves blood flow and could help to prevent dementia, per the University of Oxford. 49% of independents think Trump should drop out because of his guilty verdict. I'm making a prediction: I think this effect wears off, and Trump will be polling even with Biden within a month. You know what I'm going to do? Let's go look at FiveThirtyEight polls. Wow, look at the impact of the guilty verdict. It happened right in here somewhere. Oh, look: no impact. And maybe I'm reading this wrong; maybe this doesn't account for independents. This is a June 10th through 12th result from Echelon Insights, and I don't know how big this sample is. Okay, YouGov, I know that's respectable: Biden 40, Trump 42. We've got two different Echelon ones. Daily Kos, what do we got? 45-45, tied. So I read this article, and it's like, yeah, 49% of independents think Trump should drop out after his guilty verdict. Are these polls okay? That's one argument: independents don't respond to polls. And supposedly these independents are going to switch the thing, they're going to be mad at Trump, and it's going to go for Biden. I don't buy it. If anything, in my opinion (and this is just my opinion; this is politics, who knows anything), these numbers are soft for Trump. I think Trump is probably winning by another 5 or 10 points right now; that's my guess. And I don't think him being convicted is going to do anything to his numbers, if not make them better for him. Just ridiculous. This is why I think about AI. I literally do not look at news anymore, other than AI bringing me political news and analyses; then I have to go read the story and do my analysis or whatever. But I do not pay attention to this stuff. I'm trying to build human 3.0. I'm just so disillusioned with everything; I'm just checked out. All right, X is doing adult content.

Good for them. Ernest Hemingway wrote a deeply moving letter to friends who lost their son, saying those who truly live never really die. I love Maria Popova, The Marginalian. I liked the old name she had for that site before; I can't remember what it was. The yellow site. Anyway, one of the coolest life-wisdom reads I've come across in a while; this is a really cool one, I'll put it up. "You can never expect honesty from people who even lie to themselves." "You'll be ten times happier if you forgive your parents and stop blaming them for your misery." "Free yourself from society's advice; most of them have no idea what they're doing." "If you continue to wait for the right time, you'll waste your entire life and nothing will ever happen." It's a massive list of these, and they're quite, quite good. "You'll lose 99% of your close friends when you start to improve your life." That's bleak; I wouldn't go with 99%, but I get the point. "You don't need 100 self-help books. All you need is action and self-discipline." It's a great list. It's a great list. You should check it out.

Ideas and analysis: I think I figured out the simplest possible answer for why Trump is still doing so well. Oh my god, I didn't include the link. Oh man, I didn't include the link in here. It's like, oh really? You did great. Thanks for not sharing it. And the recommendation of the week

is Tyler Cowen and Dwarkesh Patel, as I talked about. And the aphorism of the week: live as if you were to die tomorrow; learn as if you were to live forever. Live as if you were to die tomorrow; learn as if you were to live forever. Mahatma Gandhi.

Unsupervised Learning is produced and edited by Daniel Miessler on a Neumann U87 AI microphone using Hindenburg. Intro and outro music is by Zomby, with a Y. And to get the text and links from this episode, sign up for the newsletter version of the show at danielmiessler.com/newsletter. We'll see you next time.
