Welcome to Unsupervised Learning, a security, AI, and meaning focused podcast that looks at how best to thrive as humans in a post-AI world. It combines original ideas, analysis, and mental models to bring not just the news, but why it matters and how to respond. All right. Welcome to Unsupervised Learning. This is Daniel Miessler. I'm going to
jump into this episode this week. So someone asked me last week about the best way to run AI models. Based on the question, I think they were specifically talking about local models. So here's a breakdown, and this is for AI in general. I prefer using APIs for anything that I'm doing over and over. I don't like to use front ends like ChatGPT or Claude's version of ChatGPT. I like to go directly to the API using my own code.
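Just to make that concrete, here's a minimal sketch of what going directly to the API can look like; the endpoint and fields are OpenAI's chat completions API, and the model name and prompt are placeholders you'd swap for your own.

```python
import os
import requests

# Minimal direct call to a chat-completions API (OpenAI's shown here).
# Model name and prompt are placeholders; set OPENAI_API_KEY in your env.
resp = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
    json={
        "model": "gpt-4-turbo",
        "messages": [
            {"role": "system", "content": "You are a concise summarizer."},
            {"role": "user", "content": "Summarize this article: ..."},
        ],
    },
    timeout=60,
)
print(resp.json()["choices"][0]["message"]["content"])
```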
For day-to-day use, I prefer using Fabric, which is my own project, and that is because it has specific use cases for dozens, almost a hundred, different regular human problems that we run into every day. So I would recommend using Fabric for that. And again, the project is here, and if you look at the patterns, these are all the different things you could do with it, right? So it's like analyzing text, doing job analysis, creating presentations, finding flaws in arguments, taking exquisite, manual-quality notes on a four-hour video and doing it in 30 seconds, extracting the ideas from a book, pulling out predictions that a person made, all sorts of stuff you could do. So that is my number one way of interacting with AI.
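If you want a rough idea of what calling Fabric from your own code looks like, here's a minimal sketch that pipes text through one of the patterns; the `--pattern` flag and the `extract_wisdom` pattern name come from the project's README, but treat the exact flags as something to verify against your installed version.

```python
import subprocess

# Pipe a piece of text through Fabric's extract_wisdom pattern.
# Assumes the fabric CLI is installed and configured with an API key;
# flag names are per the project README and may differ by version.
with open("essay.txt") as f:
    text = f.read()

result = subprocess.run(
    ["fabric", "--pattern", "extract_wisdom"],
    input=text,
    capture_output=True,
    text=True,
    check=True,
)
print(result.stdout)
```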
If you just want to show off what you can do with local models, I would recommend Ollama, which is really cool. I'll show you that real quick. So you just go to models, and if you click on Llama 3, you can just run this model right here, and you literally copy this. And once you have Ollama installed and running, which takes a few seconds, you basically run this and it will download the model and put you into a shell that lets you talk directly to it. So it's really cool. So let me just show you what that looks like if I'm over here. Okay, this one is almost done; it's a "the pie is in the oven" type thing, we still have a little bit more time, 99%. This one is the 70B version, so this is a giant model, like 40-something gigs, and I have it running here. You can see the command I put in: ollama run llama3:70b. All right, so that's all you basically do, and then you get a shell and you're talking to the model completely locally, just on your own machine. So that's really cool. Okay.
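And while that shell is up, Ollama also serves a local HTTP API you can hit from your own scripts; here's a minimal sketch assuming the default port and the /api/generate endpoint from Ollama's docs.

```python
import requests

# Talk to the locally running model through Ollama's HTTP API.
# Assumes `ollama run llama3:70b` (or `ollama serve`) is already running
# on the default port 11434.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3:70b",
        "prompt": "Explain SSRF in two sentences.",
        "stream": False,
    },
    timeout=300,
)
print(resp.json()["response"])
```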
So if you go back to here. That is a way to demo models and show them off and show what you could potentially do with them. To me it's a little bit gimmicky, because I would rather use Fabric or the next one I'm going to talk about, which is this one. This is a new tool. Well, it's not new, it's been out for a while, but it's growing in popularity. It's called LM Studio. This is
a full Mac application, and it also runs on Windows and Linux. Essentially what you do is install this thing and type into the search bar here the name of the model you want, and it pulls down models from Hugging Face and all sorts of different places. It downloads it for you and loads it for you, which can be really difficult if you try to do it manually. And then when you click on this icon here, you're actually in a chat interface very similar to ChatGPT, except it's working completely locally with local models, and it's just really powerful. And there's actually even a comparison view where you can compare different models. But the most important thing about this is that you can try new things. You can try things that just came out yesterday or the day before or today. Whereas with Ollama, if you go to models, it's really just kind of the major ones. There's only a few here. I mean, I say a few, but it's like a couple of dozen. But if you look at Hugging Face, let's look at that number. Yeah, 629,000. So it's pretty different versus a couple dozen. So that's
the type of thing you can do. If you're using LM Studio, you just have access to way more models to mess around and play with, and you can even compare the models. So this is what I would recommend for someone who is GUI-oriented and wants to do lots of tinkering. And if you just want to be a workhorse and get lots of things done, I would recommend Fabric, which is a command-line interface and also has a GUI. Now, like I said before, using other models, lots of cutting-edge models, that's kind of the advantage of LM Studio. And you'll be able to do that with Fabric as well; we're adding the ability to do that. But my current favorite one is this Llama 3 70B Instruct. This one's really cool, from QuantFactory. It's really powerful. I've been messing with some different ones; lots of different groups put out their modified versions of Llama, essentially, and this is a 70B version.
Fairly fast, really high quality, and yeah, just a really cool way to interact with stuff. And again, the name of the tool is LM Studio. We had a really great book club this week. We talked for like an hour and ten or fifteen minutes, and then we picked the next book. And yeah, really thankful to be part of the UL community and doing these book clubs. We do book clubs once a month, and we do a meetup halfway through the month where we share tips, like automation and hacking tips, security tools, productivity tools. We talk about life. It's a very intimate, sharing, and uplifting sort of group, and you should come join us if that appeals to you. It's just a matter of becoming a premium member of Unsupervised Learning. And I just signed up for my first full-body MRI, which is going to be super fun. Not looking forward to being inside of a tube. I guess I'll just meditate to
get through that, but I'm not overly claustrophobic. It just doesn't sound like a good time, especially since the full-body one is 60 minutes. And RSA is next week, so that's going to be exciting to see everybody. So I found a good reason for college, and specifically good colleges, and this is thanks to Paul Graham. I read a bunch of his essays where he talked about college, and I think there was one where he talked about this particular concept. So I just wrote it up in
a separate post, because he didn't; he basically just mentioned it as an aside. So the reason you want to go to elite colleges, or send your kids to elite colleges, is that there are very few things more valuable than your kid being surrounded by highly resourced, highly motivated, highly intelligent, and highly disciplined people. And that is what you find at the higher and higher tiers of college, all the way up to Stanford and Harvard. It's just like: their parents were rich, their parents were probably smart, they're probably smart, and they had to be disciplined to get decent grades. So the combination of things just stacks up. And if you look at all these different companies that are being founded, they're usually being founded by people who went to these schools, even if they
dropped out. So don't think of it as the education, because the education is very similar at a decent state school compared to an Ivy. It's not about the education, it's about the connections. So just kind of an interesting way to frame education. Oh, I will say that the thing this makes me think of is: how can we get that somewhere else? So let's break apart these two things, the education and the networking, and find other
ways to do the networking, so we don't have to have this weird college experience. Now, maybe that's not possible, because the college experience is actually what gives you the connections and the networking, with all the drinking and all the parties and everything, and then all the forced time together in these dorms. Maybe that's the magic sauce, but I bet you there is a better way to
do that. You could have organized off-sites, more like hacking competitions or hackathons or summer camp or something like that, where you're actually forging these connections separately from doing the education piece, which arguably could be done better with YouTube and online courses combined with in-person workshops. So let's take the best of education and the best of networking and maybe do them separately, or even mix them, but not be confused into thinking that they're the same. Because that's the problem currently with college, and the result
for most people is worse networking and worse education. Okay, so, security. MI5 is starting to vet key researchers going to top universities in Britain to make sure they're not essentially from a country like China, just coming in to steal all the stuff. And I really think the US needs to start doing something like this as well. And speaking of that, Germany just arrested three
people who are suspected Chinese spies for stealing secrets. And this is right after their chancellor was giving crap to China publicly, basically talking about IP theft, and then they made those arrests. I don't know if that was coordinated or not. After moving forward with a possible ban on TikTok, the US is now looking at DJI drones, basically saying, look, they have so much of the market and they're flying all over the country. Who knows what
they're doing with those pictures. And it's a partially state-owned Chinese company, which means China runs it. I mean, they have full control of any company if they want it. So yeah, this makes sense. This one is insane: a former athletic director was arrested for using AI to mimic a principal's voice, and basically had him saying a bunch of racist and anti-Semitic things in the voice of
the person he was attacking. And then that started a whole investigation, and it turned out it was a deepfake. But yeah, yet another abuse case for deepfakes: character assassination. And I think it's going to be super interesting to watch all these different attacks, but, and this is kind of a sad thing, I think it's also going to make it harder to believe
people when they bring forth evidence. It's like, hey, my boyfriend did this, my boss did this, and you show them a video of them doing it, or you have them listen to a recording, and they're like, yeah, I hear them saying it, but it's easy to make anybody say anything. And you go ask the actual person and they're like, yeah, I never said that, that was a deepfake. So it's going to take some power out of the hands of,
I think, victims, and we're going to have to figure that out as a society. Nation-state attackers have been using two zero-days in Cisco firewalls. This is not the first time that's happened to Cisco, and it's also happening to other firewall providers as well. And US lawmakers gave the okay to extend warrantless surveillance under FISA Section 702. These laws that allow the government to do these things kind of get perpetually extended.
I'm mostly okay with it. I feel like as long as there's transparency and oversight, it's probably necessary. But I don't like when they become secret programs without oversight; then you start worrying about what they can actually do to Americans. And Change Healthcare paid $22 million in Bitcoin to ransomware attackers, but the files were already compromised, so they're not sure exactly what the fallout is going to be. And the Air Force
is moving forward with Anduril, I'm guessing that's how you pronounce it, and General Atomics to develop unmanned fighter jets, sidelining giants like Boeing and Lockheed Martin. So, Anduril and General Atomics, interesting. Unmanned fighter jets, just like we've always predicted in sci-fi. And Detectify's external attack surface management solution is now in AWS. Oh yeah, Defendify versus Detectify, I was going to say, I think that was an accident. Technology. Amazon's robot
workforce has expanded to over 750,000 robots, and it says they're taking over 100,000 jobs. It's not super clear how they're getting to that number, but whatever. I mean, we all know where this is going, right? And like I say, don't be surprised about this, expect it. Companies will do what makes sense for margins, not for people; the responsibility is to shareholders. And human employees are, I hate to say it, the worst, right? They're to be avoided.
You don't want to have to hire somebody. If you have a hot dog stand that works perfectly and you're the sole proprietor of the establishment, having to hire people is not good. Now, of course, if you have to hire because you want
to sell more hot dogs, that's fine. But it's much better if you have some robots you can buy, and they can make more hot dogs and make more people happy, and you get to share all the stuff, and the robots don't get sick and they don't complain. That is the way companies see this, and we just have to be ready for that. Voyager 1 is back sharing information. It kind of went silent for a while and we were worried about it. This thing is in interstellar space.
That means it's outside of our solar system. iPhones don't perform this well; nothing we have ever made performs this well. This thing was launched, what, when I was a kid? I mean, decades ago, and it's still working. I just can't believe this thing. Okay, a senior executive at Google is pushing for speed, basically saying the 25,000-person-strong knowledge and information team has to adapt to a new operating reality with tighter project timelines. Yeah,
I wouldn't characterize it as tighter timelines; I would say, have vision, right? And like I said, this is progress, but they need a whole new way of thinking about making products at Google, and I think that requires a new leader. Police are now using GPT-4-powered body cams to turn audio into reports. That's great for transparency, I think. How about this: for every stop, you publish the transcript and it goes on an open website.
That would be fantastic. They could clean it up for privacy reasons, but you can see the whole interaction, and then you also publish the video along with the transcript. That would be community policing, especially with AI, because you can have AI monitoring the city, looking at every transcript of every stop. And you could characterize and actually have a way to profile how a police department is doing: are they doing more community management or
more predictive enforcement, or are they rude, are they racist, whatever. Okay. Apple's eight open-source LLMs are pushing the envelope for on-device text generation with models up to 3 billion parameters. Apple doing crazy stuff in open source; cannot wait for iOS 18. And Llama 3. Llama 3 is huge. Yeah, 800 Llama 3 variations on Hugging Face, insane. It is really good. It is really good. And I think it's getting so good that people like OpenAI and Anthropic,
they need to be afraid, and Google needs to be very afraid of this. Because what I like about the open source is that it empowers the whole world to compete with OpenAI and Google, right? I mean, the barriers are essentially new techniques, the size of your infrastructure, and the quality of the data that you have, right? So on new techniques, the crowd, the group of public researchers,
they can jump ahead really quickly there. It's harder for them to compete with one of these big companies, of course, in the size of the cluster and how long you could train and stuff like that, because that's super expensive. And then the other one is data. It's also hard to compete there, but I think the technique part might be such a big lever that it allows some companies to really catch up and maybe even jump ahead in some cases. And then of course they'll get lapped the
next time the big company moves. But I feel like community AI with open-source models will move and iterate so often that it might get very close to, even equal with, or in some cases surpass the big models. Now, I do think the big models will have bigger leaps and they'll go way ahead, but it could be that open source catches up in the meantime just because of the speed of iteration. And I'm
guessing here, it's kind of just an intuition. OpenVoice is crushing voice cloning with the ability to mimic tone, emotion, and even cross-lingual speech. Snowflake just shipped another LLM. Simone turns YouTube videos into blog posts. The US is finally getting
its first high-speed rail. Elon wants to turn Tesla cars into a distributed AI compute network, kind of like AWS for AI, but on wheels. And Tesla's Autopilot and self-driving are under scrutiny because there have been some deaths, and I think this is probably going to continue to be the case. But I think over time autonomous driving is going to be better than human driving, because humans get tired, humans get distracted, humans are texting while driving or watching TikTok
while driving, and that's probably worse. Again, this is an empirical question, so we'll have to see the actual data and how it plays out, but that, again, is my intuition. And yeah, an example of this: someone was arrested for vehicular homicide after distraction with his phone, I assume an iPhone, led to a fatal collision with a motorcyclist while he was on Autopilot. And Volvo brought a Chinese
EV to the US market. Okay, Stanford found you could predict political leanings by looking at someone's face, specifically the size of the face. The bigger your face, the more likely you are to be a conservative, evidently, according to this paper, and the smaller your face, the more likely you are to be a liberal. And I'm trying to figure out what exactly that means. How do you measure the size of a face? I don't get it, but whatever, I'm sure it's in the paper.
An article claims that 30% of kids 5 to 7 are on TikTok. Five to seven. No bueno. I am very much in Jonathan Haidt's camp, and you need to go read his book; I think he is touching on some really good stuff. The FTC has banned nearly all non-compete agreements. This is great news. I think it's going to spawn even more innovation in AI and tech. Some are worried about IP theft, like if you move you'll just
steal the content. But it's already illegal to steal from companies, so you don't need a law keeping people at a company so they don't steal. Like I said, those should be two separate laws, one for leaving and moving and one for stealing. Stealing is already illegal, so let's just stick with that. The Supreme Court is set to decide if cities can penalize the homeless, and I've got a vibe here. I've been thinking about this for a while. I think if you're mentally ill, let's get you help or get you housed or whatever.
If you're on drugs, let's get you treated and then see if you're mentally ill after that. If you're not one of those, let's figure out if you actually want to work. And if you do want to work and you're not one of these two, then let's get you help. Let's get you job support, training, all these different things like interview help, right. Try to get you on your feet. And if you're not any one of those, so you're not on drugs, you're not mentally ill, but you don't
want to work and you don't want to participate in society, well, let's get you somewhere where you're not bothering other people who are trying to be useful. And I think most homeless people are actually dealing with one and two, so let's help them; that's our responsibility. And if it's not that situation and a person just really doesn't want to participate in society, they're like a Ted Kaczynski but not violent, cool, let's help them go do that somewhere else, you know,
in a peaceful way. All right, using AI for actual science, innovation, and research. So AI is now helping physicists explore the vast possibilities of string theory. This is great. I mean, think about what Einstein did: Einstein went from there not being a curved spacetime to just thinking of curved spacetime. I think about this all the time because I'm reading a lot of experimental and theoretical physics lately, and I really think that with my prompting skills, if I were better at the science, which I'm not, I could probably just do this with AI without even knowing the science that deeply, if I had big enough resources to work with and the AI was a little bit further along. What you do is essentially say, hey, look, here's all the different ways that people have crossed over and had these creative breakthroughs. You describe the science before Einstein, you describe what Einstein did, and you do that for multiple different places inside of science, right? What this does is teach it what it looks like to be creative. And you say, look, you're looking for more things like this.
Then you feed it the current state of thinking on quantum physics and multiple universes and all these different things, you feed it all the math and everything, and you say, now that you have all of the current state, do the same thing Einstein did, but do it for now. And the idea is it should come up with really weird stuff. It's like, what if we're inside of a fishbowl, inside of some, you know, simulation cluster running on Linux or whatever? It just starts making up things, and then the humans can be like, oh, actually, I never thought of that, let's go mess with that.
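Here's a hypothetical sketch of that prompting structure, a couple of before/after examples of creative leaps, then the current state, then the ask, running against a local model; the examples, model name, and endpoint are all just illustrative assumptions, not a working research pipeline.

```python
import requests

# Hypothetical sketch of the "teach it what a creative leap looks like" idea:
# few-shot examples of historical reframings, then the current state, then
# ask for weird new hypotheses. Examples, model, and endpoint are assumptions.
leaps = """\
BEFORE: Space and time are a fixed, flat background; gravity is a force acting at a distance.
LEAP (Einstein): Spacetime itself curves, and gravity is that geometry.

BEFORE: Heat is a fluid (caloric) that flows between bodies.
LEAP (kinetic theory): Heat is the statistical motion of atoms.
"""

current_state = "CURRENT STATE: quantum field theory, the string theory landscape, many-worlds, ..."

prompt = (
    "Here is what a creative scientific leap looks like:\n\n" + leaps +
    "\n" + current_state +
    "\n\nDo what Einstein did, but for the current state: propose ten strange "
    "reframings, and for each one describe an experiment that could falsify it."
)

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "llama3:70b", "prompt": prompt, "stream": False},
    timeout=600,
)
print(resp.json()["response"])
```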
And then even better, and this is something that Joseph Thacker and I have talked about multiple times, it was his idea initially: how do you automate not just the having of the idea, but the testing? And so we've been working on an infrastructure that you could potentially do that with. And this is the most incredible thing I think about with AI, because think about this, okay: global warming, cancer, aging. People have to die at, you know, 80, 90, 100. We describe all the reasons that we're dying at 80 to 100, and we say, look, we want to live longer. We want to live to 150. How can we live to 150? And we tell it to go crunch and do what Einstein did, but for aging, and you give it all the genomes, you give it all the illnesses, you give it all the doctor reports, you give it all the autopsies, and it's like, okay, cool, well, have you thought of this? If you do this to the mitochondria, it'll slow this down and you'll live to 180 years old. And it could just churn out these things, and even better, churn out testing for those things. So it could be like, look, I need a lab with the following, you know, beakers and petri dishes, and we need to combine these different things, and I think
I could make a drug that'll do this. You have to be careful, though, when you start building things that the AI asks you to build, but that's a separate show. But just think of that automated thinking combined with experimentation to yield a result. I think it's super exciting. Okay, meticulous planning and luck led to capturing a breathtaking eclipse photo. This is the advantage of doing video; I'm going to open it up. Look at this thing.
Look at this thing. Is that ridiculous? The wing is just perfectly there and you've got the flare around it. I mean, that's gorgeous. Margaret MacDonald argues philosophical theories are more like artful stories than scientific facts; quite a good essay. Here's one: you are what you read, even if you don't always remember it. I call this osmotic learning, which is basically that it absorbs into you in a way that's not attributable to the book or
the video that you got it from. It just soaks into you, right? I think I've read close to 800 books, and I feel myself getting smarter, but I can't point to why my models are being updated. That's different from what I call algorithmic learning, which is when you actually say, Huberman said this, therefore I'm putting that into an algorithm to update what I do in the morning for my routine. That's algorithmic. But if you just consume the video and you don't write anything down, but you find yourself changing over time,
I call that osmotic, like osmosis, it seeps in. And it turns out the reason you're not landing that job in 2024 might be because of ghost jobs, which are basically fake postings for jobs that aren't real, that nobody's actually hiring for, and that you can't actually get. And there's an argument here that your personality and creative application strategies can actually land you a job. So they're basically saying
it's not about skills, it's about personality and creativity. And I've been talking about this quite a bit: it's about how you broadcast yourself, it's about how you present yourself. This is why I'm doing the whole Human 3.0 thing. So I definitely agree with this. I would characterize it a little differently, but yeah, pretty much right.
All right, continuous recording, we're in ideas and analysis. So I think we're heading towards a place where everything is recorded, or at least where you can expect that someone in a public space is probably recording you. And I like what that does for getting value out of conversations, but oftentimes the value is just having the talk in the first place, and having it with somebody you actually trust.
So I worry a lot that people might be more guarded now when having conversations, just assuming they're being recorded. So I expect there to be some sort of conversation indicator, like a red light or a blue light or a green light, a visual indicator on somebody's person that displays, one, that they are recording, or two, that they're okay with being recorded, or that they are not okay with being recorded. And I think we're going to have some sort of visual
protocol like this that indicates someone's comfort level. And I think venues will also have that: you'll walk in and there will be a red light, like a stop sign or something, and it'll mean it's not cool to record inside this building, or maybe Starbucks does that for all of their stores, you know, or whatever. Now, the question of enforcement, that's separate. But the point is, the social contract I think is going to be there. It's going to need to be.
So I think it's going to be interesting to see how that all affects how we interact with each other. One option is we stop talking and being fresh and open with each other. Another option is we stop caring, because there are no more ways to be embarrassed or canceled since it's already happened to everyone. And another option is that we break into pockets. So in some places nobody's embarrassed about anything, everyone's recording, everything is being transcribed.
It's just radical transparency. And in other places, maybe Germany or the Amish or something, it's no tech, no recording, complete privacy, just the polar opposite. And you'll have lots of different societies in between. Recommendation of the week: I recommend the book of the week, or actually book of the month, The Courage to Be Disliked. This was the book we just did the book club on. It was
the book for April and it was wonderful. Absolutely loved it. The Courage to Be Disliked. It's about the teachings of Adler, what's called Adlerian philosophy or Adlerian psychology. Highly recommend the book. And the aphorism of the week: be who you are and say what you feel, because those who mind don't matter and those who matter don't mind. Be who you are and say what you feel, because those who
mind don't matter and those who matter don't mind. Unsupervised Learning is produced and edited by Daniel Miessler on a Neumann U87 AI microphone using Hindenburg. Intro and outro music is by Zomby with a Y, and to get the text and links from this episode, sign up for the newsletter version of the show at danielmiessler.com/newsletter. We'll see you next time.