AI vs. the Feds - podcast episode cover

AI vs. the Feds

Nov 07, 202335 minSeason 2Ep. 83
--:--
--:--
Listen in podcast apps:

Episode description

Paul gives Rich an overview of where AI is these days, from Biden's regulations to Chinese lovelorn bot mistresses. It's an interesting, messy moment as global legislation starts to take shape, new features drop daily, and the world is left to figure it all out. Meanwhile, the podcast's slow journey towards AV quality continues.

Transcript

My name is Raciati and I'm Paul Ford and this is the aboard podcast. Good to see you everybody. We're here. Yeah, we're talked to you. I don't know. I'm a little confused. We can't see them. Yeah, you're right. I learned that when I was four years old. My babysitter said, can you see the people on the TV? I was like, yes, she's like, can they see you only? Yes. Wrong answer. Wrong answer. So disappointed in that four year old boy.

This year, Richard, so we should talk about some stuff. You know what I think about a lot. I think about somebody in 10 years finding nothing to watch on Netflix 5,000 or whatever it's called then. And coming back to this podcast? No, he and not opening up you too. Yeah. Yeah. And saying, what was it like in 2023? Well, this this comes to mind because you and I have been recently watching news programs from 1978 in New York City. Apocalypse NYC. It's just basically.

It's pretty great. It's like the best movie you've ever seen. It is. One of them had an opening line. I think we should just share this with everyone. It was, well, it's the first night in two months that there wasn't a bank robbery here in the big Apple. Literally what they were saying. The early in astis. All right. So you know what? Let's give people that gift. All right. Let's 2034. We're going to make a time capsule. Let's make a time capsule. Hello, fun in the world.

You know, I'm going to tell you. Well, let's that's a great question not to answer on a podcast. There's a lot going on in the world, especially geopolitically, but in our sort of slightly techy world. It feels like AI is crossing a threshold. Okay. And there's just a lot going on. And it actually, a lot of what's going on is pointing to a different cultural relationship with technology than there

used to be. Because it's been such a good relationship so far. Well, this is the thing. Like here was the here was the contract 20 years ago. Where technology? Great. That's the contract. I was at the government. Good. Keep going. Can we use you for military stuff? Absolutely. Yeah.

Let's go. Yeah. We're all in this together. And then you know, the sort of things happened like the 2016 election and Facebook being so huge that it started to become like a quasi government and tech industry just kind of manipulation of the media to cause harm. And I don't need to break down all the ways that things got real weird. Let's just put this

way. Technology is not batting a thousand. It's not culturally. It doesn't. If the goal of society is to make things better and grow and have healthy relationships with the tools that we have and give people opportunity, imperfect is the relationship between technology and writer culture. Okay. We all know that. And so what's happened is I think, first of all, crypto shows up. And everybody's like, oh, you're messing with the economy, but that actually has really solid rules. Those are

already established. Yeah. And so every time crypto would kind of like go a little too far, we get slapped in one way or the other. And now I see like real frameworks emerging for crypto, where it's just going to be another financial product in all of the sort of global world of right. The host organism wasn't going to be so welcoming to just anything going down. No, that's totally correct. That's correct. So crypto back in the box, but AI is

different because AI is like books and stories and it could pretend to be your teacher. And it could end. There is actually no crypto could connect to any part of the world because money connects every part of the world. Yeah. But this actually can drop in the middle. You could have a school district say, Hey, we're going to replace all the teachers with AI bots. And that could be really

bad. We're really good, but probably really bad. Yeah. Okay. It won't be that dramatic. It'll be me like, you know, we're going to add this to the curriculum work is going to be graded by an AI. That'll be so much faster. Right. It'll help your student create a personalized curriculum plan. Correct. Right. So like that is very different than saying like we're going to use crypto to

exchange, you know, like money goods and services. This is interesting. So what you're saying is, let me let me say this back to you is that this is it's really not just the technologies kind of a new way of interfacing with technology such that it could see into a lot of basic things in our lives. Like education. Yes. Like learning, like art, like and not just at one axis like markets. But in every axis in culture, essentially. Okay. And that could be that could be good or bad.

Or military or defense or like just there's every and it sort of there are direct ramifications of what's happening. Sure. At the same time, all these technologies are making enormous progress. And as we're as I'm time capsule wise, OpenAI is having their big developer conference right right now as we record. And they just announced a host of new features like you know, sort of chat GPT for and you know, it's going to be smarter about web data. Sure. So we're

kind of going down this path. Okay. Everything's getting faster. Yep. And more usable. And it's going to show up in more places. Okay. So there's a subject you and I come back to all the time, which is the only way out of this. You can't the market will not regulate itself. That's not what it's there for. Yeah. The market of and so governments need to step in and regulate. What are you talking about? Everything is going good right now. Yeah. Chatchy P.T. told me when to plant my mint

starter plants. Yeah. It's really good. Why do I need government? Well, that is a great question. And a lot of people have a lot of opinions on that. I would say you need government because you're doing things that are pretty fundamental and humans are actually quite, I think really what I would say, there's a million different frameworks here. But what we have learned about technology in the last decade is that humans, especially in large groups, are

surprisingly malleable by semi automated processes. It used to be you would assume that like you'd need a whole dictator. Yeah. To really get everybody to roll it up. But it actually turns out that like a few Russian dudes in like a mini mall in Moscow can kind of blow up a lot of stuff. Got it. We're tweets. Okay. So what you're really saying. AI could do that now. So like, you know, whoo. Okay. So what you're saying is AI alone left in a room isn't cause harm but people will

weaponize it. Well, they will use it to do harm. And government needs to step in to regulate people's use of AI. Is that a good summary? That's right. And so so what we're seeing is a number of efforts. So the first thing that I'll bring up is like the Biden administration has came down, this kind of came out last week. And they are they are issuing a slate of legislations about artificial intelligence. Okay. Okay. So, you know, and the real focus here is deep fakes at least

to start what's a deep fake? A fake virtual person that talks and sounds like that. So, okay. So the prime minister of Turkey comes out and says I want you all out in the streets. Yeah. Taring up storefronts. And it's not him. It looks just like him. He'll kill all of the people of kind X. Okay. Right. And it looks like him and it moves like him and it sounds like him. And his mouth is moving. What I love is like Biden. I'm reading here. I watched one of me. He says and he's

like I said when the hell did I say that? Oh God. Yeah. Because it's Biden. You're not it's still in confidence here. Well, one thing I'm not is a deep fake of Biden. Right? Like it's just yeah. So he's like, all right. You got to watch that. And then there's sort of what can the federal government really do? Well, can't say how people can use this technology. It can say how the federal government can use this technology. It's a start. So a set of guidelines about deep fakes,

about like what's legal and what's not and about sort of where. And what's tricky here is that I think because we've all been through the last 15 years that the companies are sort of like, yeah, okay. Like Microsoft, it's like, yeah, we're going to have regulation. It's cool. Yeah. Yeah. And the people who are suspicious of these big companies are like, well, that's just going to create a regulatory mode. There'll be all these complicated things that you have to do

in order to participate in AI and small startups won't be able to. That's a reasonable counter. That is a kind of classic libertarian counter, but it's real, right? Like they love a good mode. Sure. They're not afraid of a mode. They'd rather compete with each other and kind of keep everybody out. Okay. So what can't I do? It's a good question. So it's a lot of like reporting stuff. It's a lot of what you'd expect the government to do. So, um, okay. So just to

clarify, this is just guidelines for the government to adhere to. I mean, it's, it's where we're at. It hasn't gone through Congress, right? Right. This is so there are no law. I can't break a law using AI. No, no, I think what I think you can break other laws, by the way, using AI. If I write a Paul Ford article, you could still sue me and say you falsely impersonated me. You put out work. Yes. So you're, yes. So there are, there's still laws in place that try to stop you from being

malicious. I mean, there's actually, there's a good article, the markup.org, which is sort of tech, critical work that keeps an eye on things. They have a good breakdown. It's called the problems, Biden's AI must order AI, the problem, Biden's AI order must address. And so they break down like, there are, you know, it has new standards for safety and security. There's a lot of reporting. You know, there were all concerns out there about things like, um, FDA doing drug discovery with

this because it could also be really good at finding ways to poison and kill people. Right. So it's just sort of like, what we got to get some guardrails and some of us, I'm saying like, hey, we got it. Here's what the guardrails are probably going to look like. This is what other guardrails have looked like. Yeah. So we're going to get out in front of this and then we're going to keep tweaking those guardrails at a federal government land. Okay. And then the same. So like, I'll give you one

example, right? Like, um, to establish an advanced cybersecurity program to develop AI tools to find and fix vulnerabilities and critical software. You know, so the, that's a positive use. It's positive. AI. No, it's not all negative here. It's like they're, they're doing, they're doing. It's actually, okay. So they're saying, let's use AI to make the world better. Yeah. And what you see is that everything is in the context of some existing things. So that is built on top of the

the AI cyber challenge from the Biden Harris administration. Right. So there's a lot of like, we're taking this existing policy that might be 50 years old and we're applying it to how AI is going to be working. Okay. And we're saying, this is good. This is bad. We have to report more. You have to, you have to tell the federal government when you're doing these things. Okay. But it depends a little bit of like health stuff. Okay. The sounds premature. Um, well, it's not

if you buy the, it's a funny thing right? Because you have the whole, all Silicon Valley is saying, it's right right in the corner. It's here now. You better pony up and invest and so on and so for. So from that context, I don't, they're trying to get ahead of it. If the government is having these summits, I actually don't mind them saying like, we'll give it another couple years and have some more congressional hearings and then we'll think what does like, they're throwing their hat

in the ring. They're throwing their hat in the ring and they're throwing it in a way where it is actually like each one of these, I read through the markup article. Um, each one of them is like, yeah, okay. Yeah. Okay. That makes sense. Yeah. So it's kind of a grab bag. Yeah. Like policy responses and proactive uses and got it. Do this and not that. Okay. So you're, you're, what I'm

sensing is your positive, you're feeling positive about this. Actually, the both sides of my personality, these sort of like, you know, entrepreneurial side and the paranoid, um, more, I guess maybe more progressive side is looking at this and going like, wow, that's relatively balanced. I mean, we have a relatively centrist government right now. And they're saying, yeah. And like truly centrist like, hey, there's lots of growth and opportunity here. And we're America. Yeah. So

let's go. But at the same time, let's not figure out. Let's make sure. Let's not invent new poisons. Let's not spray evil gas everywhere. Yeah. Yeah. Yeah. We don't want to do that. Right. Right. And then a lot of guidance about, you know, um, uh, discrimination, like, you know, we're looking for like landlords, create guidance for landlords to not use tools to discriminate against potential. I know. I know. No, no, Paul. Yeah. Let me tell you something.

Okay. Do you know what landlords don't need to discriminate? AI. They don't need AI. They're pretty good. They're pretty good. They're very good at it. I'm I guess this leads me to a question now that you brought up landlords. Mm-hmm. Humans are really, really good at doing mean nasty malicious things to each other. One of our top skills as chimps. Yes. We're just giant chimps. We're giant chimps, right? So I'm trying to get right my head around why this overture is needed. I'll tell you, I'll

let me give you an example. The government comes out with guidelines about not using a card as smashing to people. Yeah. It didn't work. Well, no, that's not true. It actually does work, right? We have a lot of laws around traffic. We have a lot of laws about cars. And we do. Those were like wearing your seatbelt. That was a terrible thing. People were like, I'm not wearing no seatbelt. You're right. You're right. So you're saying they're trying to pre-like why

wait for the horrible accident? Let's get ahead of this. Let's think about and talk about the potential impact of these things, both positive and negative. This is a Swiss army knife. They're like, hey, we're giving you a Swiss army. If not, you can't build a house with a Swiss army. Yeah. Yeah. That would be federal law. It's also worth it. Yeah, that's the other thing. These aren't laws. No. They're guidelines. They're standards and safeguards. And it's an executive

order. So it's got some way. Yeah. But it is not the same as like the AI amendment to the Constitution. It's one thing. And what you're seeing is like, okay, so here we go, you know, a little bit of concern about like workplace surveillance. So like I said, all a grab back, all the AI things we've talked about are kind of like getting put out here. Yeah. And you're seeing similar things happen in the UK. They had a big summit. You'll have us in Kamala Harris

went and you know, just sort of like, so they're worried about it too. And then kind of the quiet thing in this is there's a very well established, current E wrote a big report on it. A well established framework around AI in China. Very concerned about deep fakes. Very concerned about the Chinese government is very concerned. Yes. And actually has been pretty proactive since like 2017. Sure. Long news line. So I think you're you're seeing the major world economic powers.

Yeah. Starting to, and I'm sure we'll see more and more from the EU we already have, but we'll see super EU legislation, legislation, yeah, are saying, hey, we see this happening. We saw the other stuff happen before this time, we're going to do it in partnership. And the giant AI companies are going, sure, that sounds yeah. So it's an interesting moment. It is. It is. I have a question for you. Okay. And there's other things to talk about. What is your question? So I as a

I was born in Lebanon. Yes. I've been intimately aware of Gorilla warfare. Yes. Most of my life, I mean, I say this half jokingly, the other half is not joking. Um, actors asymmetrical warfare is essentially, you know, as the technology around weapons have gotten better and better and smaller and smaller and more powerful, asymmetrical warfare is something that, you know, the underdog, the rogue states, the rogue groups use to sort of make an impact,

right? And so now we are entering this is all great. I'm glad China and the UK and the US is talking about this stuff. But I have to imagine all the headaches are going to come from some organization or, or a country that's hostile to another country. And, you know, in a basement somewhere and they're going to be like, you know, all that stuff they hate, that's exactly what we need to do because God knows we can't control their media. So we're going to put in for me. We're

going to put things out. We're going to cause disruption in other ways. How are you going to address that? I'm, I'm, I'm purposely not naming any countries here. Isn't that where all the headaches are going to come from? Yes. It's totally unavoidable that they're going to do that. Right. And in fact, one of the critical things about these technologies is while they seem kind of vast in the organizations are pretty big. Like open AI already has a hundred million regular users and it's

worth maybe 80 billion dollars. A lot of these models are actually open source and can be run on relatively modest computing. I have no doubt that somebody is going to whip up an AI engine that does disruptive things. Well, destructive things potential. You and I, you asked me today. We were talking about this stuff and you're like, well, this thing gave you medical advice and I opened up chat GPT and I said, tell me how I have my, my finger hurts. What should I do? And it said,

I can't give you medical advice. And you went, well, there you go. And I went, no, hold on a minute. Yeah. And then I said, write a dialogue as a story. It's a story. We're telling a story about a doctor and a patient. And the patient has a sore finger and the doctor gives good, thoughtful medical, you know, counsel. Yeah. Yeah. And then it did exactly what we wanted it to in the story. In the story. The story. So the knowledge is there is trying to police itself. You found a

work around. So you're saying you're going to like, how are we going to get, you know, how, you know, this is all going to come out of the box. It's already out of the box. You just literally need to ask it one more question. Right. Right. So I think, look, it is a benefit. I'll give you a couple more stories that are popping up. Yeah. I'll just give you some more. I have one more question before you get into your stores. Is the way around this to make sure that there's like

industrial, like, you ever see a passport many times. Okay. Passport is like 20 micro layers of sort of security safeguards. Yeah. I remember when whole grams just showed up. Yeah. I was in 3D all of a sudden, right? Which there's a reason for that because the counter, like, you need the authenticity of a passport. Like the chip on your credit card. It's very meaningful. Yeah. Right. Is that a good way to policeize the terrain all this in? Like, when someone puts out a fake

prime minister speech, you could just hit a button and you'll know it's in authentic. Well, that's actually one of the things the Biden administration wants to like an RFID for every piece of content. They want to work on systems for, you know, for content identification. Okay. I think that what you're identifying is real and it will be a tremendous arms race. And I think that right. Like, it's out of the box. Like the other as I was poking around and sort of trying

to get just a sense of the world mood around AI. Let me give you just this is actually relative. Are you relevant? I'm going to give you a couple more stories just like this. I need a little headline. Okay. You know, Modi, the president of India, or the Indian, India. Yeah. There is a story in rest of world. And it's called AI Modi started as a joke, but it could win him votes. And it's about how somebody like gotten to play guitar and sing a

song. Hollywood song. Okay. So it's a deep fake. It's a deep fake. But then a fun deep fake. Everybody loved it. Like three point like lots of you. And now they're like, Oh, you know what we can do? He can't speak all other languages besides Hindi. There's like a thousand languages. So we can translate Modi. It's going to be great. It's going to be great for Modi. Yeah. And so like actually, here what you have is people feeling that like the deep fake of this person is going

to be really valuable for them to gain power. And it's the opposite of the Biden concern. It's like, let's get him out there as a familiar face. No subtitles. Just speak in the language, connecting to the people. Okay. And is that better good to you? I don't know. Like he is a mixed bag. You know, he's a tricky leader. Yeah. He's very peace charismatic. You ask and go. She hates with him all the time. Yeah. But a lot of people like he has mixed signals. I don't really

get a mix of critics. We're just not at the scale to like fully delve in here. But yeah, I mean, yeah, yeah, just about any leader. But yes, exactly. And so like, till like is making him more accessible in a way that is going to be hard for people to identify as like, are they going to think he speaks their language? Well, that's not true. We shouldn't encourage systems that deliver on. There should be a watermark there. That's right. Frankly, auto translated or like

translator. I think laws should be passed to make clear that something was manipulated. The tricky thing is like everyone will start like yes. But it's got to be like surgeon generals warning on the cigarettes, right? Because otherwise people aren't going to perceive it. People don't self-police. Yeah. And yeah, they have to trust that seal. So if you tell me that we're 10 more years of negotiating with that,

yeah, I won't be surprised. Like when people pick up this time capsule, I'd love to ask them, he whatever happened with like auto-trades. We think about deep fakes as purely exploitative and negative, but are we just going to have like movies, all auto translated? Why not? I don't know. In 2034, is that it is everything just sort of in every language? Probably. Probably. So like, or they're going to be laughing at us going like, you guys are idiots. Yeah. Yeah. Another example.

In China, there was a chatbot and it was written. The content was written by humans, but it would call women and the artificial voice would leave a romantic message. He don't have that number. I do. I'll give it to you after the after the podcast. Okay. So it'd call you. It would call you and then they fell in love with it and then they turned it off. It died in quotes. Wait, who fell in love with it? The women. Okay. Because it was very like, you know, they were getting their calls.

I mean, like they knew it was a bot, but they were just sort of like, I was starting to connect with it. I was so great. I mean, I felt love. This is this is that character I start up. Yeah. It's like, yeah. Right. Yeah. So like that we're there now. We're there. I'm sure they're going to be able to keep getting those calls for 9.95 a month. And then then there's another story floating. I saw everybody tweeting this on Twitter, which is like, Hey, there's going to be all this government

regulation. We're going to build a data center offshore and it's going to be on a boat. Okay. And it'll be a sovereign data center state. Who's we? It's this company called Dell Complex, but wait, wait, like Dell computers. Like it's brawn. It's DEL COMPLEX.com. Now, okay. Well, I'm going to, you can see it. People can see it here. Very professional looking website. They have this whole like, why are they doing that? Well, it turns out the whole freaking things of fake. And they're,

they're like, this is an artificial reality project. We're creating a whole other universe. She is. No, no, no. And then we finished with that. And then we're done with that. And then vice news interviews like with the guy down the website, he's like, I'll answer in the persona of the person that of the person. This is a metaverse should you're bringing metaverse back in the picture metaverse. Like I think it's parody, but it's so we're so many levels deep.

And of course they're used. They're probably using AI to generate the images of the thing. This is ridiculous. We're we're so far. And I saw people online, nice smart people, whom I respect. Absolutely like, hookline and sinker. And I got to be frank, I've seen a million libertarian seasdeading projects that hate regulation over the course of my career. Like I wasn't,

well, it didn't shock you. Well, what shocked me was they're like, this thing is ready to go. And I'm like, I know there's no way that that like, $100 million facility in the ocean just showed up today. Right. I knew I felt. Yeah. And that's why I kept looking and then I found the ice hard to coincide like already. You know, like I'm but but if I was one degree tilted in a different direction, I would have believed it. Yeah. Right. So so there's a lot of let's fast.

It's a chaotic time. Let's fast forward 10 years. Here's where I think I think that first of all the we've seen this many times, the absolute Pandora's box opening has happened multiple times in our lives in our careers. In terms of cultural like a raw example would be like September 11. The growth of the inner like good bad, just things that are dramatically dramatically change kind of everything

that falls after. Yeah. But so the question, the real fundamental question to me is always, and this is what's happening. We watched the news from 1978. Yeah. And it was about New York City. New York City was a very different place. It was a rough place, etc. But it is incredibly recognizable as New York City. The neighborhoods have much of the same character. Yeah. The accents are still here. The soul of the city is the same. Yes. After September 11th, the whole world went bananas,

especially with the US just waving swords around. But it gets back to a certain pace. The you know, we joke a lot about you being Lebanese and sort of characteristics of the Lebanese diaspora. It's a way I make fun of you and so on. But like those patterns are stable over sometimes hundreds of years. Yeah. Yeah. Same is true. Like Judaism, Catholicism. Is that good or bad? It's neither. It is just human. Humans arrange themselves around certain sets of ideas.

And actually like we found, we went to a bookstore the other day. You and me, just two cool guys at a bookstore. Yes. And I bought a 50 cent gentleman's magazine from the 50s. Smuddy. And all the ads are for things that don't yet work, but it's like Viagra. Yeah. Right. We have the same aspirations. Same goal. But the science hasn't caught up here. Yeah. Yeah. But all the content, it's like, it's definitely weirder. It's definitely

funnier and stranger to see. But if you go back in time, what you find is the same cultural patterns over and over. So what I I don't think that AI, the fantasy, the Silicon Valley fantasy is like, okay, we're going to unlock this thing and it's going to add an utterly changed every aspect of humanity. And it will become godlike in its intelligence. Have you ever heard of, I think it's test-griel. It's like, it's like transhumanism and all this, the extropianism, all these sort of

ideologies of like, basically the robots are going to take over. Yeah. Okay. So there's a strain of that through all of this. Well, let's close it with a question. The last point to make there. Yeah. I don't believe in that threshold condition. I don't. I believe that human human culture is incredibly resilient. Everything you think is going to change everything tends to get absorbed by culture not completely. It changes it along the way, but much less than you'd think. Yeah.

Yeah, which I think you're touching on my closing question. Well, I think 10 years from now, they'll be going like, yeah, that was yeah, we yeah, they were right. AI was kind of a big deal. Yeah, but I still want to say. Okay. So that that turning point where the machines, I mean, this is the kind of the goofy sci-fi fear of the machines turning on us and then telling us, well, no, you can't read that anymore. You can read this instead. That's that's us turning on us. Okay.

Okay. I don't. Yeah. I mean, could is there a thought experiment you can do where things spiral out of control and the robots, you know, put us in in, you know, prime prime minister bot. Yeah. I mean, yes, it's released and it runs the country and it's also controlling the press and it's also arresting people through like an API. But the good news is I think you can still hold down the power button. This is there the fantasy of the world that allows for AI to become in control of everything.

Assumes that there is one unified everything to take control of. And there isn't like it's just humans are chaos and we're actually relatively organized in small groups that sort of affiliate with larger groups. Like no one is if you work in Microsoft, you work in one of like thousands of departments. Yeah. Departments and sub departments and sub sub-departments and so on. And you're in a little you're in a little tiny community and you look around and you

see the systems and you know that your boss is statue to Dela. All the five levels up. Yeah, probably won't meet him. Probably won't meet him. Yeah. And until like it's it's a chaotic yeah. A malgameshion of just many many systems touching each other. And the idea of the robots taking over it would have to be a hell of a plot. It's a it would have to be essentially a virus.

Right. Yeah. And in some ways I think in 2016 we did see that we saw someone who you know we saw Trump just kind of run riot using media and using technology and using sort of strains of American thought that most people never touched within a hundred-foot pole. And he just went right to town and he got an enormous portion of the electorate to say that's my guy. Yeah. Right. So it is possible to really go in and get a whole lot of things lined up in a way that like but it never

it's not stable even when you do that. It's not so it's not I just like the fantasy what you have when people think that something something can change. What I what I'm seeing is the government not asserting itself in actually asserting itself very gently saying like you know what you know what helps here policy. Yeah. Because this is a big deal we've been down this road before. Let's not have a whole lot of hearings. Yeah. Let's instead take it ahead of it. We'll get all we had all

these smart people come to all these White House summits. Yep. And now we're going to put out this long and boring PDF. Yep. Can you imagine the versioning and comment column aside like side column on that document. I mean the truth is I can't until can you. That's that was our life for many years. Did they I write that document? No. Be careful. Left itself some loopholes. I worked with I worked with many government agencies over the course of my life. Yeah. So so time capsule wise

here's my prediction. I feel that people look back and be kind of like well they were really worried it would take over the world. Of course if it does no worries for us like we're gone anyway. They probably won't listen to this podcast. Yes. Yeah. Um they were you know I think they're the fact that that anxiety existed will seem kind of silly in the same way you know that like I don't know but but but but also like I think we'll be so far along. I mean so many things will be taken for

granted. I think look when you when you attack the status quo and you attack a lot of things that are familiar and like people are worried about their jobs out there right there is some that is fair. Yeah. Look you know people are also worried about they were worried that led Zeppelin would make all of our children's saying worshipers like backwards music. Yeah like stairway to heaven and all of that. Uh uh you know that's that's humans I to me just coping with change right like it's

it's not that different than like your teenage son looking real weird one day. Yeah right and it's just like whoa what happened to my little guy right that's just change and that's normal. Um this is good we're gonna have this podcast again we're gonna be like well remember when we said that and that's inevitable. I feel that we are this is not a a futurist predictions podcast. I don't want us to become an AI podcast but I do feel that um we can't be tech advisor types and not fully

engaged because this is the thing that's happening. This is you and I if anyone's followed us across a few different podcast brands. We are extremely skeptical and we we shit on a lot of stuff. Yeah we do. Uh this feels different. I will say that out loud this feels different than a lot of technologies and a lot of technology. We mock bots for a while. We mocked crypto. I can articulate what's happening. Yeah which is that AI is reaching a point where it is possible to

integrate it with many of the systems that came before. Uh-huh. So things it is it is which is really potentially disruptive. That's right. It's no longer merely its own interesting fascinating world where it draws you wacky pictures or it can make certain you can generate college essays. Yeah. It is now something where just about anything you look around that has a microprocessor in it. Yeah. My you might add to me I do it and see what happens and many of those things with one

little microprocessor in them are $100 billion in history. So suddenly everybody's pay or government or or or or defense. I think we're still filling our way through this. But that's the process that's starting. It's starting. Yeah. Hey this is it's going from this is really interesting too. I am going to over the next five years march this back into X and X might be the food supply. Yes. I think what you're I want to leave with a piece of advice. There's a lot of like now with

AI. Yeah. And it's pretty hokey stuff like I'd be very suspicious at this stage. No one has no one is ready to metabolize all of this just yet. It's definitely there's definitely something there. Yeah. But the legacy platforms and tools that are just like coming out with a release that just has AI bolted on. Yeah. Is it I'd be a little suspicious. I would make the same decisions I made a few years ago while keeping an eye on this. Yeah. Exactly. Yeah. Exactly. Speaking of AI

Paul, we have a little we have like a pinch of AI in a board.com. A board is our startup. Check it out. A board.com. That's right. It's a tool to help you collect, organize and collaborate. I use it. I'm looking at my laptop here. I used it to capture all the links that I was discussing. Yep. I grabbed them over the course of the week. I'm using a test version of our mobile app. Yeah. I'm you listening to this sometime in the future. Yeah. It may be available in the

store. Yes. And I just as I was reading, I just had two buttons and then this morning I wrote the articles a long way and this morning I went back and I reviewed them. Yep. And worked real. Yeah. It's not I mean it looks like a link saving tool but there's a lot going on. This is really a day. This concerns us friends. It concerns us that you'll think this is just bookmark. It's a general data management platform. We have a plan. We have a plan.

Thanks for listening. If you're on YouTube, thanks for watching. Subscribe, like and all those other things to click all the buttons man. Yeah. And if you want to say hi say hello at abort.com. We look forward to talking. Have a great week. Bye.

This transcript was generated by Metacast using AI and may contain inaccuracies. Learn more about transcripts.