
Models for society when humans have zero economic value, with Jeff LaPorte

Jan 02, 2025 · 42 min · Season 1, Ep. 105

Summary

Jeff LaPorte discusses the future impact of AI on human economic value and proposes 'HumaneRank' as a solution for distributing societal surplus in a world where most jobs are automated. The conversation explores the challenges of technological unemployment, the need for meaningful roles beyond work, and various approaches to managing the transition to an AI-driven economy, emphasizing the importance of foresight and proactive policy-making. The discussion also covers potential pitfalls and alternative visions for a post-scarcity society.

Episode description

Our guest in this episode is Jeff LaPorte, a software engineer, entrepreneur and investor based in Vancouver, who writes Road to Artificia, a newsletter about discovering the principles of post‑AI societies.

Calum recently came across Jeff's article “Valuing Humans in the Age of Superintelligence: HumaneRank” and thought it had some good, original ideas, so we wanted to invite Jeff onto the podcast and explore them.

Selected follow-ups:


Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration


Transcript

Welcome back to the London Futurist Podcast. Our guest in this episode is Jeff LaPorte, a software engineer, entrepreneur and investor based in Vancouver, who writes Road to Artificia, a newsletter about discovering the principles of post-AI societies. I recently came across his article, Valuing Humans in the Age of Superintelligence: HumaneRank,

and thought it had some good, original ideas, so I wanted to invite Jeff onto the podcast and explore them. Jeff, thank you for joining us on the London Futurist podcast. Thanks very much, Calum. And thanks, David, as well. It's a pleasure to meet you, Jeff. So Jeff, before we get into the ideas in your newsletter, tell us a bit about your background and experience. Sure. I've spent my career building and leading teams and companies in the software space.

As a student, I studied computer science and physics. After a few years in industry, I realized I had a strong entrepreneurial impulse and started my first company in the mobile space. Since then, my professional focus has been in cloud tech, payments, and AI. My involvement in the AI and machine learning space started in 2016. Lately, I've been increasingly focused on the issues society is facing and will face due to AI over the next several years. Which is very much David's and my sweet spot.

So let's turn to the article I mentioned in the introduction, Valuing Humans in the Age of Superintelligence: HumaneRank. You start with the claim that at some point in the future, nearly all human intellectual output will be outpriced and outperformed by AI, resulting in most humans having zero economic value. I think that's true, and I think it's fairly obviously true if AI continues to improve. It's a matter of when, not whether. But very few people take the idea seriously.

Do you think that's just because it's a scary thought or is there a more sensible reason why people don't really face up to it? I think there are a couple of reasons. The first is psychological. AGI really undermines very basic aspects of people's reality and self-concept, so it's very ripe for people to opt for blissful ignorance.

The alternative is to live with some level of cognitive dissonance and most people resolve that by dismissing the new information. And second, people are using a kind of heuristic. Society doesn't usually get turned upside down. So I'm going to have a bias towards the idea that nothing major will disrupt my world. So people are executing a mental model.

that says: ignore claims, or even evidence, of black swans that are supposedly going to overturn society, because those things just don't happen. I wouldn't call it sensible, but I think it's understandable. It's a mental model that will look like it's correct right up until it isn't. But major upheavals do take place, and we know this one is coming. A new world is going to be built, and we need more people thinking about this.

I personally found this problem of AI driving human economic value to zero really concerning because it's such an unavoidable consequence of the path we're on. I think job displacement could destabilize societies a lot faster than policymakers can react if they're not prepared. So we've got this great opportunity.

on the other side of the transition, but if society falls apart during the transition period, it could all be for naught. Yeah. Jeff, you pointed out two reasons people have for resisting this conclusion.

Have you found any good narratives or good examples that can help to change people's minds when they're in that state of denial? I'm in the process of trying to formulate some strategies like that. I think we've seen some attempts at this in the climate change space: examination of what really moves people's opinions, so that they take it more seriously

and may support measures to increase sustainable energy production. I think that kind of strategy would be a good path for the AI space to take. For myself, I am hoping to publish a couple of articles along those lines. I used to think that self-driving cars would be the thing that woke everybody up. I used to think that they are so amazing

that once people saw these robots driving around themselves, they'd realised, well, if they can do that, then they can certainly do my job in some number of years. And two things happened. Firstly, self-driving cars have taken a lot, lot longer to arrive than expected.

And secondly, when they do arrive in a city, people go, oh, look, there's a self-driving car. And then they go back to thinking about what they're going to have for tea. It turns out even robots driving themselves around all over the place doesn't wake people up.

So I've kind of given up. I don't think people are going to wake up until it's almost upon us. But that doesn't mean to say that we shouldn't be thinking about what to do about it. I think it was Milton Friedman who said he thought his role in the world was to... elaborate some good ideas and then just leave them lying around in case they ever turned out to be useful in the future. That was when he was in the wilderness. And I think our role as futurists is quite often that.

It's to work out some scenarios and work out some solutions to those scenarios and then just leave them lying around in case anybody finds them useful in the future. Yeah. I think that humans themselves have an amazing ability to absorb change and then behave like things have always been that way. And we've seen how the goalposts get moved when it comes to AI.

That happens both in terms of capabilities and safety. We have the Turing test as the original goalpost, and our systems passed the Turing test some time ago. Now, it's true that the Turing test is very imperfect and we have better ways to evaluate AI models now, but major advances in capabilities that a few years ago we would not have expected to reach so quickly.

just get absorbed as ordinary reality, like self-driving cars. Yeah, I think also humans don't really understand exponential processes when they arrive in the real world. When we look at very complex systems, our understanding is inherently linear. We see a change to our system, society or economics, and we think, okay, that's just a perturbation to the system.

Say we increase trade barriers, then dot, dot, dot. OK, import volumes decline, trade partner economies contract, etc. This kind of linear thinking works for almost all the phenomena in our world. It's embedded into our intuition. And so we discount the idea that something radical could happen. So people disbelieve the information delivered to them about radical change. Yeah.

Yes, I think outside of a very small segment of society that's paying close attention, we're sleepwalking towards this radical change. I think that's absolutely right. We are sleepwalking towards an incredibly different future. Before we go into solutions, how we might cope with a world of technological unemployment, how do you think it's going to arrive? I guess I used to think the same as I think most people do, that different...

parts of the economy, different industries, different jobs would get automated one by one. And the people who got automated, some of them would get new jobs, but many of them would stay unemployed. And so you'd have a gradually increasing cohort of unemployed people. I no longer think that.

I think that as long as there are some jobs that machines can't yet do, then there'll be lots of work for all humans. And people are going to have to get retrained over and over and over again. And that's going to be an accelerating process. Then there will be quite a few people who get tired of it and just drop out and make their living however they can. But I think there'll be lots of jobs until the day comes when machines can do pretty much everything that we can do for money.

when our economic value, for all of us, is pretty much zero. And then I think there'll be a very quick phase change. It's a bit like when you boil water: nothing seems to happen, and then all of a sudden the bubbles start flying and the whole thing is obviously boiling. So I think it'll be sudden. I think it might just be a matter of weeks or months when we go from lots of employment for humans to no employment for humans. Do you have a view about how it will happen?

I'm not sure about which industries it's going to roll through first. I do think that software engineering is at the front of this change. There is already job displacement there. And I think it's going to spread through the knowledge economy first because robot technology is lagging AI by some number of years. But I think that long before...

we reach, say, a majority of people being displaced by AI, the core problem is going to be how we manage that societal displacement. Because for the folks who are displaced, we can give them a UBI, but I don't think that's sufficient. I think it's very likely that people will be quite unhappy to be displaced from their jobs, even if their material needs are taken care of afterwards.

So I really think we need to be prepared by having certain roles that people can go into, not necessarily for the purposes of adding productivity to the economy, but... to give their own lives some meaning and prevent them from having a problem transitioning. I feel this is incredibly important because without having a positive narrative to share...

people will inevitably push back psychologically, possibly subconsciously, against this whole possibility. And so I don't think our only task as futurists is to have ideas lying around. I think we have to develop more compelling narratives. Again, to compare with climate change: what has helped to move people away from lots of excuses for dirty energy is the visibility of a path to a possible green economy, the fact that people could have successful businesses with solar power and wind power and so on.

So there was a positive vision of the future there that could help people to embrace the transformation. So what is your vision for how people will have a meaningful life, given that, as you say, just giving them money probably won't be the answer? The way I see this happening is that we do have a situation that will produce abundance.

But that abundance needs to be distributed somehow, because it's not naturally going to distribute itself to humans in the money economy, for the reasons we already touched on. So with that as our premise, the question becomes: what mechanism can we create that would accomplish this in a way that satisfies certain principles that I think we should all want? Those being: preservation of the price-signal mechanism in the money economy,

because we don't want to give up the efficiencies of the market economy. The preservation of individual freedom: I should be free to buy and sell and have property. The preservation of human achievement: humans should be rewarded for contributions to society in whatever form others might find valuable. And that everybody should receive what they need for a decent life.

The HumaneRank proposal is intended to achieve those goals. As to the specific mechanics of the proposal, we start with a UBI. But for distribution of societal surplus beyond that, there should be a fair way to allocate that surplus. Luckily, we have an algorithm like that: the algorithm that launched Google, called PageRank. For anyone who is not familiar with PageRank, it simply assigns a score

to web pages based on how many other pages link to them and how important those pages are. The great thing about PageRank is that the rankings are not decided by Google. They're decided by the combined opinions of the web. We can adapt this algorithm for the distribution of societal surplus. So take humans in place of the web pages and endorsement points in place of links.

Now everybody is free to distribute their endorsements in whatever way they wish. If you want to give them all to your sister, that's fine. If you want to drip them out throughout your day to people who do something nice for you, that's also fine. Every month, we recalculate the rankings in this endorsement graph. Then we distribute the UBI and distribute the remaining societal surplus in some proportion to the rank outcome.
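The mechanics described here can be sketched in code. This is a minimal, hypothetical illustration, not anything from the article: the function name, the damping factor, and the three-person example are all assumptions, and a real system would also need identity verification and vastly more scale.

```python
# Hypothetical sketch of the HumaneRank idea: PageRank-style power iteration
# over an endorsement graph, where each person splits a monthly endowment of
# endorsement points among other people instead of pages linking to pages.

def humane_rank(endorsements, damping=0.85, iters=100):
    """endorsements: dict mapping giver -> {receiver: points given}."""
    people = set(endorsements)
    for given in endorsements.values():
        people.update(given)
    people = sorted(people)
    n = len(people)
    rank = {p: 1.0 / n for p in people}

    # Normalize each person's outgoing endorsements so they sum to 1.
    weights = {}
    for giver, given in endorsements.items():
        total = sum(given.values())
        if total > 0:
            weights[giver] = {r: pts / total for r, pts in given.items()}

    for _ in range(iters):
        new = {p: (1 - damping) / n for p in people}
        for giver, outgoing in weights.items():
            for receiver, w in outgoing.items():
                new[receiver] += damping * rank[giver] * w
        # People who endorse no one spread their rank evenly (dangling nodes).
        dangling = sum(rank[p] for p in people if p not in weights)
        for p in people:
            new[p] += damping * dangling / n
        rank = new
    return rank

# Toy example: three people endorsing each other in different proportions.
scores = humane_rank({
    "ann": {"bob": 10},
    "bob": {"ann": 5, "cat": 5},
    "cat": {"ann": 10},
})
```

As in PageRank, the scores always sum to one, and an endorsement from a highly ranked person is worth more than one from a low-ranked person, which is what distinguishes this from a simple vote count.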

So this result is fair, and it was determined by the human population at large. It also has some interesting properties. It has the dynamics of an iterated game in the game-theory sense, and iterated games have been shown to produce pro-social behavior. The motivation to grant your endorsement to someone is basically whether they've provided you utility or made you happy in some measure.

And finally, we've solved the core problem that humans would not be able to earn in the ordinary economy. We'll be right back after a quick break.

Doesn't this run the very severe risk that you'll have a small minority of people who get voted some utility by many, many people, in the way that pop stars or football stars are? People will maybe give all of their transferable credit, or at least a large chunk of it, to the people who are stars. So you'll have a small number of people who've got almost everything, and then everybody else is scrabbling around on virtually nothing. Isn't that a serious problem?

I think that in dynamical systems like this, it is common that the distribution that emerges is a power-law distribution. However, we can choose to change the proportional factor. The algorithm is just producing the ranking, but the curve that we put over the ranking, almost like progressive taxation, can still be applied here.
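This "curve over the ranking" could be sketched as follows. Everything here is an invented illustration: the exponent and floor are hypothetical policy knobs, not parameters proposed in the episode, and the input scores are made up.

```python
# Illustrative only: flatten a power-law-ish ranking before paying out the
# surplus, the way a society might apply a progressive curve over raw
# HumaneRank scores. "compression" and "floor_share" are hypothetical knobs.

def distribute_surplus(scores, surplus, compression=0.5, floor_share=0.3):
    """Pay a flat floor to everyone, then split the rest by compressed rank.

    scores: dict person -> positive ranking score (any scale)
    compression: 1 pays strictly in proportion to score; values below 1
        flatten the distribution (0 would give equal shares).
    floor_share: fraction of the surplus paid out equally to everyone first.
    """
    n = len(scores)
    floor = surplus * floor_share / n
    remaining = surplus * (1 - floor_share)
    # Raising scores to a power < 1 shrinks the gap between top and bottom.
    compressed = {p: s ** compression for p, s in scores.items()}
    total = sum(compressed.values())
    return {p: floor + remaining * c / total for p, c in compressed.items()}

# Made-up scores for three people and a made-up surplus to divide.
payouts = distribute_surplus({"ann": 0.4, "bob": 0.35, "cat": 0.25},
                             surplus=1_000_000)
```

The ordering produced by the ranking is preserved, but the ratio between the top and bottom payouts is smaller than the ratio between their raw scores, which is the "progressive taxation" effect described above.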

Society's preferences in that sense, which could be the result of a democratic vote or an election, change that kind of proportional payment of the surplus to different people. It should be paid in a way that society feels comfortable with. I like this idea, and I like the comparison with Google's PageRank, because after all...

How Google operates these days is a lot more sophisticated than the original PageRank algorithm. First of all, there was a whole bunch of people who tried to distort things with search engine optimization, and there's a whole bunch of clever people in Google who try to keep things as fair and transparent as possible. Secondly, the algorithm is adapted to what people actually click on. So the rankings of search results alter

not just from the historical links, but from what people are choosing. I imagine there will be an ongoing conversation about this, and society will figure out what's an appropriate answer. But we have to start from somewhere, and your HumaneRank looks like an interesting place to start from. So is it too early to say what the reaction has been to this? Have you got economists ringing you up asking how to apply this?

I have not, but the thought has crossed my mind that I should probably put this together in a more formal sense, as a white paper or something. That is in my plans. Just to address the comparison with black-hat SEO techniques: the dominant technique there is that the black-hat SEO operator will create a whole bunch of web pages in a cluster in order to send all of their ranking, through links, to the page that they want to boost. Now, that actually can't happen in HumaneRank, because you need to be a human.

And we know who the humans are. So you can't just spin up another thousand humans and ask them to send all of their endorsement points to you. So I think that HumaneRank is actually not subject to a lot of the more scam-oriented exploits that PageRank is. One of the issues I see with HumaneRank

I think it's a very interesting and innovative approach to the problem. But it jars with me, and that's probably significantly because of the Black Mirror episode called Nosedive, where a girl called Lacie Pound lives in a society where everybody rates each other on a five-star social media system. And what rating you have directly impacts your social standing and your access to services and opportunities, which sounds like HumaneRank, really. And it's a dystopian story. It goes horribly wrong.

She falls foul of a few important influencers and gets a terrible reputation, and then just tumbles down through the system and can't recover. I forget how she ends up, but it's pretty grim. I do fear that people would end up voting on irrational grounds, and that would have some very bad effects. But maybe there could be ways of mitigating that and producing overall a wholesome process.

I've actually seen the Nosedive episode. That comparison is one that some people have brought up already. The way I would answer it is that, first of all, my biggest goal with the piece was to make people look at the problem: the problem being that humans will not be able to earn their way in the money economy. So we need a way to distribute societal surplus without turning to authoritarian systems. And that's quite important to me.

So there are commenters who just say, oh my God, there's a ranking system, I hate that, and stop at that point. And I don't think that's what you're doing, Calum. But to those people, I would say you haven't understood the problem, because if you had, the problem itself should really worry you. And given the problem, majority or total human job displacement, if you're not putting forth an alternative solution, I can't really take you too seriously.

Now, as far as the comparisons with Black Mirror and the Chinese social credit system go, those systems are really the opposite of the HumaneRank proposal in that, A, they're confiscatory: they're taking things away from people. They penalize people by taking away rights or bank account balances or opportunities. And their purpose is to enforce control over individual behaviors that are not criminal

and are not civil offenses. So I think the resemblance is a bit more superficial than it might seem at first. Those systems are using a system of votes to impact financial outcomes, so there's a similarity there. But their goals and methods are otherwise in opposition to HumaneRank. And there's no sort of downranking

or abusive angle to HumaneRank. Okay, let's come to my main contestation of the idea, which is that it isn't actually necessary. It seems to me that technological unemployment is going to happen. And when it does, as you said (and we skated over this a bit, so we should probably elaborate), if we play our cards right, we will have an economy of abundance. The machines will be generating enormous amounts of value. They will easily be able to produce

more than enough food, more than enough transport services, more than enough information services. There'll be an abundance of stuff available, and we can simply make it available, not for free, but for a very, very low price. It will all be very, very cheap, because energy will be very cheap and there are no humans involved in the production process. So that'll make it cheap, and AI will make all the production processes very efficient. So everything's very cheap.

You can give people a fairly modest amount of money or some similar token, and they can buy enough stuff to give themselves a really good standard of living. It means taxing the people who own most of the assets.

Maybe there'll be a few people still working and doing the few remaining jobs that machines can't do, and those people will indeed need to be taxed. But the taxes won't be onerous because the stuff is so cheap. And so what you have is a society of most humans, the great majority of humans.

not earning an income in any way whatsoever, but having access to a very comfortable lifestyle. And I don't think that's a problem. I know that when people do take technological unemployment seriously, they don't go, oh, we'll all starve. They go...

oh, we won't have meaning. And I think that there are various kinds of people who show you don't need to have a job to have meaning. The examples I always give are aristocrats, who for centuries, most of them didn't have jobs, and they had the best lives of anybody in their society.

comfortably off retired people. And because I'm that age, I know quite a few people like this and they're very busy, they're very happy and they don't want another job. Thank you very much. And children, children do not believe you need a job to have a fulfilling life. So I think that we could all do very nicely indeed. If we had lives of leisure with abundance, I don't think we need to create artificial contests or artificial ways of establishing value among ourselves.

I do also think that we want to retain the possibility for people to get rich if they want to, and to engage in projects which produce value, specific values to themselves and their families. So I think you could have a minority of people trading in original Aston Martins or beach houses on a very prestigious beach or so on. I wouldn't want to do away with that. But I think most people

won't want to or won't have access to that sort of capability. But they'll live really, really good lives, very fulfilled lives and abundant lives. Yes, I think that... a primarily leisure-based lifestyle could work for a large segment of society, and I'm not anti-leisure. I'm just unsure if it can be the organizing principle of the future society. It might work.

But I'd feel more comfortable if we had a backup plan, especially during the transition period between the onset of serious job displacement and the stable state that we want to get to on the other side. If people don't have a new role to take on when they're displaced by AI, even if they receive the UBI, they may be pretty unhappy with the change.

So I think having roles available to give people continuing purpose needs to be part of the transition, along with UBI. Otherwise, I worry that we risk violence and social unrest. And even after the transition... There will be a segment of society that may still want to smash windows and burn cars, and they'll have more time to do it. I don't think that will completely go away, even once people's needs are completely met.

Yeah, I think that's a very good point. I too worry that the transition will be the most difficult time. I remember when I got interested in all this stuff way back in 1999, when I read Ray Kurzweil's The Age of Spiritual Machines. And I thought, this is great, this image of the future is wonderful, but getting from here to there sounds really hairy. It's going to be bumpy as hell. And he didn't seem to be thinking too much about that. And it's a tough thing to think about.

And so I'm delighted that you're coming up with different approaches. My approach, which I wrote about in my book, The Economic Singularity, I call it fully automated luxury capitalism. It may well be an overly hopeful belief that the scenario could be just that everybody's happy with a life of leisure in a world of abundance, and that's good enough.

It's important, I think, to have a number of different solutions. And we'll probably have a sort of competitive world in which different countries will try out different solutions, and we'll see which one works out best, hopefully without too many disastrous results in the countries that try out systems that don't work. Yeah, I think I agree with that. I'd like to see experimentation with systems.

I have to say that when I first started thinking about this problem of humans having zero economic value, it really stuck in my mind. It worried me, and I just didn't see any way to avoid it. And my concerns were around scenarios where, if humans in a society truly have zero economic value, then that government could do pretty awful things with those people, if there's no cost to the government or to the ruling system or elite.

I think it's really important that we face that issue head on. I'm not necessarily saying that HumaneRank will be it in the end. But we need more solutions to be proposed, because if we don't prepare, if policymakers aren't prepared, I think we're really inviting a very difficult time. So one thing that's been happening recently is that two humans have been playing each other for the World Chess Championship crown: one 18-year-old from India and a slightly older guy from China.

Now, both of them would be comprehensively defeated by any AI software. But my goodness, there's been so much interest, not just in India and in China, but worldwide in this game. The two of them are fascinating characters. I see this as a kind of example that it won't just be leisure in the world ahead. There will be challenges. When I play golf, I get frustrated when my ball goes in the bunker.

But I wouldn't ask for all bunkers to be removed from the golf courses, because that would be a less interesting game. So I imagine that it won't just be lazy leisure, as in the film WALL-E, but AIs, ideally, will be challenging us and encouraging us to push ourselves to various limits, with and without the use of technology.

But I agree, this needs to be fleshed out, and it needs to be explained particularly for the transitional period, because when there is a sustainable superabundance in the future, things will be much easier. But in the interim period, when more and more people feel themselves to have no value, and as you say, when politicians no longer depend on soldiers or factory workers, society doesn't need them anymore. That is a frightening time.

So we do need to think a lot more about how we engage larger groups of people in ways that will give them all a sense of meaning. So I welcome your analysis of this. One of the challenges that David has sometimes thrown at me is whether there will be a gap between technological unemployment and the arrival of superintelligence. I always assumed that there would be a gap of some years, but actually it might be that you really do need superintelligence

to be able to fulfill all the economic functions of humans. And so there's no gap between the two. Now, if that happens, then maybe we don't need to worry about it because once superintelligence arrives, we've got no control and no say in what happens anyway.

We become the chimpanzees in the picture, and superintelligence makes all the decisions. To be honest, I believe that just with the AI models that we have right now, and the capabilities that we see coming with the next generation of models, that's probably enough to displace enough jobs that we are well into that transition period, even if superintelligence never showed up.

So I really think that facing the issues and the problems of that transition period is going to be the most important challenge for leaders over the next decade. That's interesting. It seems to me that, as we've said, there's going to be a lot of automation. And it's been going on since before the Industrial Revolution. Many of the jobs which people do now didn't exist 50 years ago. There will be this churn.

But people will go up the value curve. As a recent TV series says, one door shuts and another window opens. The current models we have, and even models which are quite significantly advanced on them, won't be good enough to do all the things that humans do. Humans have an understanding of the wants and needs of other humans, which machines won't have, perhaps until they become conscious or maybe until they become close to super intelligent.

They can operate as therapists even now, but they don't have a full mental model of the world in a physical sense and in a social sense. And that's going to be very hard to replicate. And you need that to do quite a lot of the jobs that people do and will do in the future, which is why I think, and I could be wrong, that there's going to be lots of jobs for humans until the day comes when there are no jobs for humans. That's why I think it'll be a phase change.

I do wonder about this question of whether AI models would be able to have this kind of human touch. I'm not sure that it will not happen. The success of visual models in the creative space and the musical space has really surprised us all. If you go back to the sci-fi depictions of AI in the past, this happened completely in the opposite order to what we believed it would. We thought that AI would take over all the rational, reasoning-based tasks first and then get to the creative stuff.

But in fact, we've seen the most displacement so far in the creative space. As you say, Calum, humans do have certain attributes that are important to other humans. And I think that as this displacement rolls along, a lot of the roles that people will probably gravitate to, and that we could provide for people, will be human-to-human service roles. Maybe everyone can have a therapist or something, and maybe that would be good for humanity.

I think that in addition to the idea that we're going to have total abundance, that in itself could be another type of abundance. And I think it underscores that when we talk about humans having zero economic value after the AIs displace us, those humans still have value, other types of value. Yeah. And I know that Professor Stuart Russell...

Last time I spoke to him about this, he thought that we would not actually have technological unemployment, because there would be services that humans can render for each other which machines will never be able to replicate. I actually personally think he's wrong about that.

I think that GPT-100 will be as good at giving advice to another human as I am. In fact, it'll be a hell of a lot better because it will know the outcome of every bit of advice giving that's ever happened and it won't forget any of them. Whereas I just have my very limited window on the world. For a while, they will lack the context. So it's going to take a while. But I don't see that model lasting. We can all be each other's nurses and therapists. It doesn't seem plausible to me.

This makes me think about one other angle to what we've been discussing that we haven't touched on, which is: yes, machines may be able to do all of these things for us, but I also worry that if humans are not doing some of these roles, and doing their best to achieve high levels of performance in these roles, that's also a formula for potentially losing control to AI.

Another of the many problems we'll face is the problem of identifying genius in young humans, cultivating that genius, and at least getting a certain segment of society to still be at that high intellect level, in order for us to try to maintain a healthy relationship with our AGI or superintelligent neighbors. So I buy into that.

I have the vision that the AIs will be challenging us to do all kinds of things by ourselves. They'll be sending us out like Boy Scouts to make a fire in the forest without matches and without lighters. They'll be challenging us to work out square roots with pencil and paper. I have the vision that there will still be a vitality in humans. But if we create AIs that just make us soggy...

that will be a terrible outcome. The vision I have for the future is an AI that challenges us to go far beyond what we're able to do now. But I think you're right, Jeff, in what you said earlier, that even today's AIs will probably displace many more jobs than they have so far. It just takes a while for people to figure out how to take advantage of new technology.

Erik Brynjolfsson and Andrew McAfee often talk about the slow adoption of electrical power: electrical power didn't displace steam engines quickly. People had to figure out how to plumb it into their factories, and realize that it wasn't simply a matter of duplicating previous processes.

I think there are a lot of people struggling to use AI today, because they've got the mindset of how things used to work. When they can adapt, they'll figure out that many jobs are no longer needed at all. So I am with you that, just on the basis of the AIs which are already here, we've got to prepare for a bigger disruption than has already happened, looking around the world.

Do you think there are some politicians, some countries that are more alert to this, that are more likely to be able to adapt and thrive? Or are we all in the same mess together with no clue how to move forward? I do think there are different levels of competence being shown in this space of potential regulation or national policy around AI. I live in Canada.

There was an announcement last week from our industry ministry. It was a very typical example of Canadian industrial policy: basically, take this size of money, put it there; take that size of money, put it there, but with no real strategy behind it. And I would contrast that with what we're seeing out of the United States now. In the United States, some of the most informed thinking...

around the future that AI is bringing us is actually coming out of their defense and intelligence establishment. There is a report that was presented to Congress that touched on a number of these topics. If you read the sections touching on AI, they are remarkably well informed. As technologists, we're used to expecting our leaders to have little clue what's going on

and to be far behind the curve. But I think that the American establishment is actually further along than many of us would think. Yeah, the American establishment is absolutely fascinating at the moment, because it's obviously just been taken over by the mafia family. So it's an interesting state of flux. The West Coast of America is where most of the cutting-edge AI comes from.

In Silicon Valley, and I imagine probably also in parts of Seattle, every conversation is about AI and every conversation is about what AI is going to do to us in the coming years. So it would be astonishing if they weren't coming up with some reasonable thinking about the things we've been discussing.

And some of that is going to leak back into the defense and general government establishment, just because those people, the defense people, go to Silicon Valley from time to time to talk to the people in the tech giants. And the tech giants have also got much better, and much more active, at lobbying Washington, so they spend time there as well. So it would be astonishing if America wasn't waking up a little bit more than other countries.

Because in other countries, there's possibly no need to at the moment, or at least no observed need to. But I'm not seeing any coordinated attempts to create a program. There ought to be a department for thinking about what AI will be like in ten years' time, a department for thinking about how we deal with technological unemployment.

Some years ago, Stuart Russell, who I mentioned earlier, suggested locking up a group of economists and science fiction writers in a room and not letting them out until they came up with a solution to technological unemployment. I thought that was a great idea, and I don't know why no government has done it yet. But that's what we're for, I suppose, isn't it? I suppose so. Yeah, I think that would be a great idea.

There is certainly the talent and the interest out in the private sector. We're recording a podcast about AI; there are a lot of podcasts and a lot of newsletters now about AI. It's a real growth industry. So it would be smart, I think, to try to organize that effort more.

The leading AI labs in the U.S. certainly have close contacts inside the government. And I don't think we should leave it just to them to have that relationship with the government. There needs to be a broader engagement, specifically to help clean up some of the mess that will happen during that transition. As I see it, the role of us futurists isn't just to spread some ideas around.

It is to highlight the thinking that we discover which is actually a cut above the rest. Because there are, as you said, a huge number of podcasts and YouTube channels about AI, and a lot of it is disconnected, I think, either from the radical possibilities ahead or from a sufficient understanding of real-world pressures: the human and political and economic side of what we're talking about.

That's what Calum and I try to do in this show. We try to give a voice to people like you, Jeff, who have got something interesting and credible to say. And we want to keep on highlighting the best thinking, and telling people who are discovering this field, getting alarmed, and wondering what can be done: here are some good sources. So I'm very glad to have your advice.

I don't know if you can point us to other people you have found thoughtful in your own journey, the podcasts or newsletters that you read first in the morning. Well, I would say most of the podcasts and newsletters that I see right now are focused more on the nuts-and-bolts technical approaches to implementing AI inside companies.

That's also somewhat my day job, but I see a lot fewer people focusing on the societal impact. It seems to be a bit of an afterthought. And I think we've seen that movie once before, with the introduction of smartphones and social apps. Having a bit more foresight this time around, for what is going to be a much bigger revolution to our reality, is something I think we need to do.

Yeah, I think that's right. Jeff, I'm glad that you're doing your bit to contribute to the thinking that we need, because we need a lot more of it. Thank you very much indeed for joining us on the London Futurist Podcast. Thanks, Calum and David, for having me. This was fun. It's been a real pleasure.

This transcript was generated by Metacast using AI and may contain inaccuracies.