
Post-labour economics, with David Shapiro

Jan 23, 2025 · 43 min · Season 1, Ep. 107

Summary

David Shapiro discusses post-labor economics, exploring how automation and AI could displace human workers, requiring a new economic model. The conversation covers the challenges of integrating AI into society, the potential for redistribution of wealth, and the importance of maintaining economic agency. Shapiro also shares insights on investment-based futures and the role of blockchain technology.

Episode description

In this episode, we return to a theme which is likely to become increasingly central to public discussion in the months and years ahead. To use a term coined by this podcast’s cohost Calum Chace, this theme is the Economic Singularity, namely the potential all-round displacement of humans from the workforce by ever more capable automation. That leads to the question: what are our options for managing the transition of society to increasing technological unemployment and technological underemployment?

Our guest, who will be sharing his thinking on these questions, is the prolific writer and YouTuber David Shapiro. As well as keeping on top of fast-changing news about innovations in AI, David has been developing a set of ideas he calls post-labour economics – how an economy might continue to function even if humans can no longer gain financial rewards in direct return for their labour.

Selected follow-ups:


Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration


Transcript

Welcome back to the London Futurist podcast. In this episode, we return to a theme which is likely to become increasingly central to public discussion in the months and years ahead. To use a term coined by this podcast's co-host, Calum Chace, this theme is the economic singularity, namely the potential all-round displacement of humans from the workforce by ever more capable automation. That leads to the question...

What are our options for managing the transition of society to increasing technological unemployment and technological underemployment? Our guest, who will be sharing his thinking on these questions, is the prolific writer and YouTuber David Shapiro. As well as keeping on top of fast-changing news about innovations in AI, David has been developing a set of ideas he calls post-labor economics: how an economy might continue to function even if humans can no longer gain financial rewards

in direct return for their labour. David, welcome to the London Futurist podcast. Thank you so much for having me. It's good to be here. Thanks for joining us, David. Before we turn to the theme of post-labor economics, what should our listeners know about you? The number one thing is just a little bit about my background: I spent two decades in technology.

Cloud automation, specifically. Near the end of my career, I started adding more automation, machine learning, that sort of thing to my toolkit. Then, with the advent of generative AI about four or five years ago, I completely pivoted to focus on alignment research, automation, cognitive architectures, what we now call agentic AI. And that's actually why I started my YouTube channel: I realized that I needed a platform to share all of my work.

Over time, as people came on board with, okay, we're building increasingly intelligent machines, people would ask, well, how do we align those machines? Or how do we align humanity? That just sent me down one rabbit hole after another, looking at history, philosophy, ethics, economics, geopolitics. And now I've arrived at my post-labor economic theory as one of the primary pillars of my platform.

If we talk about post-labor economics, it implies that we won't be able to continue selling our labor. But not everybody believes that. Some people point to history: technology took some jobs away, but it created plenty of others. What do you have to say to people with that view? That is the lump of labor fallacy,

which is the idea that there is a finite amount of labor to do, and once you automate it all, everyone can go home for the day. So it's almost like an inverse of the lump of labor fallacy that I'm talking about. Because, historically speaking, technology has not necessarily created new jobs; rather, it created an abundance which allowed you to allocate resources to other demands.

Another general principle that economists have come up with over the years is that human demands are infinite. So the first principle, or the originating principle, is that humans always want more. If you have one house, you want a bigger house. If you've got a nice car, you want a nicer car. If you have a boat, you want a yacht. But even when you assume that human demand, that consumption, is infinite, there's no law that says those demands must be met by other humans.

It's just that right now, the best way to meet those demands is through goods and services ultimately provided by other humans. But there's no law of physics or economics that says those goods or services must be rendered by humans, or that humans must participate in their provision. So the fundamental assumption of economics breaks down if that comes to pass.

We can talk about what the barriers or thresholds to get to that point would be, because the assumption, the foregone conclusion, is that eventually automation through robots and AI will subsume labor. As you mentioned, some people still don't believe that, or they debate it. When you talk about thresholds and barriers, I think we should look at them a little bit, because many people say, yeah, in principle, eventually this is going to happen, but it's not going to happen anytime soon.

And so it's a bit of an academic discussion, this post-labor economics stuff. The litmus test that I have created is better, faster, cheaper, safer. Historically speaking, whenever a technology surpasses humans on even just two of those metrics, but ideally three or four of them,

it becomes not only economically inevitable that the technology replaces human labor, but often an ethical imperative as well. For instance, when a job is too dangerous for humans to do, eventually we replace it with machines. Here's an example. Self-driving cars are still being developed. However, already...

The incident rate of fully self-driving cars is something like five times lower than that of human drivers. Now, they make different mistakes than humans, but you get five times the number of rider miles per collision. Extrapolate that out: what happens when those cars are 10 times, 100 times safer than humans? If that trend continues, it might even become a legal imperative to say maybe human driving shouldn't be allowed at all. So: better, faster, cheaper, and safer.

It's the same reason that we use tractors instead of oxen: they're better, faster, and stronger than the alternative. So we see this trend continue for any kind of cognitive labor, and for any kind of manual labor that requires high dexterity or a high-friction environment, such as building houses. Construction work is very dangerous. It's also expensive: fully a third of the cost of building a house is in human labor.

So the economically rational choice is, once you have an alternative that is superior to humans, again, better, faster, cheaper, safer, not only should you adopt that new technology from a competitive perspective, you might ultimately be legally required to adopt it. It's the same reason that we had to move to electronic health records. That makes perfect sense. I agree with everything you've said, really.

Self-driving cars are a very interesting case, because I thought that when they became ready for prime time, they would spread out around the world pretty fast because, as you say, they're safer and they involve fewer humans dying, which is generally thought of as a good thing.

And I also thought it would be the thing that would wake everybody up and make everybody realize, wow, technological unemployment really is coming. It's actually done neither of those things. It's disseminating really, really slowly, and most people are busy ignoring it. Waymo now gives more rides in totally self-driving cars, with nobody sitting in front, than Lyft does, which is the second-biggest ride-hailing service.

Yet they're not going to offer self-driving taxi rides, as I understand it, in their next city, which is Miami, until 2026. So they're not going to start in any new city this year. That's really slow. Now, I think it's very laudable for Waymo to be cautious and careful. After all, we don't mind being run over by humans, but we really object to being run over by robots, so it'd be easy to derail the progress. But it's still surprisingly slow.

Why do you think it is so slow? As an automation engineer, I know that the technical limitations are never the ultimate bottleneck. It's always getting approval, whether it's getting approval from your boss, from your manager, from your director, or in this case, from voters. The term here is integrating into society. Here's another example: the legalization of psychedelics and marijuana, just as a parallel track.

There is overwhelming evidence that marijuana is better for society than alcohol, yet in many nations, it's still illegal. And there's a lot of historical reason for that. Likewise, there's a preponderance of evidence that psychedelics are a remarkable medicine that can be used to treat anxiety, depression, PTSD. That is not debatable, but it's still not legal everywhere.

The question is, how do you integrate these substances with society? In the case of artificial intelligence, how do you integrate this new technology into society? When you look at history, for example the history of the automobile, a lot of the infrastructure and legal apparatus that was required to successfully integrate automobiles into society was not obvious. Traffic signs, traffic lights: that's not an obvious technology. That's a downstream requirement. Traffic laws.

Seatbelts. They weren't introduced broadly until, what, the 50s or 60s, by Volvo? And even then, the companies resisted putting them in, because that was a tacit admission that these things are dangerous. But now, of course, it's standard practice.

And then drink driving laws, making it illegal to operate a car while intoxicated: that didn't come, I think, until the 70s or 80s in some nations and states. So again, there's a lot of conversation that has to come around. And to your point, it's a known risk every time you get behind the wheel that you might drive off the road or hit someone in a head-on collision. We're simply comfortable with understanding the risks and the kinds of mistakes that humans make.

Whereas these self-driving cars make fewer mistakes, but they make entirely different kinds of mistakes. There was an example where a self-driving car ran over someone, didn't understand that it had run over someone, and kept driving; it ran over them twice. A human could make that mistake, but it's scarier when the machine makes it. Who do you sue? The manufacturer, the operator, who? It's just alien. It gives us the heebie-jeebies, as we might say in America.

It hasn't crossed the uncanny valley. So there are delays: the delays of integrating technologies into society, delays in figuring out all the side issues, the equivalents of traffic signs and traffic lights and seatbelts. How long are these delays going to hold up the transition to

the situation in which few people can earn a living by selling their labor? Is it going to hold it up for decades, or are we talking about a transition sometime in the 2020s and 2030s? What I will caution is that history often repeats itself, but more often than not, history just rhymes with itself.

Historically speaking, whenever a brand new technology has been introduced, at least in the last century, it usually takes about one to two decades for full integration. In the past, it was much longer. It took a full century for the steam engine to be really successfully commercialized and deployed. We don't have a century this time around, just by virtue of all the other technological infrastructure. The internet allows for far faster distribution.

Global supply chains allow for far faster distribution. And legal frameworks aside, political willpower aside, there's also a tremendous amount of skepticism in businesses. AI is still an unproven technology to some, and to others, it's a game changer. When I started my technology career, cloud virtualization was a pipe dream. Nobody believed in it. Now, everyone does it.

Amazon, Google, Microsoft: that is their flagship service offering. I knew it back in 2007 because I was at the leading edge of that technology, and I said, this is very obviously the way of the future. It took 10 years before the rest of the world caught on, but now it's a foregone conclusion.

Likewise, I think we're kind of at that same inflection point where the leading experts are saying, okay, this is very obviously the way of the future. Now it's a matter of building consensus and establishing best practices. We'll be right back after a quick break.

If you spend way too much time online, first of all, same. And second, you need Promoguy Talk Pills. I'm Rawal, and with Alex, we break down everything happening in tech, digital marketing, and the internet's latest chaos. No boring corporate talk, just real discussions with real... So go on, press play. You know you want to. So you think maybe a decade until the economic singularity, maybe a decade until, say, 90% of people are unemployable? 2035? That sounds pretty reasonable.

When you think about everything that is happening alongside this, first and foremost, artificial intelligence models are very intelligent already. Robotics companies are making huge strides, and it's not just one. There's no first mover in the robotics space. There are half a dozen American companies, plus at least two or three Chinese companies, all leading the charge in robotics. And that is going to have a huge knock-on effect.

Just the first-order effects of dislocating basic labor, but then the second-order effects of increasing the intelligence of these robots. Because remember, you've already got ChatGPT in your pocket, with PhD-level knowledge on many topics. Well, now it's going to be in every robot. And the current forecast that some forecasters are looking at

is one robot to every three humans, or vice versa, three robots to every one human, in 10 years. The economic impact of that is difficult to imagine. So is it your view that pretty much everybody, I mean, everybody except a tiny number of people, will be unemployable? Or do you think most people will be unemployable, but there will be jobs for others? I actually had a really hard time figuring this out.

The original approach that I took was looking at the supply side. Labor supply: saying, OK, there's a pool of people and entities that are capable of providing certain tasks, goods, and services. And what I realized, after doing a lot of thinking and reading, talking with real human economists, and also talking to AI, is that looking at the supply side doesn't really make a lot of sense, because the assumption is that before too long,

there's not going to be any task that a human can intrinsically do better than a machine. But that's only half of the equation. The other half is looking at the demand side, which is: in what cases will a human pay for another human? In which situations will you pay a premium for the privilege of interacting with a human, or of getting a human to provide you a good or a service? If you go to concerts or sports games,

or you watch YouTube, you watch your favorite influencers. There are plenty of conditions in which human preference says: I don't care if a machine can do it, I still want a human in front of me. And so by looking at the demand side, I think we'll get a better understanding of what kinds of jobs are likely to persist.

For instance, even us podcasters, humans just like hearing the opinions of other humans. We're biologically hardwired for that. This could be just a coping mechanism on my part, but I think that us three here are safe, at least for now. That's nice to hear.

I'm not entirely sure I agree, but it's nice to hear. So your view then is that there will be jobs, not just work, but paying jobs, perhaps even jobs that pay enough to live on for the foreseeable future. Yeah, certainly some are going to go away very quickly.

And one thing that I will point out is that employment, as we understand it, I don't think will be the primary driver of the economy in the long run. There's a book that I'm working on with my co-author and co-host, Julia McCoy, called The Great Decoupling.

The primary thesis is labor and business are about to get a divorce. We've been in a codependent marriage for the last several thousand years, and nobody's really happy in this arrangement. Businesses don't like how expensive and needy and demanding human workers are.

And human workers don't like how exploitative and callous businesses are. It's not working for anyone. We all want to go our separate ways. Okay, so imagine that businesses have this economic imperative to replace us with AI and robots. Great. But then the next problem becomes: what if we lose our jobs? The fundamental assumption of politics and economics is that you have labor rights. You have the right to exchange your labor for wages, and that gives you the ability to demand,

through your wages, goods and services, and that keeps the economy going. So then you end up with this deflationary spiral of goods and services become cheaper, but nobody has any money to pay for them. So what happens to the economy? There's several layers to look at.

You can look at it on an individual basis: assuming that I need money in the future, where do I get the money from? That is the primary social contract that I think will disintegrate. And one other thing that I want to say: from a first-principles perspective, the idea that we'll be able to provide goods and services more abundantly and much more cheaply is a good thing, right? AI and robots will make everything cheaper, more abundant, that sort of thing.

So the only real problem we have to figure out is that new social contract. Yeah. That new civilizational operating system, as I'm calling it now, which is: okay, if we lose the labor angle, there's got to be some other way to use the market to efficiently allocate resources. Yeah. The primacy of the market is something that the West has really figured out over the last couple of centuries, but...

If the labor market itself goes away, we need another kind of market mechanism. And I haven't figured that out yet. Exactly. Well, yeah, the distribution question is the big question. And you're absolutely right. The market is a tremendously efficient mechanism for allocating any kind of scarce resources. And there will always be scarce resources. But before we go on to the distribution question, I just want to home in again on this idea that...

There will be jobs left for humans because humans will be happy to pay for other humans to do things ad infinitum. I think that's true, but I think it will be star economy type jobs. You know, most podcasters don't make any money. There's an enormous crowd of podcasters, and a very small number of them make huge amounts of money, and almost everybody else makes no money at all. The same is true in music, true in books. It's true in most creative endeavours, actually.

And I suspect those are the kind of jobs that will remain. Things like human butlers, where in a sense, perhaps we could all be each other's butlers. But I just don't see that happening in a world where a machine can do a better job than the human butler.

Some people might pay in order to have a human bowing and scraping. Some rampant egotists might like that. But will they have enough money to do it? Will it be a crazy thing to do? And will there be that many people who want to do it? I expect the jobs that are left...

There undoubtedly would be some. There'll be jobs for people who make ultimate decisions at the top of organisations, or the top of countries, or the top of the species. We probably want humans to make the top decisions about where we're going, what the strategy is, and so on, and some of these star-economy-type jobs. But I think that will leave almost everybody unemployable. I would tend to agree. The jobs that are left are not going to be enough to run the whole economy. Right.

The great decoupling is that GDP will be uncorrelated with human labor. Yeah. Where capital subsumes all labor. And I do tend to agree that celebrity culture is not really going anywhere. Some of the jobs that might have more dignity are going to be a little bit more local. So for instance, your local coffee shop: yes, the local coffee shop, the mom-and-pop bake shop that you love to go to. You will probably

choose to patronize a local shop that has that human element, even if they have robots helping in the back, because you still want to get your coffee and then go talk to Sue at the newsstand or whatever. There's always going to be some element of human preference. Now, to your point, though, in many cases the rational economic choice is to say, well, the machine can still do it much cheaper at a certain point.

And while some of these kinds of more traditional human-to-human interactions will still have a huge amount of value, I wouldn't count on that; I think that's kind of the takeaway. Yeah. Don't put all your eggs in that basket. Will we have money in the future? Will we need money in the future? Or will we just be able to enjoy an abundance of everything? Money is one of the most useful inventions that we've ever come up with.

So I don't think it's going away anytime soon. As long as there's limited property in Malibu (granted, Malibu is on fire right now), as long as there's limited desirable property, a finite number of superyachts, a finite number of beachfront properties, there will always be a need for what money provides: a storehold of wealth, a medium of exchange, and a unit of account.

One of the most common questions I get is: why don't we switch to what's called a resource-based economy, which is basically the idea that we should get rid of money and switch back to barter? And I know that that's kind of a condescending paraphrasing of the RBE idea, but it's because I don't take RBE very seriously. Some people say money is the root of all evil, but it serves a very important social and coordination function.

Part of the reason that it's gotten confusing, I think, is because David Graeber, an anthropologist, popularized the idea that the purpose of money was to serve as debt. That's kind of a harebrained idea. It's not very well supported, and it's not accepted by actual economists; David Graeber, for instance, was not an economist. And there's also plenty of evidence that money predates the concept of debt,

sometimes by thousands of years, if not tens of thousands of years. Just because money has been useful for servicing debt doesn't mean that that's its primary purpose. Again, the definition of money is: unit of account, medium of exchange, storehold of wealth. There's a book that I was just reading called The Bitcoin Standard, which proposes that Bitcoin might be a great replacement for gold as the global reserve currency. But yeah, money's not going anywhere. I think that's right.

Barter is incredibly inefficient. If I've got a chicken and you want a potato, we've got nothing to work with, whereas we can both turn those things into money and then do a deal. Yep. So we should get to this question about how the abundance gets shared, because I agree with you that in a future economy where AI has replaced human jobs for most people, it's going to be creating massive amounts of wealth. So there are massive amounts of wealth being generated.

And surely we should all be able to share in it and have a wonderful, abundant world. And how do we do that? The obvious way, which is repugnant to many people, is, well, you just sort of have a government step in and share it out. So you tax people who are generating the wealth and you give the wealth out to everybody else. Actually, I can't get away from the fact that some variant of that is what we're going to have to do.

I'm a capitalist. I don't think it's a terrific idea to have governments determining what we all do all the time. Governments have a role, but not to run every aspect of the economy because nobody can do that. So some form of... not universal basic income, but progressive, comfortable income perhaps, is what we're going to have to have. We're going to have to have a taxation on the entities or the individuals who own everything.

You could easily have a world where a small number of people own a vast amount of the world economy, and those people need to be taxed to provide the resources for everybody else, because otherwise everybody else dies, which isn't good for anybody. Yeah. The metaphor that I've started using is some sort of redistributive policy would be tantamount to alimony after the divorce. It's not meant to be forever. As I mentioned earlier, we should find market-based solutions. And yes.

If everything that I think will happen does happen, and of course it remains to be seen: we haven't seen unemployment spiking yet. It could be a few years off. It might never happen; all of the things that we're speculating about here might never come to pass. But I would be shocked if we don't see a spike in unemployment. And assuming that we do, we will probably see governments enacting stopgap measures. Even ancient Rome had the grain dole. They figured out that when people can't work, they get angry.

And I have what I call my theory of economic agency, which is that in many cases, political unrest is inversely correlated with the economic agency of your citizens. The less economic agency they have, the angrier they get. That's why the French launched the French Revolution. It's why the Arab Spring happened. It's why the Bolshevik Revolution happened. We've seen this pattern play out again and again, particularly in Western and European history. And to take a quick step back:

The definition of economic agency is your ability to influence your economic destiny through property rights, job rights, the ability to vote, those sorts of things. So authoritarian regimes, communist regimes, those types of regimes tend to keep people in a very low state of economic agency. And what we're seeing right now is technology and automation are already eroding the economic agency of people in the West.

As automation technology started taking off in the 90s, real wage growth slowed down even further than it already had since the 70s, due to other economic changes. If that trend continues, and by the way, this is not theoretical.

We have the data going back 40 years showing that economic agency of workers has been slowly declining in the West. And that's not to say that technology is the only reason. There's a lot of institutional reasons, for instance, all the neoliberal policies. If we can't earn money from selling our labor, how else can we have economic agency? My current hypothesis is that we'll move to an investment-based future. What I mean by that is...

Let's just assume that everything is going the way that we think that it's going. And that is that capital subsumes labor. And what that means is that the value of your labor basically drops to zero. The only thing that really matters in the future is capital.

Who owns the robots? Who owns the AI? Who owns the data centers? Now, obviously, there are plenty of other productive assets. Land will still be valuable. There are plenty of other things that are going to be valuable. But unless you have a piece of that pie, you don't have a way to participate in the economy. So one of the ways that I think we can shore up economic agency is by bringing more things into new kinds of markets.

This is where technologies like blockchain, decentralized autonomous organizations, and those sorts of new potential investment vehicles might create a far more inclusive market economy in the future. Here's one way of thinking about it.

The vast majority of businesses that exist, you can't invest in. There are only some 8,000 publicly listed companies that you can invest in on the stock exchanges. Now, granted, something like 70 to 90% of the wealth in the world is represented on stock exchanges, but what about the other 10 to 30%?

A lot of that is stuff that you just can't invest in. You can't invest in your local bakery. You can't invest in your local forest, or your local data center, or your local solar farm. One thing that I think is that we can use these new technologies here. Also, if AI is intelligent enough to dislocate jobs, it's also intelligent enough to run these resources.

So then you allow people to buy and manage these resources in a decentralized manner, and then you have a market-based solution that allows the collective social brain, the hive mind of humanity, to efficiently run all these resources, rather than using the government to run everything. I don't think that nationalizing resources is the way to go. That sounds as if you want capital to replace income

for everybody, which is fine, except capital tends to get lumpy: it tends to be owned disproportionately by a small number of people. And you could see an economy which is producing vast amounts of wealth, so the economy per se is much, much richer than any economy now, but most of that wealth is likely to be in a small number of hands, because that's the way capital tends to work. What's been the great saviour?

What confounded Marx, really, was that actually everybody had a pretty decent income. I mean, not everybody, but most people. Still, a very small number of people owned most of the capital. Everybody had a good enough level of income that we never got anywhere near major revolutionary pressure in the West after Marx. So he was wrong about that. Now imagine a future world where nobody, or almost nobody, can make an income, and we're relying on everybody having capital.

If capital carries on behaving the way it has done and it sticks in the hands of a small number of people, that's going to be a grim future for most people. And it's going to lead us back to where I started, which is that some form of redistribution is going to be necessary.

My great hope, the reason why I think that fully automated luxury capitalism is a possibility, is that the amount of income you would need to have a really good standard of living, say a middle-class American standard of living, wouldn't be all that high. Because if you take the humans out of the production process, they're between a third and a half of the cost of most of it. If you use AI to make everything very efficient, and energy is virtually free, too cheap to meter,

then the cost of producing the really good standard of living is very low. And therefore, the cost of taxing the people who have the assets to get that low cost but high quality standard of living, that's not onerous.

That seems to me to be our best hope: that we will have to have a distributive economy, but it won't be expensive. It won't be an expensive burden on the people who own the assets. We will probably still have a position where there is massive inequality, much greater inequality than there is now.

But it won't worry people too much, because everybody will be living to a great standard. Now, I can see that might come across as Panglossian and ridiculous. But I don't see the alternative. I don't see how else you get from a world with lots and lots of abundance, but most of it owned by a small number of people, to one that avoids the awful situation where most people die.

Right. So the good news is the world has gone through that transition several times. And taking a step back, the principle that capital concentrates is underpinned by a larger principle, which is that power concentrates. Power begets more power, which allows you to tip the scales further in your favor, and you can create unsustainable systems. The Roman Empire is a prime example.

Wealthy Roman landowners ended up owning all the farmland, and so ordinary Roman citizens ended up with really garbage land that was not desirable. That was one of the reasons that people stopped investing in the empire and stopped believing in the empire. It fragmented into a decentralized format, and feudalism was kind of what eventually rose out of that. The Russian aristocracy had the same problem.

The absolutist monarchies of Europe of the 17th through early 20th centuries all eventually disintegrated, in part due to technological changes and the lack of incentive the powers that be had to adapt. And what has typically happened in the long run is that, again, economic agency gets low enough, people get angry enough, and things start breaking.

Hopefully we don't have to get to that point. And one of the things that I think can help rebalance power, either before or after people get angry enough, is some of those new technologies like blockchain that allow for radical transparency, accountability, and public records, specifically property rights, that might not even be mediated by government. One of the promises that communism made was a stateless society.

I'm not advocating for a stateless society, but certain technologies could allow us to take back some power, not just from those who currently have a concentration of power and wealth, but also from the state. In particular, I think decentralizing property records is going to be one of the key things. Then there's the transparency and accountability that a blockchain-based economy or governance system could allow. Here's a brief thought experiment.

Imagine that we do end up with a government and economy that is largely centered on, or at least empowered by, blockchain technology, which is a consensus-based technology. Everyone says, okay, there's a maximum wealth that we want to allow: anyone over a billion dollars equivalent has 50% of everything they make beyond that taxed and redistributed. That becomes algorithmically enforced, without politicians.
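As a thought experiment, the rule described here, 50% of everything earned beyond a billion-dollar cap taxed and redistributed, is simple enough to state in code. The sketch below is purely hypothetical Python, not an actual smart contract; the function name and the choice to tax only new income above the cap are illustrative assumptions:

```python
# Hypothetical sketch of the algorithmic wealth-tax rule described above:
# 50% of any income that pushes wealth beyond a $1B cap is redistributed.

WEALTH_CAP = 1_000_000_000  # one billion dollars equivalent
TAX_RATE = 0.50             # rate applied above the cap

def levy(wealth: float, new_income: float) -> tuple[float, float]:
    """Return (income kept, amount redistributed) for one period."""
    # Only the portion of new income above the cap is taxable.
    taxable = max(0.0, (wealth + new_income) - WEALTH_CAP)
    taxable = min(taxable, new_income)  # tax new income, not the existing stock
    redistributed = taxable * TAX_RATE
    return new_income - redistributed, redistributed

# Someone already at $2B earning $100M keeps $50M; $50M is redistributed.
kept, pooled = levy(2_000_000_000, 100_000_000)
```

In the scenario being described, logic like this would live in a smart contract rather than a script, so that no politician or power broker could change the parameters unilaterally; the consensus mechanism itself enforces the rule.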

That becomes a smart contract. That becomes AI-enforced. That is not something the power brokers, the power elite, have any real agency over. However, what I will say is that just because we can imagine a technological solution doesn't mean we can easily get from here to there. But when you look at the impact that information technologies in particular have had on societies, starting with the printing press, then the newspaper, radio, and the internet.

Information technologies tend to radically reshape politics over time. It's difficult to predict how the confluence of quantum computing, blockchain technology, and artificial intelligence, combined with robotics and everything else that's coming, will play out. Just the first-order effects of one technology are difficult enough to predict, let alone three or five, and then there are the second- and third-order impacts. But the possibilities are endless.

I think blockchain may well have an interesting and important role. I don't think it's going to solve the distribution issue. It may facilitate whatever mechanism people use. I suspect that the thing that will solve the distribution issue is the blunt fact that people will realize, oops, everybody's going to become unemployed in the next one, two, three years. And oops, that means everybody dies.

unless we find a new system. Your suggestion that you're allowed to own up to a billion dollars, and beyond that you get taxed at 50%, that's the kind of thing that would do it. That would easily generate enough income for some redistributive entity in the center, whether it's a state or a blockchain-mediated collective or whatever it happens to be. It would easily generate enough assets to give a really good standard of living to everybody. There's going to be an enormous change.

Imagine certain very famous billionaires at the moment, and the reaction they might have to being told: you can have a billion, and after that we tax you at 50%. I think they'd go bonkers. But we're going to have to make a decision along those lines.

And humanity is quite good at reinventing itself on the turn of a sixpence when it really has to. And so I imagine we probably will. It's just that it will be very bumpy. And it will be particularly bumpy if we haven't given the matter much thought before we get there.

And right now, we're not giving the matter much thought. This conversation we're having, I don't think very many people have this detailed conversation about what we do if and when technological unemployment arrives. And it might arrive in 10 years. It might be 20, it might be 100, but it might be in 10.

And nobody's thinking about it. I'm just endlessly amazed by that. The good news is my YouTube audience are very concerned about this. And actually, I've gotten a huge number of ideas just from the questions that they've asked. My channel has 160,000 subscribers and 10 million plus views. Now, granted, that's still less than 1% of the entire population of the world, less than 1% of America alone. But at the same time...

People like us, we are listened to by universities, by students, by people with influence. And it does take time to build consensus. Here's an example of how consensus building happens over time. The history of neoliberalism started in the 30s. Some of the ideas were refined in the 40s. And then it wasn't until the 50s and 60s that they said, OK, well.

Let's actually just train the next generation of economists with these principles of neoliberalism so that come the 70s and 80s, it became the de facto economic doctrine. I personally don't think that we have that much time. At the same time, we do have exponential technologies that they didn't have access to. We have internet, we have email, we have Substack and YouTube and Twitter and Apple Podcasts.

We have all kinds of ways to distribute these ideas. With that being said, there is still a tremendous amount of work to do to refine these ideas, to create new narratives, new packaging, to make them understandable. That's one of the reasons we came up with that metaphor of the great decoupling. It's a divorce. And okay, divorce is traumatic. The only people who win are the lawyers. And then afterwards, how is life going to be?

If labor gets a divorce from business, how are you going to take care of yourself? What are you going to do with your time? Anyone who's been through a divorce knows that it is painful. It is difficult. But on the other side, both parties often end up happier. Not always.

But it certainly can work out that way. Business gets to go off into the sunset with its hot new girlfriend: the AI and the robots that don't complain, don't need HR, and don't sue you. And then meanwhile, labor is liberated to say, you know what?

I don't feel like working today. I want to go down to the river or I want to play with my dog or I want to play with my kids. And I think that once people see that possibility, there will be, I hope, a lot more political willpower to make some of these changes happen. So what's going to be funding our future lifestyle is an alimony payment. For a little while. Until we come up with a better market solution. But yeah.

I think we're owed some alimony, honestly, given the amount of value that many of us have generated for the companies we've worked for. I certainly didn't get paid the millions of dollars that I saved my companies through my automation work and everything. I got paid pennies on the dollar of the value I generated for them. So I kind of feel like, if the universe has any sense of justice, we're all probably owed a little bit.

Now, of course, wish in one hand, as they say. So what's the feedback been? You say you've got 160,000 subscribers. Are people buying into your theories? Are there any economists or political advisors of note who are joining the conversation? Yeah. There's a professor that I'm good friends with here at Duke University. We're talking quite a lot about the impact that AI is going to have on business. I also collaborate with some professors at Clemson University.

But by and large, the response is, I'll say, mixed, though it's often the loudest, most anxious dogs that bark. Some people, and when I say some, I mean a minority, a very small percentage, really just want to maintain the status quo. They just want to keep working. Don't upset the apple cart. Just keep things as they are. And I'm like, buddy, that's not happening. Put it this way: go back 200 years and explain to a farmer what a computer programmer does.

Just think of the number of iterations of, okay, this is how the future is going to work. They wouldn't even have been able to understand what you were talking about. First you'd have to explain electricity, then digital communication, then writing computer code. There's so much you'd have to unpack to get there.

This is a cognitive bias called normalcy bias. Throughout most of human evolution, we have been local and geometric; we don't really have an intuition for thinking global and exponential. Unfortunately, we are in the middle of a global and exponential change, which means that most people are not, I won't even say cognitively equipped, but not even emotionally equipped to really wrap their heads around what's happening.

It's all about, okay, what do I see in my real life? And when you step out of this podcast and go to the store, there's still a teenager checking you out at the cash register. There's still an attendant managing the gas station. So there's a lot of cognitive dissonance between people working at the cutting edge, seeing what's coming on futurist podcasts, versus what people are experiencing in real life.

Once people start experiencing the reality, if unemployment starts, then it becomes real to people and then the conversation starts. Fortunately, we're already doing the legwork by having these conversations and planting those seeds. It'll take a while for those seeds to come to fruition, but I'm personally not particularly worried. That's interesting. I think it is quite likely to be very bumpy.

As somebody who's had the misfortune of going through a divorce, I'm not wild about that metaphor, but I do get the point. My final question for you, David, is a challenge that David often puts to me, which is, are we going to get the technological singularity before we get the economic singularity?

In other words, is the end of employment for humans, or most humans, an AI-complete situation? Do you need superintelligence before AI is smart enough to replace humans in almost all jobs, to make it virtually impossible for humans to earn money in a job? I guess it really depends on definitions. For me, the technological singularity is what Ray Kurzweil laid out, which is that you get compounding returns until your desktop computer is smarter than all of humanity combined.

That's scheduled for around 2044, 2045. So we've got about 20 years until your desktop computer can run the whole planet on its own anyway. Whatever happens, I suspect that the threshold for the economic singularity is probably lower than that level of intelligence. Because we humans are not particularly bright or clever on our best days, certainly not on our worst days, and the economy is still running well enough. We're not necessarily doing the best job of being custodians of the planet.

But I suspect that by 2030, we're probably going to see some major changes. And as we discussed at the beginning, 2035 seems like when these technologies are going to be pretty deeply integrated into society. That's still 10 years before the technological singularity really takes off. So if I had to guess, I would say the economic singularity happens five to ten years before the technological singularity.

My final question is, where should people look for more information? And which other researchers have you actually been inspired by? So that others who are listening to this might be inspired by them too. I do most of my work on Substack today, so it's daveschapp.substack.com. I'm also somewhat active on Twitter, although Twitter does not really allow for deep intellectual discourse.

And then as far as other people, one of the books that I'm reading that's been phenomenal is Why Nations Fail. I just picked that one up, and I recommend it to everyone. By Acemoglu and Robinson, that one? Correct, yes. The Bitcoin Standard is another good one. Changing World Order by Ray Dalio is another amazing book for understanding these large-scale debt cycles. And then Vulture Capitalism by Grace Blakeley.

Those are the top-of-mind books that I've been reading to try to wrap my head around everything that's coming, at least on the economic side. Well, there are lots of interesting things there that we'll put in the show notes. So I want to thank you for sharing your time with us. And maybe we'll bring you back another day to discuss psychedelic treatments as a totally separate topic. Absolutely, would love to come back anytime. Thanks for joining us, David. That was good fun.

This transcript was generated by Metacast using AI and may contain inaccuracies. Learn more about transcripts.