
855: Exponential Views on AI and Humanity’s Greatest Challenges, with Azeem Azhar

Jan 21, 2025 · 1 hr 28 min

Episode description

How can we use AI to solve global problems like the environmental crisis, and how will future AI start to manage increasingly complex workflows? Famed futurist Azeem Azhar talks to Jon Krohn about the future of AI as a force for good, how we can stay mindful of an evolving job market, and Azeem's favorite tools for automating his workflows.

This episode is brought to you by ODSC, the Open Data Science Conference. Interested in sponsoring a SuperDataScience Podcast episode? Email [email protected] for sponsorship information.

In this episode you will learn:
(05:43) Azeem Azhar's vision for AI's future
(14:16) How to prepare for technological shifts
(20:35) How to be more like an AI-first company
(38:46) The tools Azeem Azhar uses regularly
(50:09) The benefits and risks of transitioning to renewable energy
(1:09:28) Opportunities in the future workplace

Additional materials: www.superdatascience.com/855

Transcript

This is episode number 855 with the famed futurist Azeem Azhar. Today's episode is brought to you by ODSC, the Open Data Science Conference. Welcome to the Super Data Science Podcast, the most listened to podcast in the data science industry.

Each week, we bring you fun and inspiring people and ideas exploring the cutting edge of machine learning, AI, and related technologies that are transforming our world for the better. I'm your host, Jon Krohn. Thanks for joining me today. And now, let's make the complex simple.

Welcome back to the Super Data Science Podcast. Today I'm over the moon to have the famed futurist Azeem Azhar joining me on the show. Azeem is creator of the invaluable Exponential View newsletter, which has over 100,000 subscribers. He hosts the Exponential View podcast, which has had amazing guests, including people like Tony Blair and Andrew Ng. He hosted the Bloomberg TV show Exponentially, with guests like Sam Altman. He holds fellowships at Stanford University and Harvard Business School.

He was founder and CEO of PeerIndex, a venture capital-backed machine learning startup that was acquired in 2014. And he holds an MA in PPE, which is Politics, Philosophy, and Economics, from the University of Oxford. He also wrote the best-selling book The Exponential Age. I will personally ship five physical copies of Azeem Azhar's Exponential Age book to people who comment on or reshare the LinkedIn post that I publish about Azeem's episode from my personal LinkedIn account today.

Simply mention in your comment or reshare that you'd like the book. I'll hold a draw to select the five book winners next week so you have until Sunday, January 26th to get involved with this book contest. Today's episode should appeal to absolutely any listener. In today's episode, Azeem details the exponential forces that will overhaul society in the coming decades, why AI is essential for solving humanity's biggest challenges, he talks about his own cutting-edge personal use of AI agents, LLMs, and automation,

And he fills us in on why there's no solid ground in the future of work, but how we can nevertheless adapt to the coming changes. All right, you ready for this exponential episode? Let's go.

Azeem, welcome to the Super Data Science Podcast. It's surreal to have you on the show because I've been a huge fan of yours for nearly a decade now. I was a subscriber to your Exponential View newsletter nine years ago, and now it's got over 105,000 subscribers. Azeem, how are you doing today? I'm doing super well and excited to have one of the OGs with me. Thank you so much, Jon. Appreciate it. Nice. Where are you calling in from today, Azeem? I'm up in Hampstead in North London, which is where I

live when I'm not on a plane visiting the US. Nice. Yeah. So, in addition to the newsletter that you have, The Exponential View, you also have a bestselling book called The Exponential Age. And you've built a whole brand out of the word exponential, such that you could even identify not just as a futurist, but as an exponentialist. Could you define for us what this exponentialism is

and elaborate on how this perspective shapes your analysis of technological trends and societal changes. Yeah, absolutely. I mean, exponentialism is why we have data scientists. Exponential technologies are ones which get much, much better and much cheaper every year. The most important of those, of course, has been silicon chips. And alongside silicon chips, hard drives and data storage and bandwidth have gotten cheaper and cheaper every year.

Because they get cheaper, they get more widely used in our economies and in our everyday lives. And the notion of an exponential technology didn't really make sense until the mid to late 70s. But now we're at this strange period where lots of technologies have that characteristic where they get better and better and better and faster and faster and faster. And the reason I say

exponentialism is why we have data scientists is because we need data scientists because there is so much data. You know, on my home network, I send a terabyte of data around every couple of weeks because there are so many devices just talking to each other. And that comes as a result of exponential technologies. Yeah, it is wild. It's something that I talk about on the show a lot.

Things like dramatically cheaper compute, dramatically cheaper data storage, and exponentially more data being available across all those kinds of devices that you're describing, self-driving cars, in-home sensors, industrial sensors. For people like listeners to this show, this provides a wind at our back in terms of what...

the out-of-the-box foundation AI models we'll be able to use, as well as the ones that we'll be able to fine-tune for our specific purposes. So yeah, it is a really exciting trend. And, if people have been listening to the show for years, they would have noticed that when I first started hosting the show four years ago, there was a question that I saved for my really big guests

that I thought would have the most mind-blowing answers. I'd say to them, you know, we have this wind at our back. We have all these exponential factors. It seems like the world will be dramatically different in the coming decades. What kind of vision do you have for the world? And the crazy thing was, Azeem, that these people, some of the best-known names in data science and AI, often had no answer at all.

It seems like, and this makes sense given their role, if they're an academic, they are thinking kind of one to two years in the future in terms of their next grant application. What do they need for that grant application? And so it's exciting for me to have you on the show because that is the kind of timeline that you're thinking about all the time. It's kind of 10 to 30 years in the future. So where do you think we're headed?

you know, is this going to be a horrible dystopia? Are we going to have a utopia for maybe many people on the planet or maybe even theoretically the whole planet in our lifetime? Yeah, where do you see these exponential trends taking us in the coming decades? You know, I think it's really hard to answer the question that you asked the academic data scientists, because it's just really hard to look out into that future and to...

extend the curve that we're on. And I think back to my last startup, which was a machine learning company. We looked at lots of data. We were acquired in 2014. When I hired my first data scientist in early 2010,

That job didn't really exist. There were a handful of people with that job title. We weren't sure what to call it. Do you call it computational statistics? Do you call it machine learning? Do you call it data cleaning? And if listeners go to Google Trends, which I'd encourage them to do, and set it to worldwide and 2004 to present and type in the term data science, you will see that in July 2010, when I hired my first data scientist,

it says less than one. It's basically zero searches. And today it's 94. You know, read into that what you will. So it's really difficult, I think, to look forward that 10 or 15 years to say, you know, these things will be true and the world will be that different. And the way that I do this is I try to take people back to things that they know and recognize. I mean, do you remember the world when...

You know, you could hold an entire set of customer records in 100 megabytes of RAM. And the answer is, depending on how old you are, you might. Over my shoulder, for people who are watching on the video, I've got my second computer, which has 32 kilobytes of RAM on it. So when you ask me the question, where are things going over 10 to 30 years? I think that there are certain technology processes that we can

have some confidence over. Those processes being that we will continue to deliver more compute for lower cost, which will mean the amount of storage and data we generate will increase really, really significantly. And, you know, we can draw those conclusions. I did a calculation looking at the total number of

I guess you could call them flops, computer instructions per second, in the world. From 1972, which is generally where I start, because it was about the time the Intel 4004 was released and the F-14 Tomcat had the sort of first integrated CPU, and it's also the year I was born. And the amount of flops in the world

has grown by about 65% a year compounded for 52 or 53 years. And the question to ask yourself is, does that continue or does it stop? And which is a more radical assumption? And I think it's more radical to assume it stops. And so when you ask me, where are we going to be in 10 to 30 years? I have a very, very simple, we can call it heuristic,

which is let's kind of just draw the line up for a bit longer and use that as our baseline. And so what is 30 years? It's a 10 to the 6 increase in compute. It is a vast increase in bandwidth. It's a vast increase in the amount of data that we're generating. Where are we going to generate it from? Well, ask yourself the question today. If you're someone sitting listening to this

podcast and you've got any number of Kafka queues and you've got Datadog running and there's billions of events coming through every hour, go and ask yourself, well, 15 years ago, what number were you dealing with per hour? It may have been 100. So you've already lived through it. And let's extend that out and recognize that that's where we will likely go. We can then add some criticality to the question afterwards, but I think that's a good place to start.
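Azeem's "draw the line up a bit longer" heuristic is easy to sanity-check in a couple of lines. A back-of-the-envelope sketch, using the roughly 65%-a-year growth figure from the conversation (the horizons are illustrative):

```python
# Compound growth of worldwide compute at the ~65%/year rate
# Azeem cites for 1972 onward. A back-of-the-envelope sketch.

def growth_factor(annual_rate: float, years: int) -> float:
    """Total multiple after compounding annual_rate for the given number of years."""
    return (1 + annual_rate) ** years

RATE = 0.65  # ~65% per year, compounded

for horizon in (10, 30, 52):
    factor = growth_factor(RATE, horizon)
    print(f"{horizon:>2} years at {RATE:.0%}/yr -> ~{factor:,.0f}x")
```

At 65% a year, the multiple is roughly 150x after a decade and on the order of a million-fold after three decades, which is the scale of change being used as a baseline here.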

I think it's great. And there's no question for me. You framed that, actually; we had a pre-call before doing this interview, and you made one of those same points there, which is: what is more likely, that this curve that's been going on for decades is going to continue, or that it's going to end? And of course, it seems more obvious that it's going to continue, even though it is that dramatic, 65%, it sounded like you said, increase year over year in flops. And people will say things like, oh, well, Moore's law is coming to an end because, you know,

electrons will start jumping from circuit to circuit if we try to make the gates any smaller on chips. But of course, that doesn't matter because the processes that are creating the chips will continue to get cheaper and we'll be able to have more and more operations happening on larger chips or you can have more and more GPUs running in parallel and come up with ways of having information.

flow quickly in parallel. So it's, you know, this trend won't end just because we can't continue to shrink. I completely agree with you. Now, there is a criticism, a challenge, which is...

It's 2025. The turkeys are going to feel the same way all the way up to the 26th of November 2025, which, as you know, is the day before Thanksgiving. And then the trend of being loved and looked after and fattened ends very abruptly. So, you know, of course, maybe that does exist. But I think you put your absolute finger on it, which is that

This isn't magic. This is a series of underlying processes. So we need to deliver processing more cheaply and we're struggling with quantum effects. So instead, we scale out and we parallelize. And then we have issues with interconnect between GPUs. So we build InfiniBand or whatever it is to kind of increase the speed with which we move things across. And the question is, what drives that? And what drives it is

human ingenuity and financial incentive and growing markets. And soon it will also be AI support to help us solve those problems more and more. Excited to announce, my friends, that the 10th annual ODSC East, the Open Data Science Conference East, the one conference you don't want to miss in 2025, is returning to Boston from May 13th to 15th. And I'll be there leading a hands-on workshop on agentic AI.

Plus, you can kickstart your learning tomorrow. Your ODSC East Pass includes the AI Builders Summit running from January 15th to February 6th, where you can dive into LLMs, RAG, and AI agents. No need to wait until May. No matter your skill level, ODSC East will help you gain the AI expertise to take your career to the next level. Don't miss the early bird discount, ending soon. Learn more at odsc.com slash Boston.

So following on from this idea of exponential growth, humans, and maybe turkeys as well, seem to be poor at being able to imagine that they're on this exponential curve. And so Ray Kurzweil, for example, another famous futurist, said that our intuition about the future is linear, but the reality of information technology, as we've already been discussing in this episode, is exponential.

And similarly, in your book, in chapter three, you talked about how, for example, when the COVID pandemic was unfurling around the world in 2020, it was experiencing exponential growth. And I experienced that in real time: I was refreshing many times every day, probably a hundred times a day, to see how many more infections there were in New York State.

It was very difficult for me, even as somebody with a lot of statistical background who'd been a data scientist for a decade. Even for me, it was difficult to process how this exponential change was happening. Given the difficulties that even experts face in predicting exponential growth, or even having intuitions about it, how can businesses, policymakers, and our listeners

better prepare for future technological shifts? I agree, it's really difficult to normalize and rationalize in your head the speed of that change. I do think that

It's quite commonplace. A very simple exponential process is compound interest. And virtually all of us start saving for our pensions or 401ks or whatever it happens to be too late. The right time to start is when you're 23 and you just put 10 bucks a month away, knowing it's going to compound. And I think many of us are guilty of that. I am as well. I think there are companies who have...

internalized this possibility. And I think the technology industry, as it comes out of the Bay Area, has very much done that. They have relied on understanding that Moore's law keeps driving prices down and that you aren't really going to

systemically run out of capacity or compute. You may have crunch periods where you can't onboard the machines or the hard drives or the storage fast enough. But in general, you won't run out. So, I mean, I think that one of the ways you have to understand this is to understand the processes, and to understand that these processes absolutely exist. And I think it's really unhelpful,

when you're trying to make sense of this world for people to think in linear terms. And I still see it. And I'm sure you may see it when you're helping clients or people at work and you see their business plan and it shows a sort of a fixed increment of growth and nothing grows that way. Everything follows a phase of a logistic S-curve where you have an exponential phase that tails off. Nothing is linear except for our birthdays, one to two to three to four.
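Azeem's point that "everything follows a phase of a logistic S-curve" can be sketched numerically: early on, successive values grow by a near-constant ratio, like an exponential; later, the curve flattens toward its ceiling. The capacity, rate, and midpoint values below are arbitrary illustrations, not anything from the conversation:

```python
import math

def logistic(t: float, capacity: float, rate: float, midpoint: float) -> float:
    """Standard logistic curve: near-exponential early, saturating late."""
    return capacity / (1 + math.exp(-rate * (t - midpoint)))

# Early phase: successive values multiply by a near-constant ratio.
early = [logistic(t, capacity=100, rate=1.0, midpoint=10) for t in range(0, 4)]
ratios = [b / a for a, b in zip(early, early[1:])]
print("early-phase ratios:", [round(r, 3) for r in ratios])

# Late phase: values flatten out just below the capacity of 100.
late = [logistic(t, capacity=100, rate=1.0, midpoint=10) for t in (18, 20, 22)]
print("late-phase values:", [round(v, 2) for v in late])
```

The early-phase ratios all sit close to e (about 2.72), the compounding behavior of an exponential, while the late-phase values crowd up against 100, which is the tail-off Azeem describes.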

So I think a lot of the tools are to hand, but it is very difficult. And what you need to do at these moments is perhaps go back to first principles thinking and perhaps say, look, the heuristics we've used were just that. They were really helpful in a world that doesn't move as quickly. But in a world that moves this quickly, we have to go back to heuristics, sorry, pardon me, first principles thinking. And the thing that's so funny, Jon, is that

Most people who are listening to this podcast will have, beyond their experience with COVID, they will have lived through exponential technologies because they will have lived through upgrading their iPhone or their Android phone every two years and getting twice as much compute for the dollar they spend. They will have lived through, if they're data scientists, their data array or their data lake going from

a gigabyte to 100 gigabytes to 10 terabytes to a petabyte and beyond, right? They've literally witnessed it. And yet it still becomes quite difficult. I think going back to first principles is a really helpful way of doing that. Yeah, yeah, yeah. And so in terms of something that people could be doing, this idea of first principles in this instance here, so that's literally thinking about, you know, sketching for yourself.

those kinds of changes and thinking about how you adapted to those changes and making projections based on that? Yeah, I think that's a really good way of doing it. I mean, when I do my own planning and build models of where the business might go or where usage might go, and I've done this for more than 20 years, I've never put in

linear increases, like it'll go up by 20, it'll go up by 20. I've always gone in and put in a dynamic percentage, because a percentage compounds. And one of the things that drives these sorts of exponentials is feedback loops. So the reason something accelerates, I mean, let's think about silicon chips, right? Why did chips during the 80s and the 90s and the 2000s

get better and faster? It was because there was a feedback loop. When Intel came out with a new chip, it allowed Microsoft to deliver better tooling on Windows, which gave people an incentive to upgrade their computers, which put money in the system, which allowed Intel to develop a new chip, which allowed Microsoft to push out more features. And that feedback loop accelerates. And so sometimes when I do my planning, I will also try to put those types of feedback loops in

Because an outcome of a feedback loop will often be a curve that ultimately has that sort of quality of taking off. And in a lot of places, you end up with these linear forecasts. And, you know, if you're sitting there and you're thinking, listen, I need to put in my budget request for next year for storage on S3, and I also need to give some indication of what's going to happen the year after and the year after that and the year after that.

If it's growing linearly, I think you'll be making incredibly unrealistic assumptions given what the evidence has shown us. So you have to go back and start to say, how do I put in more realistic assumptions, even if it's going to freak the CFO out, because that's what history has shown us. One thing that's interesting that you mentioned there was corporations. Organizations like Intel and Microsoft, in the 80s and 90s, were unquestionably at the forefront of hardware and software.
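The S3 budgeting example above is easy to make concrete. A sketch comparing a linear forecast with a compound one; the starting volume, linear step, and growth rate are all hypothetical numbers for illustration:

```python
# Linear vs. compound storage forecast, as in the S3 budget-request
# example. All figures below are hypothetical.

start_tb = 50          # current storage in terabytes (assumed)
linear_step_tb = 10    # "it'll go up by 10 each year" (assumed)
annual_growth = 0.60   # 60% compound growth per year (assumed)

print(f"{'year':>4} {'linear (TB)':>12} {'compound (TB)':>14}")
for year in range(1, 5):
    linear = start_tb + linear_step_tb * year
    compound = start_tb * (1 + annual_growth) ** year
    print(f"{year:>4} {linear:>12.0f} {compound:>14.0f}")
```

By year four the compound line (about 328 TB) is already more than three times the linear one (90 TB), which is the gap between a straight-line budget request and what a compounding history actually implies.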

In recent years, something that you and I talked about in our pre-call was how AI adopting companies have had much faster growth, maybe even exponential growth in recent years relative to companies that haven't adopted AI. So I'd love to hear more about this exponential gap and the organizations that have been doing it right and everyone else who's being left behind.

What can we do to close that gap? What kind of strategies could our listeners' organizations employ to try to catch up to, or eventually get on the same trajectory as, AI-first companies? Sure. Well, absolutely. I mean, I think the exponential gap is a really important concept. So we'll talk about it conceptually and then I'll try to get practical. So conceptually, it's just that the technology races away faster than the norms and the rules and the

processes that we have can handle it. And I think the easiest example I can give is: what are the rules about using a smartphone at dinner? Because when I grew up, there were no rules, because there were no smartphones. And then, in 2007, smartphones arrived. And by 2014, parents are screaming at their children, and we're trying to make up the rules as we go along. That's the exponential gap, in a really prosaic way that we all understand, either as parents or as kids who've lived through it.

And what's happening with AI, I think, is really an interesting example. So the data point you mentioned about AI companies growing faster is that if you look at fast-growing software-as-a-service, or SaaS, companies and you compare them to fast-growing AI companies, SaaS companies took about 60 months

to get to $30 million of annualized revenues, whereas AI companies are taking about 20 months to get there. They're going much, much faster. And you think about a company like Anthropic, which is not even the biggest AI foundation model company. It got to a billion dollars in sales last year in 2024, which I think is its third or fourth year of operations, which is pretty remarkable. And so I think that there is

a real opportunity when you sort of apply AI to absolutely drive and change your business. But let's be clear, right? These companies are, on the one hand, they're either making the tools, and so they're growing really quickly because everyone needs the tools. That's a case of OpenAI or Anthropic. Or on the other hand, they're brand new companies operating in

CRM or data cleaning or sales automation that have been built from scratch using AI and are just able to deliver better products because they have a better technology. So I think that kind of lays out the ground, but I'm not sure if I directly tackled the question you asked me, which was about the gap. So do you want to put it back to me? And I'll come back and try to answer that. So the idea is, is there anything we can do?

A lot of our listeners will come from, well, some of them probably do come from Anthropic, for example, but others will come from firms where you don't see the kind of exponential growth that these AI firms are having. I wonder if there are strategies that we can employ in our own organizations that would allow us

to reap some of that exponential growth and kind of catch up to Anthropic in some ways on trajectory. Yeah, well, I wish we could all, at least, you know, if your business and mine can catch up with Anthropic, you and I can meet in the Maldives next Christmas. I think three years to a billion dollars in revenue. Yeah, that would be fun.

There's a lot we don't know about AI as a technology. In the same way, there was a lot that we didn't know about the internet back in the mid 90s, 93, 94, 95, when I started to work with it. The companies and the people who did well were the people who, by and large, started to get into it early. And we learned what good looked like and what good could be like. So one of the things that we...

know about AI is that unlike, say, the metaverse four years ago, it's a real, real thing. There's tons of evidence now that it is real and it's not going to disappear. But the other thing that's true is that it's changing really rapidly. And I'm sure that, you know, the listeners have been thinking, should I be using Claude Sonnet to support my coding, or ChatGPT, or the new o1 model, or DeepSeek, or where do I go?

It's changing so rapidly that the opportunity you have actually is to build the muscle of working with AI. And the muscle of working with AI is both what is it to have these cognitive assistants that are getting better and better at what they do, but it's also how do you build systems that adapt to a world where the underlying tools change so significantly. And it's also about how do you learn about

where they're going to be effective and where they're not? And finally, because they can be used in so many places, how do you prioritize? And the thing about that set of four questions is that it's not yet in an O'Reilly book. It may be one day, but it isn't yet. And so you have to learn that yourselves. And so the way that you as an individual or as a team leader start to close that gap is,

to start to learn and experiment and practice. And I think that there's a right spot for how close you get to the bare technology, where this matters. I think that if you always live up at the level of the finished packaged product, so you get workflows out of Salesforce, the amount of learning you do will be limited. Actually, Salesforce will do all the learning. I don't think you need to get down

as deep as building your own foundation models. Because unless you've got 20 amazing scientists and $10 billion, you ain't going to get very far. Obviously, there are some exceptions to that. But the right spot is enough tools that are semi-finished or maybe are APIs that really allow you to build, play around, learn, develop your practice, and continually invest in that while you're delivering.

I love that answer. And later on in the episode, we're going to talk about the future of work and how our listeners can prepare for the future of work. And so we'll dig into that a bit more, this kind of idea of how the O'Reilly book doesn't exist yet for how we need to run our organizations in this AI world.

Yeah, so we'll get to that later on. But there are a few really great O'Reilly books about working with Gen AI and large language models. I should just say that I've glanced at one of them, because they're often available online, and it was pretty, pretty impressive. So I don't want people to get the impression they shouldn't go to O'Reilly and, you know, have a play with these. There are some good books out there. Oh, no, for sure. Yeah. Oh, yeah. I mean, there are, I don't know the names of them offhand. I create lots of content for the O'Reilly platform,

host conferences there, do trainings there. And so I've come across lots of these books. There are tons of books designed for the whole spectrum of users, you know, whether you're a hands-on practitioner like a software engineer or an AI engineer, there are books for you, of course, and also from other publishers. And, you know, all the way through to point-and-click guides. You know, I'm sure there are whole books. I don't know specifically that this is from O'Reilly, but

There's certainly books out there on prompt engineering, which also, interestingly, this is just a really quick aside, we don't need to spend much time talking about it. But something that seems obvious to me: when we were using GPT-3.5 in the original ChatGPT, now more than two years ago, there was this kind of idea of prompt engineering, and maybe even this supposed $400,000-a-year job of prompt engineer that was perfect for a PhD in literature to take.

We don't hear as much about prompt engineering now, and it's because of reinforcement learning from human feedback: companies like Anthropic and, of course, OpenAI as well have been so good at assimilating data and creating new data that these algorithms just kind of do what you want without you needing to engineer the prompt. You're absolutely right. I mean, it's so fascinating to see how

the better-quality models that you see with Claude Sonnet 3.5 and the various ChatGPT examples go so much further even with minimal prompting. Although I would still say that you can do quite well

if your prompts do get better. But I just think that you kind of get your 90% now without having to prompt engineer. And if you want 95, you have to do a little bit of that. And I do wonder what happened to that literature PhD. Hopefully they didn't take a mortgage out on that salary. Yeah. It's interesting to think, I was just trying to think exponentially there for a moment, about how, you know, if

things like RLHF and the underlying LLMs see exponential improvements over the next 10 years, and given just how much the model's outputs for a given prompt have improved over the last two years, it's interesting to think you could wake up in the morning and say,

to your whatever. It's probably just, you know, in your home everywhere, listening to you, and you say, I'd like to have a great day today. Right. Well, okay, I'm going to let you in on a secret, because I already do that. I mean, I use a load of these models, and we can talk about why I use which one when, but I use Claude on my phone quite regularly, and I will drop my daughters at school. And on the way back, Claude has an audio mode.

And it'll take up to 10 minutes of audio. So I will hit the audio button and I'll drive off and I will download everything in my brain. And there's not much in my brain, so I can do it in 10 minutes. I will say, OK, Claude, I have to, you know, think about a speech I'm giving. Here are the ideas that I've got in my mind. Here's the audience. And that's all I've got to say about it for two minutes. And then I might say, OK, Claude, also, I've got to

renew my car insurance. I've got to pay that parking ticket. I've got to do this. I've got to do that. And I'll keep talking, and it'll be like health stuff, home stuff, cognitively demanding stuff like presentations and speeches. It'll be operational stuff like, you know, remind me to figure out what's happening with the Stripe refund, the issue that we've had. And then at the end, I'll say, right, Claude, I've given you this grab bag of things,

organize it sensibly for me. And it will go off and do that. And then when I get to my desk 15 minutes later, I'll just open the Claude app on my computer and I've got my to-do list, already drawn up. And, you know, for a lot of those tasks, it will have written the letter to appeal the parking fine. It will have ordered my thoughts for the presentation or the speech and explained where the gaps are. So I sort of do that. And it's good enough with the new Claude 3.5 Sonnet

to make a big difference to my day, and hopefully now to the day of everyone who's listening to this show. I get that idea of dictating, but then how do you have it flow forward into these kinds of things like reminders or individual tasks? Where does that surface? That's the gap at the moment. Right now, it's copy and paste. I will just copy these things and paste them across. There hasn't been

a great example that I know of that can grab the context of something as amorphous and confused as a speech, through to a bunch of trivial to-dos, and make sense of how to break them apart. I mean, what Claude will do is it'll say, okay, then you have a list of to-dos. And, you know, in Markdown, it'll

put the double hashes and it'll be big and bold. And then there'll be a list of to-dos in a bullet list. And it generally gets that right. But yeah, we're missing that step to action, which is what all the AI companies are promising this year. Yeah. And I'm sure it's something, you know, a company like Apple, which has been relatively slow, and that is kind of their MO, you know, they haven't been at the forefront of LLMs, but you can anticipate that

they have people that are working on integrating all of their applications and allowing those kinds of flows to start to happen, to be that middle layer connecting applications, to allow us to have, okay, the emails are drafted, just press send when you've read them, and your to-do list is in your reminders app on your desktop and on your iPhone and so on.

Well, I think there's something that I wrote in my newsletter, which was about the way in which this ecosystem might expand. And it was about the fact that there would be lots of AI agents supporting us. So the idea of an AI agent is that it's an AI system that's more than one shot. It persists state and it can potentially do something useful at the end of it. And I use a few of these. I imagine we're going to have.

hundreds, if not thousands, of agents that are circling around us, working on our behalf, in the same way that we have hundreds of apps on our phone, and within each app, lots of functions. But then I think we will need a supervisor agent, like a chief of staff, that we can just blurt at, because one of the things that I have always been terrible at, because I'm super impatient, is switching from

you know, app to app to app. So in the end, I still just keep everything in a notebook, right? I know there's Notion and I know there's Trello and I know there's Jira and I know there's all these things. And I've designed my life to never have to actually be disciplined to use those because I find it frustrating. And I think that the thing that I would hope will end up happening is that we have our own personal supervisory AI system where we can be a bit unstructured. And then it figures out which agents,

to send the task to, because I don't want to switch context to say, now I'm doing a to-do list and previously I was brainstorming. I just want to move seamlessly. And I'm not sure that Apple will do that because it's never really been their MO. And so I'm expecting that somebody else might try and do that. Yeah, yeah, you could be right. And of course, Google with their Android phone and Chrome browsers.

they could be well positioned to be, yeah, maybe iPhone users in a few years will say, oh man, I'm going to have to get into the Android ecosystem because everything is just so well interconnected. And I have my AI agent chief of staff that can just quarterback, to use the American term, all the aspects. Are you using anything that you might call an AI agent? Yes. So the main thing,

that I've been using for that is you.com, Y-O-U. Yeah, Richard Socher's company, yeah. Richard Socher's company, exactly. And we had Richard's co-founder and CTO, Brian McCann, on the show a couple of months ago, episode 835. Wow, you've got a good memory. I've got the spreadsheet open and it happens to be just recent enough that I didn't even have to scroll or search within it.

And so, yeah, so you.com has some pretty cool functionality for allowing you to spin off research tasks. So it has a research mode that could run for 10, 15 minutes for a typical task. And it does a great job. So it sounds like you use Claude.

Claude 3.5 Sonnet is my preferred LLM at the time of recording. And I use it as kind of my default go-to. You know, initially, when ChatGPT first came out, that was kind of my go-to. And then Anthropic, there's just something, not only is the user experience to me kind of friendlier and warmer, but the outputs are just so often, I provide so little context and I'm blown away by how, with just this,

this ugly pair of words that I throw in, it somehow knew exactly what I wanted and gives me the response. So it reduces a lot of effort for me in terms of prompting to go back there. But yeah, the disadvantage of a model like Claude 3.5 Sonnet is you're dependent on the model weights that are trained. You can't do real-time lookups of things. And so with you.com, that's been where I've been going

to be able to kick off real-time research tasks where it pulls in information from lots of different resources in real time. It has a central agent that's figuring out how to break down the task into lots of small subtasks. And then it spins up lots of sub-agents to go off and do the individual pieces of research. Yeah, so that's super interesting. That, in the spirit of how fast this world is changing at this exponential rate, you.com did not offer that when I...

last checked it a few months ago. And of course, now it does. Yeah, no, that's super fascinating. I'll go and give it a look. Yeah. Yeah, yeah. It's brand new. So what other tools? You mentioned that you could tell us what other tools you use other than Claude. And I'd love to know what other kinds of day-to-day tools are you using? I mean, I use pretty much the Google Gemini, the research capability in Gemini Advanced. I use NotebookLM for

certain classes of wide-scale research when I've got lots of academic papers that I need to go through. I use a tool called Fyxer, which is F-Y-X-E-R. I'm an investor in this company. And so that looks at my Gmail and it does its best to pre-can my responses so that when I go through my Gmail, it's often given me...

Two responses. Yes, I can attend your party. And no, I'd love to. I'm really bummed that I can't, but I can't. And then I just choose the one I want. And that saves me quite a bit of time. But recently, I've been playing around with workflows that involve a few agents. So in this case, I might want to have some support for an idea that I'm thinking about. So say, for example,

I'm thinking about parallels between the growth of capacity in compute and data storage with the growth of capacity in the railway industry in the 19th century, right, where there was overcapacity. So in the old days, I might have gone just to Claude. And the old days, I mean, November 2024, I would have gone to Claude and I would have said, Claude, look at this from the perspective of a historian of technology.

And then I'd say, now, Claude, look at it from the perspective of an investor and give me a view. And I would manually go through this. So now I have this workflow where I can define three experts. And one might be a historian of technology, one might be an investor, and the third might just be a cynic. And then I will put the question to this network of agents,

which are all sitting on top of different large language models. And a fourth agent will be an orchestrator and will ask them to argue between themselves until they've got to a point where they either fundamentally agree or disagree. And then it gives me my final result. So then I run that process and it takes about five or six minutes. Like you said, you.com will take five to 10 minutes. And it does cost something because I'm burning through tokens. So it'll cost me 20 cents.

After five or six minutes, about half the time, I've got a really, really good critical overview of my issue, which I can then go off and do more research on. Half the time it fails. It just gives you complete pabulum, rubbish. And the way I look at that is, you know, when I've worked with teams of humans and you give them these open-ended research tasks, about half the time they don't give you anything good.

That's the nature of it, right? And about half the time they do. So my team uses these tools. I use these tools. But that's one of my favorites. It's one of the key properly agentic workflows that I now use. And we designed and we built it in one of these sort of agent workflow platforms that are sprouting up everywhere. And so what API are you calling in there? You mentioned burning through tokens.

Yeah. So you can choose any one that you want. I mix and match. I tend to have one o1 in there, which is the OpenAI reasoning model that's meant to be much, much more structured. I always have at least two Claude 3.5s because they're so good. But I think this gets to my point, which I spoke about earlier, that

When you're learning, you want to get closer to the metal rather than be abstracted too much by the product layer. So what you want to be able to do is configure the models a little bit more, like play with their temperature score. So temperature is this, how strait-laced or wild is the model going to be? So temperature 0.1 is like your...

pastor at church, and temperature 1.5 is like Van Wilder, party liaison. And, you know, you want a mix of those two. But you want your evaluation model to be quite sensible but still creative, so a temperature of one. So you can only access those parts of the API if you

are talking to the API rather than having it sort of interfaced by some third-party tools. So increasingly, we try to get closer to that so we can learn through experimentation what works for each different context. AI is transforming how we do business. However, we need AI solutions that are not only ambitious, but practical and adaptable too. That's where Domo's AI and data products platform comes in.

With Domo, you and your team can channel AI and data into innovative uses that deliver measurable impact. While many companies focus on narrow applications or single model solutions, Domo's all-in-one platform is more robust with trustworthy AI results, secure AI agents that connect, prepare, and automate your workflows, helping you and your team gain insights, receive alerts, and act with ease through guided apps tailored to your role. And the platform provides flexibility to choose which AI models to use.

Domo goes beyond productivity. It transforms your processes, helps you make smarter, faster decisions, and drive real growth. The world's best companies rely on Domo to make smarter decisions. See how you can unlock your data's full potential with Domo. To learn more, head to ai.domo.com. That's ai.domo.com.
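For readers who want to see what the temperature dial Azeem describes actually does, here is a toy softmax over made-up next-token scores. It illustrates the mechanism only; it is not any particular provider's API.

```python
import math

def softmax_with_temperature(logits, temperature):
    """Turn raw next-token scores into probabilities, scaled by temperature."""
    # Dividing by a small temperature sharpens the distribution
    # (predictable output); a large temperature flattens it (wilder output).
    scaled = [score / temperature for score in logits]
    peak = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]  # made-up scores for three candidate tokens

strait_laced = softmax_with_temperature(logits, 0.1)  # the "pastor at church"
wild = softmax_with_temperature(logits, 1.5)          # the "party liaison"

print(round(strait_laced[0], 3))  # 1.0: almost always picks the top token
print(round(wild[0], 3))          # 0.532: other tokens get real probability
```

At temperature 0.1 the top token takes essentially all the probability mass; at 1.5 the alternatives get sampled often, which is the "strait-laced versus wild" trade-off in one function.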

Nice. Yeah, that makes a lot of sense. And I don't think, did you happen to mention the tool that you're using for orchestrating this? I don't know if you mentioned that. Oh, so I use one of two different tools. I use one called Wordware, which I'm an investor in, and another called Lindy, lindy.ai. So they're both, they're sort of similar but different. I mean, again, these are all products that are finding their product market fit. I mean, Wordware, of course, I recommend people try first. Disclosure, I'm an investor.
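As a rough sketch of the debate workflow Azeem describes, three persona agents plus an orchestrator that loops until their positions stabilize, here is a minimal stub. The `Agent` class and its canned replies are hypothetical stand-ins; in the real workflow each expert would sit on top of a different large language model via a platform like Wordware or lindy.ai.

```python
class Agent:
    def __init__(self, persona, respond_fn):
        self.persona = persona
        self.respond = respond_fn  # function (question, transcript) -> reply

def orchestrate(question, agents, max_rounds=4):
    """Ask each expert in turn, looping until their positions stop
    changing (they fundamentally agree or disagree), then return the log."""
    transcript = [("user", question)]
    previous = None
    for _ in range(max_rounds):
        positions = {}
        for agent in agents:
            reply = agent.respond(question, transcript)
            transcript.append((agent.persona, reply))
            positions[agent.persona] = reply
        if positions == previous:  # converged: no one changed their view
            break
        previous = positions
    return transcript

# Stub experts with fixed views, just to exercise the control flow.
historian = Agent("historian", lambda q, t: "Compute build-out echoes railway overcapacity.")
investor = Agent("investor", lambda q, t: "Overcapacity crushed railway returns; watch margins.")
cynic = Agent("cynic", lambda q, t: "Every era insists its bubble is different.")

log = orchestrate("Compare compute build-out to 19th-century railways.",
                  [historian, investor, cynic])
print(len(log))  # 7: the question, then two identical rounds of three replies
```

With real LLM calls, the convergence check would compare summarized positions rather than exact strings, but the control flow is the same: fan out, compare, repeat, then summarize.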

Nice, yeah. And yeah, lindy.ai is L-I-N-D-Y. Yeah, that's right. Awesome. All right. So that was a great example of one of the kind of unstructured asides that can be very helpful that we can go off on. No, I love it. But to kind of quickly wrap up our conversation around this exponentiality and kind of projecting forward decades into the future, I have...

Two specific topic areas that I'd love your thoughts on. This first one can probably be quite short. So at the time of recording, I've just released today, so it's episode 851, an episode on quantum machine learning. And in chapter one of your book, you talk about as we near the physical limits of Moore's law, quantum computing could play a bigger role. I'd love your thoughts quickly.

It seems like you don't see quantum computing as just kind of this tack-on, cute thing that's helpful in a relatively small number of scenarios, but potentially something more transformative. You know, I think that quantum is quite hard to get a grip on. So when I wrote the book, Google had just announced that breakthrough with their Sycamore

quantum chip, which I think was called Sycamore, where they had done that sort of toy test, lab test, where you generate random numbers, and it would take a normal computer quadrillions of years, like much longer than the life of the universe, and it takes a quantum computer a minute. And then a few months ago, they announced the same breakthrough again with a big fanfare. But, you know, the people sitting around them had, you know, more certainty. I think the

The thing that we have to hold in our heads, John, is that the people who are building quantum computers have come to this having been scientists. And scientists always see the world as it could be rather than as it has been. And so their framing of timeframes, I think, is different to you or I or most of our listeners, who probably have products to deliver. So that's a roundabout way of my saying,

I don't know when quantum computing will show up and be genuinely, genuinely useful. If you have these quantum computers with the tens of thousands of coherent logical qubits, or the million coherent logical qubits, that you need to do real quantum computing, I think

really interesting things and amazing things will happen. But there was a while, and I'm sure you were familiar with this period of time, where for a couple of years, people were saying, look, we're getting so many insights from quantum that we can start to simulate quantum on GPUs. And that is allowing us, giving us kind of algorithmic tools to do things we couldn't previously do, and we would never have discovered that.

without quantum. I mean, that might be the case. But the other problem that you've got is that the world is being eaten up by the transformer architecture, whether it's protein discovery, materials discovery, robotics. Like, this is the way to do everything. And so it might just be that the intersection between quantum and machine learning is not as big as we thought it was, and that we have to just wait for quantum computers to be

developed and delivered at that scale where it really makes a difference. And in the meantime, we can just do really, really well applying this transformer architecture and, you know, all the amazing chips that Jensen Huang and Nvidia are producing. And that will kind of knock down the doors. So it's a really tough one. If I had to...

If I really, really kind of summarize that in a sentence, it's continue to build quantum computers would be my view. But there's so much you can do with this AI wave that maybe there's a good reason to spend time there. Yeah, very nice. Great answer. And yeah, if people want to dig into...

over an hour of discussion on quantum machine learning, what's possible today, what could be possible in the future. Episode 851 is a great one to refer back to. My final kind of exponential view question for you before we get into some more data science-specific stuff is I'm optimistic that one of the biggest challenges of our time, climate change, that AI can play a role in the transition away from

carbon-based energy toward renewable energy. And this could include fusion energy. We have commercial labs now with private investors that are expecting a return on not-crazy-long time horizons. And there's a dozen different labs trying these commercial fusion approaches. But even without fusion power, there's solar panel efficiency and the crazy exponential growth that solar panel installation has had

in recent decades. And if that kind of trend continues, it's, I am hopeful, I'm optimistic, but maybe it's just because I'm an optimistic person that we will be able to tackle some of the worst effects of climate change. And in our lifetime, we may even be able to start reversing them by, you know, if you have abundant energy through fusion energy, for example, you can be pumping carbon back into the ground and storing it and reversing some of the effects that we've had. So yeah, what are the...

Kind of broadly speaking, I'd love to hear what you think the potential risks and benefits of transitioning from an economy dominated by oil to one driven by AI and renewables are. Well, yes, I will say to your last statement, I mean, I really agree with that. Let's kind of distill that reasonably quickly. So we know that AI requires...

energy build-out. But in reality, I've written about this a couple of times in the New York Times and in the Financial Times in the UK as well, and elsewhere. I actually think that the build-out demands will help

to catalyse a greater discipline in how we build the energy system. So then the question is, what's really happening with the energy system? And what is happening is that energy is going from being a commodity where it's all about the oil price or the gas price. And by the way, over 200 years since we've had coal and 100 years or so that we've had oil and since we've had natural gas, the cost of energy from those three systems has not got cheaper. It's got, in some cases, more expensive because it's dependent on

the local dictator or autocrat and kind of physical extraction. Whereas the cost of a solar panel has dropped by, I mean, I forget the exact numbers, but 99%, I would be safe saying, over the last 20 years. And that's why you see an exponential growth in the amount of solar that's being installed worldwide. And that pattern, also, by the way, of exponential cost declines and growth, happens with

batteries, because it's obvious to most people, the sun doesn't always shine and the wind doesn't always blow. But that's actually a completely solvable design problem. And the way to understand what's going on is energy becomes a technology. Technologies are things that get cheaper. They get cheaper through learning rates, but also through modularization and miniaturization.
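The "learning rate" idea can be made concrete with Wright's law, where unit cost falls by a fixed percentage for every doubling of cumulative production. The 20% rate below is my illustrative assumption, a commonly cited ballpark for solar PV, not a figure from the episode.

```python
import math

def wrights_law_cost(initial_cost, learning_rate, doublings):
    """Wright's law: each doubling of cumulative production multiplies
    unit cost by (1 - learning_rate)."""
    return initial_cost * (1 - learning_rate) ** doublings

# Ten doublings at a 20% learning rate take a $100 unit to ~$10.74.
print(round(wrights_law_cost(100.0, 0.20, 10), 2))  # 10.74

# Doublings needed for a ~99% cost decline at that rate:
doublings_for_99 = math.log(0.01) / math.log(1 - 0.20)
print(round(doublings_for_99, 1))  # 20.6
```

This is why exponential deployment growth and exponential cost decline travel together: each is fuel for the other, and commodity fuels like oil have no equivalent mechanism.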

And I think the best analogy that listeners will understand, especially those who are over 47, I guess, is the shift from the telecoms network to the internet. So the telecoms network was like the old fossil system: incredibly reliable, controlled by a few companies because it was really expensive to get into it. You needed like a billion dollars, which is what you need to build a coal or a gas plant.

The Internet comes along with these technologies, fiber optics, fiber optic switches, optical networking, chips, RAM, routers. Prices decline. And what you see is a dramatic decentralization. Lots of Internet service providers show up. Anybody can now run a call waiting or a voicemail service hooked onto the end of an IP system. And the Internet today is much more reliable. It's much cheaper and it is better than the phone network ever was.

And I think that that same parallel will happen with our transition, fundamentally, to solar plus batteries. But wind, traditional nuclear, and maybe one day fusion will also play a role. And it will be a better, cheaper, more dynamic and adaptable energy system, where energy costs will be much, much lower, and we'll have much more energy. And to your point about reversing some of the worst sort of impacts of climate change,

Because most of these things that we say are expensive, what we actually mean is they take a lot of energy and energy is expensive. Well, they'll still take a lot of energy, but energy will be super cheap. So they will now be cheap. And that would be carbon capture and desalination and sort of other types of things that we could do. But, you know, we're talking about different timeframes. I think that latter part is...

is decades. I think the former part, which is the fundamental transformation of the energy system, is probably measured in a couple of decades, you know, rather than, you know, multiple fives or sixes. And that's where we are. And I think that is a reason to be somewhat optimistic. Great. It's nice to have.

you know, for my selection bias to allow me to get, you know, more optimism. I can give the reverse argument, and I'll just give the reverse argument for a second, right? The reverse argument is very hard to make because it's not grounded in empiricism. So the reverse argument is, well, there's too much material required for solar panels and there's too much, you know, required for lithium. The truth is,

There's a lot of science, and Nature magazine has had peer-reviewed papers on this, that show the materiality of solar panels is much lower than the fossil system's. The truth is there's tons of lithium. We just haven't got around to extracting it because we didn't need to. And now we need to. And the declining cost of batteries, and the fact that we're only just starting to invest in battery research, means prices

will come down dramatically. And there was a paper by Oxford University researchers about two years ago, which said that a fully renewable energy system would be cheaper to run than a fossil one. And the faster we went towards a fully renewable one, the quicker we would start to save money. And the number they had was, I don't know, some trillions of dollars, more than you and I were planning to make in the next three years.

Oh, man. Sorry to disappoint you. We're going to have to, well, it's not too far. If we're able to get that exponential growth to a billion dollars of revenue in three years, I mean, how far are we off? Maybe six or seven. That's fine. I'll see you on Mars. I love how in your most recent answer.

You started off by saying, I can make the counterpoint, but then you very quickly started eating into the tenets that were the foundations of that argument. Well, because the foundations are so shaky. That's the problem, right? They're just not grounded in empiricism. What they're grounded in is, I'll share a couple of things. Being a guy who's been around for a long time, I remember the CEO of one of the biggest mobile phone companies in the UK telling me he would never.

allow their customers to pay their bills over the internet. This is back in '99. It was true, he was fired the next year. But a lot of these challenges are based on, like, holy cows, right? Sacred tenets that you can't challenge. And if you go back to the first-principles thinking that you brought up earlier in this discussion, you will realize that that's what you're contending with.

Very nice. So moving on to another topic here. So switching gears a bit, we've kind of, we've done our discussion of looking far into the future, you know, leveraging your exponential expertise. I'd like to now dig into some topics that are maybe more specific to the kinds of listeners that we tend to have on the show, you know, technical listeners, data scientists, software engineers.

In your Substack newsletter, the Exponential View newsletter with 105,000 subscribers, you have a recent article called Why Humanity Needs AI. And it advocates for leveraging AI to address complex problems that require knowledge beyond human constraints. So what are these constraints that humans have on knowledge and how can AI overcome them? It's a great question. That was a fantastic essay. Thank you for drawing attention to it.

AI is a tool for us. And if we think about any work that we do, our tools help us, right? And if you're a data scientist, you have probably used sed or awk, right? I mean, I think that's fair to say.

It's enabled you to do things you otherwise couldn't have done, right? You've got 15 million lines of data and you've got to tidy it up. So you go off and you use this tool. So we've always used tools to improve our capabilities. But as humans, we've also always solved more and more complex problems. And we've done that since we emerged from the African continent 100,000 years ago.

But the challenge that we now face is that, and this was the point of the argument, it was a hundred year perspective, is that we know that the human population will peak in about 60 or 70 years. So when it peaks in 60 or 70 years, even if we are much more well skilled and have better tools, the number of people who could actually go and get involved in problem solving is going to decline and it will continue to decline. And for the first time in the history of our species.

We won't be creating more knowledge, more science, more problem solving the next year than we did in the previous year. And so the only way to get around that and to kind of continue that attribute of what it has been to be humans for a couple of million years is to have tools that can magnify our capability significantly. And that's where that's where AI comes in. AI becomes absolutely critical for continuing the.

to develop the knowledge production of humanity. Now, let's boil that back down to something more practical for a data scientist today. If you are looking at a stream of transaction data, there's no way that you personally could identify anomalies.

You probably couldn't identify anomalies with the tools that I had in the early 90s, which was basically grep and regexes. You need something more sophisticated. Today we call it machine learning, but 10 years ago we may have called it AI. And so we're already at a point where we can see patterns, we can see relationships that exist in dimensionalities that are not obvious up front,

because we can bring these tools to bear. And I do think there's something really essential about how they relate to how we make sense of a more complex world, which we are ourselves also building. Nice, yeah. Great answer there. And something that this makes me excited about, and I've talked about it on the air before.
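Azeem's transaction-anomaly example can be sketched in a few lines: grep matches patterns you already know, whereas even a simple statistical model can flag values that are merely unusual. The amounts below are made up for illustration.

```python
import statistics

def zscore_anomalies(amounts, threshold=3.0):
    """Flag values more than `threshold` standard deviations from the mean."""
    mean = statistics.mean(amounts)
    stdev = statistics.pstdev(amounts)  # population standard deviation
    return [a for a in amounts if abs(a - mean) / stdev > threshold]

# One transaction is wildly out of line with the rest.
transactions = [12.0, 9.5, 11.2, 10.8, 9.9, 10.4, 250.0]
print(zscore_anomalies(transactions, threshold=2.0))  # [250.0]
```

Real fraud detection uses far richer models over many dimensions, but the principle is the same: the interesting patterns are statistical, not string-matchable.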

In fact, I did an episode on this idea of an AI scientist back in episode 812. And so it was a Japanese company founded by a lot of Google DeepMind people, if I remember correctly, Sakana. And so they had this AI scientist paper where it was just machine learning specific because that was kind of a neat area for them to develop this AI scientist that is proposing research ideas and executing them because with an AI scientist,

doing ML research, you can run experiments in silico. You can run them on computers. You can provide a budget for these experiments. But the same kind of thinking could be applied to biology and chemistry, material science, where I know that there are teams working on this. I don't know how well advanced it is, but where you have AI systems controlling pipettes and ingredients and actually running experiments that could be allowing us to have...

biological, chemical, materials breakthroughs, whatever scientific field. And so this is a really, this is a really exciting idea because these AI systems, and again, actually, so the you.com co-founder, Brian McCann, back in his episode, 835, he talked at length about how this will transform science. And it is really exciting. It's also, there's something

That does start to, as we start to have AI systems be creating knowledge and coming up with ideas, which as Brian McCann pointed out, it's going to be, it seems very likely that these AI systems will have insights that we could never dream of having because these AI systems will have, they'll have well-trained neural network weights across all human knowledge, across all academic papers, all textbooks, which is something a human could never.

endeavor to do more than a small, small fraction of. So yeah, so it's really exciting. But simultaneously, it's also this kind of, that will feel like a different era because, you know, that's, yeah, it ties into artificial general intelligence kinds of concepts and, you know, where does that leave humans? And even if our knowledge, yeah, yeah. But if we come back to this question of the AI scientist,

And these systems being, you know, on the one hand, they can help us as tools. And where we are today is they help us as tools. The Sakana paper I thought was particularly interesting. It, as you said, went through essentially the traditional way that a scientist will do their work. It'll generate.

some research ideas that might be novel. It might conduct some literature searches. It was able to write and execute the code. It could run experiments. And then ultimately, it could write a research paper. And then you say, well, let's do this in chemistry. And this is lab automation, wet lab automation that people are working on. And there's an AI model called ChemCrow, C-H-E-M-C-R-O-W, that did a sort of Sakana light version in research around chemistry a while back.

But, you know, I think it's also the case that for a lot of scientific breakthroughs, most humans don't understand them. So when Rosalind Franklin was looking at those X-ray crystallography shots of what then became DNA, she's the one who spotted that this has to be a double-helix pattern that's being cast. Effectively, of the

3 billion of us at the time, or 2 billion people at the time, most of us would not have been able to see that. When mathematicians, when Terence Tao comes up with a new conjecture in maths, it takes people a decade to unpick it and make sense of it. And only five mathematicians can figure out what's going on. And that's true for all mathematical research. So I think this idea that we can't understand it is also something that's already true.

to some extent. I think what becomes quite interesting is when new methods of research start to emerge. And I don't think we've seen what AI systems could do in terms of new ways of actually conducting science that aren't just a faster version of a human conducting.

science, right? And I think that that's quite an interesting question to ask. And, I mean, what I would say is that, also, the other thing that they can do right away is they can bridge the silos of science. So one thing that's happened in science over the last 50 years, and it's really connected to the funding train that is required, right? In order to get funding, you have to

tell a grant agency that you're going to do X or Y. So as the PI, the principal investigator of a lab, you tend to narrow your focus. People are not as freewheeling. PhD students just kind of clunk through version 3.6 or 3.7 or 3.8 of their PI's thesis. And things get very narrow. And there are very few polymaths who sit across domains. Actually, scientists today can be polymaths, because they can go to an LLM. They could go to a sort of science-specific

tool like Elicit or Consensus, and they could say, find analogous concepts across all of these areas to help me make sense of the world, to see if there's insights elsewhere that allow me to form a hypothesis. And I think that that is also really, really powerful. In this kind of enormous toolkit that we're being given, there's another tool that changes the way science gets done.

What a fabulous answer. I love that. And you provided a number of specific tools there that I hadn't heard of before, but that listeners can dig into after the episode. That's fantastic. Thank you, Azeem. As my last kind of topic area to discuss,

And it follows on nicely from this idea of, you know, the AI scientist and replacing, you know, kind of what it means to be human. I loved the example you made there, kind of like Rosalind Franklin or, you know, a math guru being able to see things that, you know, only a handful of people on the planet out of billions would be able to see anyway. And so that is, there is something actually kind of reassuring in that, that, okay, you know, this is just another set of brains that will be doing things that we can't understand. That is cool.

It might make people feel uncomfortable if, you know, those brains can be much cheaper and much more effective than maybe what we're used to doing day to day. And we probably, you know, people listening to this podcast, they are probably amongst, you know, they're probably kind of around the 99th percentile of people who are adapting to new shifts on the planet. You know, we're trying to stay abreast of all these technological changes.

How can I be using LLMs like you just described as a scientist to be coming up with new ideas? And of course, you mentioned already earlier in the episode, tools like Claude or GitHub Copilot that we can be using to augment ourselves as software composers. So those kinds of tools are out there, but it doesn't seem like we're too far off from a lot of what software engineers, data scientists, AI engineers do.

being replaced completely by machines. And part of what makes us so susceptible in our career is that a lot of what we do could be done remotely. And in fact, since the pandemic, a lot of people have at least some days of the week. If you work in a situation where keyboard and microphone inputs and outputs are all you need to be doing your job, that is an easier

kind of role to disintermediate with an AI system than if you are washing bed sores off of comatose patients in a hospital. You know, that's something you can't do remotely. So yeah, so you have been an investor in more than 50 tech startups since 1999. You no doubt have interesting insights into where technology is heading, but also where work is heading.

With the automation of many jobs on the horizon due to technological advancements, what are your thoughts on the future of work? And maybe where is the solid ground that our listeners can find in terms of skills they could be developing or career shifts they can be making to hopefully continue to have a job in the decades to come? I don't think there's any solid ground. And I also...

understand why people feel apprehensive. You should feel apprehensive. If you haven't had your "holy shit" moment with LLMs, where you sit and spend two or three days throwing the hardest problems you can at them and get really impressed with what they can do, you need to go off and do that, go through your mini existential crisis, and come out the other side. We already live in a world where jobs change really frequently. 53% of

Gen Zs want to be influencers. When I launched that company that had hired a data scientist back in 2010, we were the first influencer ranking platform in the world, on just Twitter at the time. And, you know, now 53% of people want to be influencers. It's only been 14 years. An economist called David Autor looked at about 80 years of US employment data, and he pointed out that 60% of today's jobs didn't exist in 1940, and they require expertise that didn't exist at all back then. You know, in my first job, I worked at the Guardian newspaper, but I was also the Guardian's first load balancer. A lot of you will use something called ELB, Elastic Load Balancing, on Amazon. In 1996, the Guardian used my right hand. We had three servers

and two ethernet connections. And as one would hit its memory leak and crash out, I would pull the ethernet cable out and plug it into the one that had just rebooted. And then 10 minutes later, I would do it for the next one in the round robin, during the peak times of the day. So all sorts of new jobs get created. And what we can do is look at what's happened over the last 30 or 40 years. There's been a huge amount of middle-class job growth, right? Jobs constructed around the types of things you were talking about: we work remotely, we work behind a desk. And at the same time, a lot of lower-paying service-sector jobs have been created. And that creates a lot of tension. So we should bear that in mind. I think the second thing we need to bear in mind is that people don't really know where this ends. It's very hard to predict the third, fourth, fifth bounce of the ball. You can look back at history and look at people's emotional responses.
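As an aside for readers of the transcript: the manual failover Azeem describes, pointing traffic at whichever server is up and rotating to the next one when it crashes, is essentially round-robin load balancing with a health check. A minimal sketch in Python; the `Server` class and server names are hypothetical, purely for illustration:

```python
from itertools import cycle

class Server:
    """Hypothetical stand-in for one of the Guardian's three web servers."""
    def __init__(self, name):
        self.name = name
        self.healthy = True

def round_robin(servers):
    """Yield healthy servers in rotation, skipping any that have crashed.

    Note: if every server were unhealthy, this loop would spin forever;
    a real load balancer would time out or raise instead.
    """
    for server in cycle(servers):
        if server.healthy:
            yield server

servers = [Server("web1"), Server("web2"), Server("web3")]
lb = round_robin(servers)

assert next(lb).name == "web1"
servers[1].healthy = False      # web2 hits its memory leak and crashes out
assert next(lb).name == "web3"  # the rotation skips the crashed server
servers[1].healthy = True       # web2 has rebooted and rejoins the pool
assert next(lb).name == "web1"
```

A managed service like AWS Elastic Load Balancing automates exactly this loop: health checks replace the human watching for crashes, and rerouting replaces the hand moving the ethernet cable.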

And that may give you some insight. My sense right now is that the way my work has changed and the way that my team's work has changed is that everybody is more boss-like. Everybody has to set goals and objectives to teams that are made up of one or more AI systems. And you have to be better at having those boss-like skills. My domain knowledge and my experience and judgment gives me some advantages in that.

In a sense, look, there's no solid ground. We really don't know how fast this technology will develop. We really don't know what will exactly happen to the job market. We can expect a large number, maybe the vast majority of jobs to change. We can expect new jobs to be created. We can expect some jobs to go away. And the way to prepare for that is ultimately to skill up.

And by skilling up, you actually put yourself in a position where you can somewhat define your future. And I say that in the sense of being really, really frank and honest about where we are. I mean, people don't know. There's increasingly good research coming out, but it will still be limited. And maybe in 20 years we will know. But even then, perhaps we'll still be asking the question.

Very nice. Well, as we start to reach the end of this episode, I really appreciate your thoughts on these things, even if there isn't that magic solid ground I was kind of hoping you'd be able to tell me and our listeners about. So if people want to hear your thoughts after this podcast episode, obviously they have your newsletter, Exponential View. We'll have a link to that in the show notes.

Your book, Exponential Age, which we talked about; we covered a number of topics from it in today's episode. You also have the Exponential View podcast, which podcast listeners might enjoy. Kai-Fu Lee, one of the biggest AI authors out there, was on the show just last week at the time of recording. And you've also had Andrew Ng, Fei-Fei Li, a lot of amazing guests on that show. So that's a podcast for people to check out.

I pointed out last week on social media that you would be coming on the show, and I highlighted those kinds of biographical points about you. And I asked if people had any questions for you. We had a great question come in from Elizabeth Wadsworth, who is an IT applications manager in Ohio. And Elizabeth asks: in the future, do you think the same systems and theories will apply?

So some of the futurist work that she studied suggests systems and theories are a good structure to impart when the future of tech is unknown due to rapid advancement. Do you think that we're in the middle of such a rapid transformation that these systems and structures will no longer suffice? And I think that's a really interesting question. It also relates to a point that you and I discussed before recording today's episode, which is that...

The kinds of structures that existed, corporate structures in the workplace, companies are still, for the most part, adhering to those structures. But in this world where we now have LLMs and individual contributors can take on those boss-like roles, that fundamentally changes things. So I think it relates here. And so I'll leave it to you. Yeah. I mean, it's a great question from Elizabeth. Thank you for asking such a good question.

In general, history has shown they don't stay the same; they change dramatically. If you go back to factories before electricity, mechanical power was provided by steam engines. They blew up very regularly, and they couldn't distribute power very far, because you did it through a set of friction pulleys and belts, and the friction would give off heat and energy, so you couldn't get as much work done. So factories were small, they were vertical, and you were never too far away from the central drive shaft. That was the design principle, and the process and the kind of work you did, and the kind of work you asked people to do, was determined by that, and by the fact that your workers couldn't miniaturize the power down to a small hand tool, right? It all had to be big

clunky things. I'm not a mechanical engineer, so I don't know what they're called, but just assume it's something that's really big that just goes thud, thud, thud a lot. So when you turn up with electricity, the first thing that people do is they stick a light bulb in the room. And then the second thing they do is they try to get the electricity system to replace the steam engine, but it just won't have the physical power at that point. And of course,

Factories don't look like that anymore because you can essentially efficiently distribute electricity across a wide horizontal area. You can partition it so you can put massive amounts into an arc furnace that's being used to produce steel and really small amounts into an LED light that's over a desk station where I'm sitting there with a tiny little...

electric brush cleaning a small component. And you can do all of those things. And so the processes by which you organize the work, the skills people need, the way you maintain things, the way you measure things, change fundamentally. I would say with AI, back to what are extreme assumptions and what are not.

It's more of an extreme assumption to say nothing will change when we implement AI than it is to say loads of things will change. And let's just go back and look at what the electricity-versus-steam change gave us 120 years ago. And I think that comes back to the point that you and I have made, Jon, throughout this discussion, which has been: we've just got to get using these things, because that's how you

put yourself in a position to co-design them and to sort of make them work in a way that you think is a sort of good way of doing it. Nice. Great answer there. Yeah. And so thank you for answering that listener question. I am conscious that we are out of scheduled time here. But really quickly, in addition to your Exponential View newsletter, in addition to the Exponential Age book, the Exponential View podcast, and then there's also for people...

Actually, you can watch this online anywhere in the world. There's a Bloomberg TV show that you hosted called Exponentially, and that included guests like... How do you pronounce it? Sam Altman? Altman, yeah, that's right. He does some AI stuff, I think. So yeah, Sam Altman, Niall Ferguson, and Dario Amodei are amongst the guests that you had on that show. So that's another piece of content that people can check out from you.

how should people be following you? Or is there anything that I've missed? Oh, you've been so generous anyway, giving me this fantastic conversation. I think the best way is if you just go to Exponential View, put it into Google or Bing or whatever search engine you use, and sign up to the newsletter; that's the most straightforward. Social does not work as it once did, but the newsletter is the best place, and I'd be delighted to see you all there.

Fantastic. Yeah. And then, so you have a vast library behind you that we can see. If you're watching the video version of this, you'll be able to see this. Before I let my guests go, I typically ask them if they have a book recommendation for us other than their own books. I imagine in your case, it would be tricky to get it down to just one. Do you know what? I have got a lot of books that I really, really like there. And I'm going to recommend one that...

It's really long. It's about 800 pages long. And it's called The Prize by Daniel Yergin, Y-E-R-G-I-N. And this is the story of the oil boom from the 1860s on, as it started in Pennsylvania, and in Baku in Azerbaijan, and over in Indonesia. And what I found really interesting when I reread that book last summer was the

politics, the machinations, the attempt to influence policy, the backstabbing, the way capital lent itself to the market reminded me so much of what's going on with the big tech companies around AI and the personalities involved. And I thought, wow, this is so insightful. We've kind of run this tape once before. And certainly the first four or five chapters are really interesting.

as a way of kind of throwing a light on where we are today. It's actually incredibly well written, and it's worth persisting all the way through the book as well. So Daniel Yergin, The Prize. Fantastic. Thank you so much, Azeem. Thank you for being so generous with your time. It's been, as I said at the outset of this episode, unreal for me personally to have you on the show. I'm sure our listeners enjoyed this episode as well. Thank you so much for taking the time. It's really been my pleasure. Thank you.

Boom, such a treat to have Azeem Azhar on the show. In today's episode, Azeem covered how the world is experiencing exponential technological growth with computing power, that's flops, increasing over 60% annually for over 50 years. He talked about how AI will be essential for continued knowledge production and problem solving as global population peaks and begins to decline in 60, 70 years. He filled us in on how the energy sector is transitioning from a commodity-based system to a technology-based system.

with solar and battery costs dropping exponentially, like computer chip costs. He talked about modern AI workflows and how these often involve multiple specialized agents working together, orchestrated by a supervisor agent. Azeem himself uses tools like Claude, Gemini, NotebookLM, Wordware, and Lindy.AI to automate his life and make things easier in his own workflows. He also talked about how the future job market will see massive changes, with most roles evolving

or being replaced in the coming decades. Success then requires developing boss-like skills to direct AI systems. As always, you can get all the show notes, including the transcript for this episode, the video recording, any materials mentioned on the show, the URLs for Azeem's social media profiles, as well as my own at superdatascience.com slash 855. And if you'd like to connect in real life as opposed to online, I'll be giving the opening keynote at the RVA Tech Data Plus AI Summit.

in Richmond, Virginia, on March 19th. Tickets are quite reasonably priced, and there are a ton of great speakers, so this could be a great conference to check out, especially if you live anywhere near the Richmond area. It'd be awesome to meet you there. Thanks, of course, to everyone on the Super Data Science Podcast team: our podcast manager, Sonja Brajovic; our media editor, Mario Pombo; our partnerships manager, Natalie Ziajski; our researcher, Serg Masís; our writers, Dr. Zara Karschay and Silvia Ogweng; and, of course, our founder, Kirill Eremenko.

Thanks to all of them for producing another exponential episode for us today. For enabling that super team to create this free podcast for you, we're deeply grateful to our sponsors. You, yes you, can support this show by checking out our sponsors' links, which you can find in the show notes. And if you'd ever like to sponsor an episode of the podcast yourself, you can get the details on how to do that by making your way to jonkrohn.com slash podcast. Otherwise, share the show with people who might like this episode,

review the show on your favorite podcasting app or on YouTube. Subscribe, obviously, if you're not already a subscriber. Feel free to edit our videos into shorts to your heart's content. Just refer to us when you do that. But most importantly, I just hope you'll keep on tuning in. I'm so grateful to have you listening, and I hope I can continue to make episodes you love for years and years to come. Until next time, keep on rocking it out there, and I'm looking forward to enjoying another round of the Super Data Science Podcast with you very soon.

This transcript was generated by Metacast using AI and may contain inaccuracies.