Checking In with Parmy Olson on AI, ChatGPT, and the Race that Will Change the World

Oct 04, 2024 · 42 min · Season 3, Ep. 7

Episode description

Becky Buckman and Keyana Corliss host U.K.-based Bloomberg Opinion columnist Parmy Olson in an eye-opening discussion of her new book, "Supremacy: AI, ChatGPT, and the Race that Will Change the World." Parmy explains the history of the intense rivalry between Sam Altman of OpenAI and Demis Hassabis of DeepMind—two of the earliest AI innovators—as they compete to shape the future of artificial intelligence and our world as we know it. This episode offers a thought-provoking look at the intersection of technology, ethics, and regulation, with Parmy urging listeners to consider both the potential and the pitfalls of AI in our society.

The conversation also explores some of the ethical challenges with AI, including how OpenAI and DeepMind's alignment with tech giants like Microsoft and Google, respectively, has compromised their original nonprofit ideals. Parmy explains the dominance of big tech companies in AI research and development, and the challenges this poses for smaller startups and independent academic research. She also emphasizes the critical need for unbiased data in AI models and advocates for stronger regulations to guide AI's development, highlighting the significant influence of big tech on this rapidly evolving field.

“That's where I see generative AI going is that, maybe it won't displace as many people as we think. It will just create a lot more noise in our information ecosystem. We're just going to have to get better at filtering it out.” —Parmy Olson

Join technology comms pros Becky Buckman and Keyana Corliss as they cut to the heart of today’s tech-news cycle and the general craziness that is high-tech corporate communications right now. With a short, not-too-serious take on the industry - with plenty of humor and irony thrown in - they’ll bring you the best in the biz, across comms and media together, for one-of-a-kind insights and perspectives you won’t hear anywhere else!

About Parmy Olson:

Parmy Olson is a technology columnist with Bloomberg covering artificial intelligence, social media and technology regulation. She’s been writing about the growth of AI systems since around 2016, when she worked in Silicon Valley as a reporter for Forbes and covered the early rise of chatbot technology. She continued covering AI as a tech reporter at the Wall Street Journal, publishing multiple exclusive stories and investigations on surveillance, facial recognition and Google’s AI work, including an investigation into how Google stifled DeepMind’s secret efforts to spin out as a non-profit organization to protect its AI from corporate interests.

Parmy has received two honourable mentions for the SABEW business journalism awards for her reporting on Facebook and WhatsApp, and was the first recipient of the Palo Alto Networks Cybersecurity Canon Award for her book “We Are Anonymous.” She was also named by Business Insider as one of the "Top 100 People in UK Tech" in 2019 and was described as “tech journalism’s deep diver.” Parmy was recently nominated as Digital Journalist of the Year 2023 by PRCA, the world’s largest PR professional body.


Transcript

Keyana Corliss: [00:00:04] Welcome to Just Checking In.

Rebecca Buckman: [00:00:06] I'm Becky Buckman.

Keyana Corliss: [00:00:07] And I'm Keyana Corliss. Each week we'll use humor, a little irony, and definitely some self-deprecation to dive into the world of high tech corporate comms.

Rebecca Buckman: [00:00:16] We'll use our expertise in a less-than-serious take on the tech news cycle to bring you the best in the business across comms and media, for one-of-a-kind insights and perspectives you won't hear anywhere else.

Keyana Corliss: [00:00:27] Get ready to laugh, this is Just Checking In.

Rebecca Buckman: [00:00:34] Welcome to Just Checking In. With us today, in what's going to be a great conversation I hope, is Parmy Olson. Parmy is a London based columnist for Bloomberg Opinion, covering technology, specifically AI, social media and technology regulation. A former reporter for The Wall Street Journal and Forbes, Parmy is the author of 'We Are Anonymous', as well as a brand new book, I think it was just out in early September, about AI, and that's largely going to be the focus of our conversation today. This brand new book is called 'Supremacy: AI, ChatGPT, and the Race that Will Change the World'. It focuses on the intense race between two AI entrepreneurs, Sam Altman of OpenAI and Demis Hassabis of DeepMind, which is based in the UK. So we can't wait to dive in. Welcome, Parmy.

Parmy Olson: [00:01:23] Thank you. It's great to be here with you guys.

Keyana Corliss: [00:01:25] And I would just like to toot our horn here at Just Checking In, because you know I love to do that. I would just like to say we got the book as it came out. So I feel very special. So I just want everyone to know that they mailed me a book before it came out. So that's it.

Parmy Olson: [00:01:41] You guys are on the ball.

Rebecca Buckman: [00:01:42] Cutting edge. We're scooping.

Rebecca Buckman: [00:01:43] Well, listen, I am really, really, genuinely excited for this interview because, Parmy, as I mentioned to you before, I really thought this book was quite astonishing. Before we dive into that specifically though, we did want to start off with a more high level question. We feel like your work, in looking at the stuff you're writing for Bloomberg, your previous book, it often explores the intersection of technology and society. What initially drew you to cover topics like this? Hackers, big tech companies, and cybersecurity.

Parmy Olson: [00:02:14] With the book about 'Anonymous', I wasn't drawn necessarily to the art of hacking or cybersecurity itself. It was the fact that there was this underground community of people who were living their lives almost online, and they were trying to disrupt other companies; it was this new form of activism known as hacktivism. They were hacking companies to try and get a point across, and I was just so intrigued by these communities that were bubbling up. And a lot of times, what really fascinates me about tech is the people behind the technology. I spent almost three years living and working in Silicon Valley between 2012 and 2015, and I was just so struck by how people out there, where you are, Becky, are just living in the future, practically. And they have such big ambitions, sometimes of galactic proportions, crazy ideas. And that was what drew me, also, to this latest book was, it wasn't just AI, it was the fact that there was this quest to build AGI or artificial general intelligence. So it was almost this outlandish idea of building an AI system that's as smart as, or smarter than, human beings and can carry out the same kind of cognitive tasks that we can. And the idea behind this, for the two men I focused on in this book, these two AI builders, Demis Hassabis and Sam Altman, was that they believed this AI could solve all manner of problems. Which we can get into, but again, the real attraction for me to tech itself is the people in it who are these dreamers a lot of the time.

Keyana Corliss: [00:03:53] I went to a very techie school. I went to Carnegie Mellon here in the US. I'm going to age myself a little bit. Not as much as Becky, but, no I'm just kidding.

Rebecca Buckman: [00:04:04] Thanks, Keyana. Yeah.

Keyana Corliss: [00:04:05] Just kidding.

Parmy Olson: [00:04:06] Oh boy. I like this podcast.

Rebecca Buckman: [00:04:08] Right, we rip it here.

Keyana Corliss: [00:04:09] I remember this is from 2003 to 2007, and that was not a time when tech had the same sort of broad awareness that it has today. A lot of times, I think the rest of us kind of didn't understand what the obsession was. But I think that the type of people who are, like Steve Jobs, the people who are crazy enough to think they're going to change the world, do. You saw this real obsession, and I think a lot of people don't realize that AI and machine learning and all these things are actually not a new concept. They've been happening for years and years and years. They've just made it out into the mainstream, and obviously big tech and the focus on it has brought that to life. A lot of this book is about the power that big tech holds over AI.

Parmy Olson: [00:04:57] That is one of my main messages, yes. And the lack of governance.

Keyana Corliss: [00:05:01] And the lack of governance, exactly. And I think there was a lot of human-focused narrative telling this story in a way that I honestly hadn't seen or been told before. I want to ask you, how did this book come about? Was it from your reporting? What made you think to yourself, I need to write a book from this angle as opposed to- (These two guys.) Yeah, these two guys from a human storytelling-what brought that along?

Parmy Olson: [00:05:29] So after ChatGPT came out, it was just such an awe-inspiring moment for me. I know I'm probably talking it up a little bit too much, but genuinely, when I first started trying it out, I thought, this is really something that is going to be quite transformative. This is a tool that can quite easily, in some cases, replicate professional tasks, and that's going to have pretty big consequences. I just thought it was so interesting. Having spent a few years before reporting very heavily on DeepMind for the Wall Street Journal, I knew that that company had been trying for a long time to break away from Google, and I knew that there were concerns by the founders of that company about governance, about control of AI, and they failed to protect this technology, in a simplistic summary of the situation. And I also noticed, gosh, a really similar thing happened at OpenAI. This organization started as a nonprofit because they believed that they wanted to create AI for the sake of humanity and the benefit of humanity only. And they even said in their launch, 'since our research is free from financial obligations, we can better focus on a positive human impact'. Very noble goals, but again, they couldn't manage to keep that structure in place and they ended up aligning themselves with Microsoft. And I just thought it was so interesting, like hey, these new products basically come out of this sci-fi quest by people from both these companies to build this human level AI and do something really amazing for humanity. And in both cases, those quests just went completely off track and ended up enriching and extending the influence and wealth of these really huge companies.

Rebecca Buckman: [00:07:21] You mentioned the ethics angle of all this, and I think that's one of the main themes in the book, is how both these companies tried to set themselves up. I think many of us weren't aware of this until the OpenAI boardroom drama happened. I thought, oh wait a minute, they have this nonprofit arm. So tell us about what happened there. Why did these things go off the rails? Was it just greed? Was it the influence of Microsoft and Google, respectively? What happened here?

Parmy Olson: [00:07:47] I think greed is definitely part of it. But that is what running a business is about. You have to make money and you have a fiduciary duty to your shareholders to do that. Of course you do. But it's not just that, it's also size. It's the fact that these companies are so huge. There's this almost gravitational pull that DeepMind, which became part of Google, and OpenAI, which really closely aligned itself with Microsoft, just couldn't resist. This is the first time in history that organizations of this size have existed. I don't think even the Roman Empire or any previous empires touched so many people as, say, Google can today. So with OpenAI, of course, they started off as a nonprofit. They had this nonprofit board that has legal control over what happens at the organization. At DeepMind, which is perhaps less known, initially when they sold to Google, when they agreed to sell to Google, the founders said, we'll only sell to you if you let us set up an ethics board. OpenAI didn't exist at the time, but they wanted something very similar to that nonprofit: this ethics board would have legal control of AGI when they built it.

Parmy Olson: [00:08:57] Google said yes, but then Google turned into this conglomerate known as Alphabet, a couple of years after the acquisition. And they said, hey, you know what? Forget about the ethics board, we've got an even better opportunity for you. You guys can become an autonomous bet. Just like these other bets that we've got like Verily and Nest. And the founders of DeepMind really loved that idea. And they spent years talking to lawyers, drafting up legal documents to design this new organization they were going to become, which was going to be almost like a UN style non-governmental organization that was going to protect the AGI they were eventually going to build. And they got some pretty heavyweight names to agree to be on this board. They reached out to people like Al Gore and Barack Obama. Cut to a few years ago and Google just shut the whole thing down and said, you're not spinning out. And they were brought more tightly into Google.

Keyana Corliss: [00:09:59] We touched on greed for a second, but they actually took less money than they could have. Was it Elon who was going to give them 800?

Parmy Olson: [00:10:13] You're right, Keyana, because actually there was an effort by Elon Musk to buy DeepMind. He wanted to buy it in Tesla stock, but the founders said no. This is a really short part in the book, but that's right. And then Facebook and Mark Zuckerberg came along and offered them $800 million. And they were like, well, we want the ethics board. And Mark was like, no, I don't want a random board of people to have control of the technology. And so they actually did take a lower offer from Google, which was $650 million, so $150 million less, in order to have that agreement in place. In all of this, I don't see Machiavellian intentions by the people in this story, and there were a lot of noble intentions to start with. It's just the system they were caught up in.

Keyana Corliss: [00:10:59] Well, what's $150 million between friends, you know what I mean?

Parmy Olson: [00:11:03] Yeah, exactly.

Rebecca Buckman: [00:11:04] What I thought this book really opened my eyes to, is ethics in AI, what it really means. You have one sub-storyline in the book about the two female researchers at Google. They were really focused on the underlying data that these AI models were being trained on. Maybe talk to us about that, why that's so important and are we paying enough attention to that today?

Parmy Olson: [00:11:29] I don't think we are, and I don't think we ever have been, or at least the big tech companies aren't. And that's not just my intuition or what I've read, but what I've heard from people who work at these companies who have these kinds of jobs. But I think it's important to point out that these two researchers were women, because you're going to go after a research topic perhaps that has a personal impact on you. We as women notice these moments of bias a little bit more perhaps than the opposite sex, and so fair enough, that was how things played out. There are two women in particular who I focus on in the book, Timnit Gebru and Margaret Mitchell, they were both AI ethics co-leads at Google. And their job was really just to do research that checked the way that Google was doing its own research into building AI models and make sure that they were ethically sound and that they weren't going to have these unintended consequences of harming people when they were put into deployment. And what they noticed before a lot of other people, was that Google was building large language models. Large language models are basically what ChatGPT is. And they noticed that these things were being trained on biased data, many more pictures of men than women, or data that correlated certain jobs with men and certain jobs with women. And certain attributes with men, and certain maybe negative attributes with women, and they saw this as quite concerning.

Parmy Olson: [00:13:01] They wrote a paper about it. And that paper was incredibly prescient, because it also warned about the environmental costs of the massive amounts of computing power which were going to go into building these systems, which everybody's talking about now, by the way, the huge energy drain. They were talking about this several years ago. They also warned in this paper about misinformation and the fact that misinformation could be encoded in these models. Essentially, the paper was published, but there was a bit of a controversy about it, and both the researchers were fired from Google. The whole story is in the book, but it's led to a really big controversy, which I think is good. Perhaps not enough controversy, but some more discussion about the lack of ethics research in AI, particularly at big tech companies. Because these guys, they were a tiny team and they were often not only the only women in the room, but the only ethics researchers in a room full of product managers and people whose real priority was just making these things bigger and more powerful and more capable.

Keyana Corliss: [00:14:09] Becky and I have both been women in tech for our entire careers. I've been a woman in tech for my entire career, obviously. If you're not in tech, I don't think you realize how hard it is, as a woman, to stand up and call things out. Case in point: they were both fired.

Parmy Olson: [00:14:24] When they stood up and called things out, they would get emails saying they were not being cooperative enough. You get things like that.

Keyana Corliss: [00:14:30] For me, it was, if they were risking their careers to call that out, this was a very serious discussion that needed to be had. I thought that was really interesting.

Rebecca Buckman: [00:14:43] I was just in a meeting this morning. We're talking about, every job function in America is now being pushed to adopt AI in the name of efficiency in the name of, we can do this with fewer people or we can do these tasks-

Keyana Corliss: [00:14:56] I don't know, I still have to fold my kids' laundry. So as far as I'm concerned, AI has not come far enough.

Rebecca Buckman: [00:15:01] No, no. Or cooking dinner. That's still not happening, although we have DoorDash. For everyone now who is at least experimenting with ChatGPT and similar tools in their specific line of work, or being asked to do even more with it, how do we address that challenge of the fact that the underlying models may be fundamentally biased?

Parmy Olson: [00:15:23] How do we address that challenge? Well, first of all, I think we need to be aware of it, aware of the possibility that that can happen. As just consumers of the technology, if you use it, if you use a language model like ChatGPT or Claude or Perplexity, just be aware that maybe it's possible that some of the output from these systems won't be completely unbiased and just take it all with a pinch of salt. But more than anything, I think we just have to be careful about how these things get integrated into our processes. Whether it's legal firms using these to help themselves pull together portfolios of advice or consultants or banks, lending organizations, language models have this inherent vulnerability where they can be biased if they aren't trained on unbiased data, and most of the time they're not. There's been other research that's found that the bigger these language models get, the higher the risk that they can be biased.

Rebecca Buckman: [00:16:26] Which is counterintuitive because you'd think if they had less data, there'd be more problems. But more data means more problems.

Parmy Olson: [00:16:31] More data can mean more problems.

Keyana Corliss: [00:16:33] Mo' money, mo' problems.

Rebecca Buckman: [00:16:35] Right, exactly.

Parmy Olson: [00:16:38] And I haven't heard of a lot of research going into tackling that problem. The big effort right now is into creating models that have agentic properties so they can act as autonomous agents, or that can be more efficient so that they require less computing power. That in itself is a good direction to go in, sure, but I'd love to just see more attention being paid to the issue of bias and whether these things can entrench stereotypes in people.

Keyana Corliss: [00:17:05] This might be a tangential question, it might not be, and you write about this a lot in the book, that it almost seems like AI is at the mercy of these big tech companies. Where does that fit in to all of this?

Parmy Olson: [00:17:17] Yeah, that's a great point. You must have seen also, just in this past year, some of the most promising AI startups have been acqui-hired by large tech companies. Inflection was the first one that went to Microsoft. Adept was another one, that was started by a very well-known former OpenAI researcher who was, I think, tangentially involved in the invention of the transformer. Very, very promising startups. And then also Character AI was a big AI startup and that just recently went to Google. So you're seeing it everywhere. Any kind of company that is trying to build a foundation model or trying to build a new AI service, it's so expensive and they can't do it without the same kind of resources that big tech has. The only people who can do it are the big tech companies, really, and it does make me wonder what we're going to see happen to other companies like Mistral or Cohere or even Anthropic. Anthropic, also, broke away from OpenAI. They were started by people who split away because they were concerned about safety at OpenAI. They thought OpenAI had become too commercially focused, but even they had to take money from Google and Amazon in order to actually build what they were trying to build. To build large language models.

Rebecca Buckman: [00:18:39] In many ways, though, it's an important point because it's a little different than these past huge technology shifts, which you and I have both covered as journalists. If you think about the rise of mobile or the rise of web 1.0, it used to be that you needed really expensive servers from Sun in order to build new technology, and then the costs started coming down, but I feel like this is kind of a unique situation, and we haven't really been here before. It's never been this expensive. Many of these companies, they don't even fit the venture capital model.

Keyana Corliss: [00:19:10] You also talked about academia. It just doesn't have the money to do the research that is required.

Parmy Olson: [00:19:18] Yeah, that really blew my mind. Speaking to people who worked in academia who ended up joining big tech firms because, one, the pay is so much better, and two, they can't even do the research at a university where you only get 16 GPUs. I spoke to one professor who was in that exact situation, and then she ended up joining Samsung and was able to get access to thousands of GPUs. How else are you going to do cutting edge research unless you have access to a supercomputer, and you can if you join these companies. There's been this real shift in the focus of AI research, even academic research. One, because many of these researchers now hold dual appointments. They'll be with a university, but they'll also have a post at Google or Microsoft or be some kind of resident fellow. And their research is increasingly steered towards the interests of these companies rather than independent research ideas. So I think that's important, because it means that even how researchers measure the success of AI is determined by the interests of these large companies. There's less independent research into things like the well-being of people, justice, inclusion, those kinds of values. And academic research in AI now is much more geared towards size and capability and power and accuracy. Which are all fine in their own way, but the research is completely focused on that.

Parmy Olson: [00:20:53] The broader issue with big tech, I'll just add, because you brought up big tech earlier, and the other point I wanted to make is that, the issue with big tech is that it's untouchable. These companies, they put these products out, and yes they're very convenient and we all use them, but we have no choice in the matter because you can't live your life without using this technology. There was this great piece by Kashmir Hill, who's a New York Times writer, and when she was with Gizmodo, she did a diary of trying to live her life for a week without big tech. And it was a complete nightmare. She couldn't do anything without using anything from Amazon, Microsoft or Google or Apple. It's not like we can vote with our feet, and yet these companies are working in a regulatory vacuum. They have no rules governing them.

Keyana Corliss: [00:21:47] Besides sharing this podcast, Becky and I have a bunch of things in common. For example, we both love a good train wreck of a PR story, and I'm assuming we both like a good glass of wine. Actually, this podcast was created over a drink. You know we both share that. And we're both members of a comms and brand marketing expert community called Mixing Board.

Rebecca Buckman: [00:22:06] Yes, and thank you very much to Mixing Board for working to raise the value of our industry and for producing this podcast.

Keyana Corliss: [00:22:13] Mixing Board has a very cool, savvy way of tapping the collective networks of their super connected community so that organizations can find the right senior comms and marketing talent fast. If you are hiring for a full time role, or trying to find the exact right consultant and want the expert guidance for an extremely reasonable price, I could not recommend Mixing Board's Talent Network more. The way it works is that Mixing Board shares the opportunity with the community-and this community is incredible, you guys. It is a who's who-and asks their members to submit candidates that they think would be perfect for the position. Most of the time, these folks are folks they've worked with or they've directly known for years, and they will quickly share back a super qualified list of candidates and make connections where there's interest. Go to the Talent Network page at mixingboard.com for more info and mention 'Just Checking In' for a special rate. The talent that comes out of Mixing Board is incredible. It is a really great way to find top notch talent. So I encourage you guys all to go. For the record, if my ten-year-old daughter tried to write that piece, it would be the same. She also cannot live her life without big tech.

Rebecca Buckman: [00:23:22] The kids have it even worse because if there's no TikTok, then there's no life. But you bring up regulation, and that I think is another key point I want to discuss. You talked about the disparate amounts of funding available for these AI safety and ethics organizations compared to the AI research budgets at the big tech companies. What impact is this all going to have on regulation? You have some funny anecdotes in the book about how Sam Altman was able to apparently charm regulators in the United States, perhaps delaying some AI regulations for a while. How is this all going to play out? And will big tech be able to avoid this, or is there a reckoning coming?

Parmy Olson: [00:24:03] I don't really see a reckoning coming for them. I think they're going to have it quite good for at least a few years. There's a big 'I don't know' in my answer to that because we have the European Union's AI Act coming and it's not really going to make an impact perhaps for another year or so. And we don't know what kind of impact it's going to have on startups. Is it going to make it harder for startups to innovate? That's a big concern in the tech industry, and among startups here in Europe especially: is it going to hinder them? Is it going to make it harder for them to compete against US companies? One thing I would say is, it is good to see policymakers trying to write regulation. So, for example, the AI safety bill in California, that's currently on Gavin Newsom's desk. I was just in the Bay area a couple of weeks ago, and I was asking people about it, and generally the sense is that Gavin Newsom is not going to sign this bill and it's going to be vetoed for various political reasons. But I think it's really promising to see policymakers putting their energy into this topic and trying to write legislation, even if there was so much blowback from the industry about this bill. I'm sure you followed it, so many technologists have said it's just complete rubbish and it's stupid. But it's really good to see those efforts at least happening. There's also things happening in the courts. There's a really promising court case being brought by the city attorney's office in San Francisco against websites that distribute tools for making deepfake pornography. It's a big, big case, and they're the first that I know of, at least in the Western world, to be bringing a case like this. We don't just need to look to the federal government. There's stuff happening everywhere. It's the European Union and states and courts. So it's good to see that, and I hope we see a lot more in the next few years.

Keyana Corliss: [00:25:54] You mentioned you think they're going to have it good for a couple of years. I always cringe when I see some of these testimonies from big tech to lawmakers, because the knowledge gap between the people asking the questions and the people answering them is so clear. And I know there's a million people in the background, but how much do you think it matters, that sort of knowledge gap? You've got people trying to regulate something that the smartest humans on the face of the Earth are working on. How much does that play a role in regulation and being able to regulate this in the right way?

Rebecca Buckman: [00:26:33] I think that's a great point. Who was the series of tubes guy, Ted Stevens? Remember the now deceased senator from Alaska, I think, or former senator. And he was like, what is the internet? A series of tubes. So you have that on the one side and then genius Sam Altman on the other. It's not a fair fight.

Keyana Corliss: [00:26:48] I always just think of Zuckerberg saying, Senator, we sell ads.

Rebecca Buckman: [00:26:52] Yes, that too. That was the other famous (one). Yes, good point.

Parmy Olson: [00:26:56] So cringeworthy. I think that's such a good question because what you hear a lot from people is, oh, people in government, they don't understand how the tech works. But I don't think you need to understand it that well. You don't have to be a computer scientist to be a good regulator or at least a lawmaker. If you're a lawmaker or if you're working for the FTC or the DOJ. There are people with those skills who work for those organizations to do all the research. But for regulators, it's almost kind of like a bell curve. And as long as you're in the mid 80% of that bell curve, and you're not one of those people who thinks the internet is a series of tubes, or you don't know that Facebook sells ads, you just have to be somewhere in the middle, and you have to be a really good policymaker and a regulator. I don't think we should just dismiss what lawmakers and regulators are trying to do because they're not technologists necessarily, and they do have smart people working for them.

Rebecca Buckman: [00:27:59] It's in the very latter part of the book, the whole OpenAI boardroom drama.

Keyana Corliss: [00:28:04] Want to know about this, because here's the thing, Becky and I love some drama, and there just haven't been enough train wrecks lately. People have been keeping their stuff together, and I don't care for it.

Rebecca Buckman: [00:28:14] And we like scoop, you know what I mean? We like messy situations that spill out into the public domain.

Keyana Corliss: [00:28:20] Yeah, I want some drama.

Rebecca Buckman: [00:28:22] Were you already writing the book when that spilled into the public domain?

Parmy Olson: [00:28:26] I had already handed the book in, and I was out at the pub with some of my mom friends and stumbled out really late in the evening and checked my phone. There was a BBC news alert that said Sam Altman had been fired, and I just thought, oh no, that's my weekend ruined. So I had to basically email my editor and just say, look, I really think we should just include this. Quite important. And so I wrote about 5000 words, I think, over the next few days just trying to incorporate everything. Or maybe actually what happened was, if you recall, it was all unfolding over the weekend, right?

Keyana Corliss: [00:29:03] It happened on, I think it was, a Friday night.

Parmy Olson: [00:29:05] That's right, it was a Friday night, yeah.

Keyana Corliss: [00:29:08] It was a Friday night. And then I think it was a little bit of a shit show for a couple of days. And then I think he was reinstated on Tuesday or Wednesday or something?

Rebecca Buckman: [00:29:18] Well, they were all going to join Microsoft, right? And then it switched back.

Parmy Olson: [00:29:21] Well, they said that was the biggest bluff of all time, that they were going to join. No way were the people at OpenAI going to join the khaki-wearing, polo-shirt-wearing guys at Microsoft, no. It all happened over just a few days, and then I think after that I just quickly wrote what I could. I have to say, since then, since handing in the book, I've spoken to more people who were involved in what happened. And I've learned a little bit more, and I feel like I have a better sense of what actually happened. And so I'm going to be writing a piece hopefully in the next few months for Bloomberg, like a next chapter that talks a little bit more about what happened.

Keyana Corliss: [00:30:00] Wait, so we don't get to hear it here? We have to wait?

Rebecca Buckman: [00:30:03] Can you tease us a little bit? What's the takeaway?

Parmy Olson: [00:30:05] Genuinely, I wish I could tell you more, but I've only heard this from two sources, so I just need to speak to more people to corroborate everything.

Keyana Corliss: [00:30:12] That's enough, right?

Parmy Olson: [00:30:13] No, it's not.

Rebecca Buckman: [00:30:14] At the Journal we only needed one. If it was a good source, they'd let you go with it.

Parmy Olson: [00:30:21] Yeah, it does depend on who the source is. That's true.

Keyana Corliss: [00:30:31] Becky and I actually both have the same question. Why do you think that OpenAI is so much better known in the US than DeepMind is? I can guess maybe it's the commercial aspect of it, but I will tell you, unless you really work in tech, you don't know who DeepMind is in the US. And you could be anyone in the US and you probably know what OpenAI is. Why do you think there's such a large gap?

Parmy Olson: [00:30:57] Well, I think a big one is DeepMind is not in the US. It's a British company. And I really think that counts for a lot, not being an American company.

Keyana Corliss: [00:31:09] We are pretty self-absorbed. I will say that.

Rebecca Buckman: [00:31:13] And Parmy does live in the UK. I don't know if we've made that clear.

Parmy Olson: [00:31:15] I do live in the UK, but I did move from the UK to the US in 2012. I was at Forbes and I was doing a lot of reporting on WhatsApp at the time, and my editors were like, why do we care about WhatsApp? I'm like, because there's 200 million people using this app. Oh, they're outside the US. Nobody cares because it's outside the US. I'm saying this as someone who has lived in the US, I have dual citizenship, I grew up in the States. But it's kind of true. Unless it's happening on US soil, it's out of sight, out of mind a little bit. Also, DeepMind, and Demis himself, is just not as much of an extrovert as Sam Altman. For one, he doesn't make these big, bold statements that Sam does. Although, DeepMind did have some pretty big publicity wins, I guess, from a PR perspective. Like the AlphaGo tournament in China. I think the fact that they're in the UK, they're a research organization. Their PR was very tightly controlled; you guys should do a whole other episode on DeepMind PR. It was so tightly controlled.

Keyana Corliss: [00:32:21] My PR would drive me insane. Can we just talk to someone? I'm more of the transparent school of thought. I mean, I was at Databricks. We love transparency.

Parmy Olson: [00:32:33] Yeah, not like that at all.

Keyana Corliss: [00:32:35] They were very Palantir-esque.

Rebecca Buckman: [00:32:38] But they also got swallowed by Google. OpenAI had the "strategic partnership" or alliance with Microsoft, but DeepMind was moved into Google, and that might have been part of it. I know we want to get to some journalism topics, but there's so much to discuss with the AI stuff. I also had one last question, which was: we all know that AI is transformative. At the same time, at a couple of points during the book, you compare AI, at least in its early stages, to the autocorrect feature in Word or on your phone, sort of implying that some people, maybe it was Geoffrey Hinton, the famous guy in the UK who told us there was, I don't know, a 30% chance that AI was going to kill off humanity in the next couple of decades, might be overstating, do I want to say the ability of AI, but just overstating how powerful AI is. Do you believe that's true?

Parmy Olson: [00:33:30] Yes and no. I think that what's happened in the last two years really does represent a step change for a new kind of tech revolution that I think you could compare to the revolutions around desktop, around mobile, search, social media. I think this is really quite meaningful. And I do think that comes down to two big milestones. The invention of the transformer in 2017 at Google, which was just like this big step change for AI that made it possible to basically just exploit these new AI chips that were coming from companies like Nvidia. These GPUs made it possible to just take advantage of all that parallel processing power they had. And then there was ChatGPT itself and the generative AI models that have come out. These are AI systems that don't just process data but generate data. I think that's incredibly meaningful. These are systems that can hold a conversation with someone in the same way that a customer service agent would. And now Salesforce, just a few days ago, has announced Agentforce. So now they're releasing AI agents that don't just chat with you, but also apparently will carry out actions for you and potentially book things or file a complaint for you. I just think that's nuts.

Keyana Corliss: [00:34:50] They still won't do my laundry though, so I am uninterested.

Parmy Olson: [00:34:54] But I think where the real hype is, is around this sense that the changes are going to happen now or in the next quarter. And I've heard AI company CEOs say that, which I think is ludicrous. I think that's not going to happen. It's going to take several years, I think, for these systems to make people more productive or have any kind of measurable return on investment, but I do think they are going to be pretty transformative for a lot of industries. Entertainment especially, the creative arts, education. But it will take time.

Rebecca Buckman: [00:35:30] So now we have to get to our other favorite topic, which is journalism. And this is my pet peeve. But Keyana and I talk about this all the time, the changing face of journalism and what we can do to save it.

Keyana Corliss: [00:35:43] We're having a podcast. That's how we're saving it.

Parmy Olson: [00:35:45] Thank you for saving us, yeah.

Keyana Corliss: [00:35:46] We started a podcast, we're like OpenAI for saving journalism.

Rebecca Buckman: [00:35:50] Yeah, we're trying new distribution.

Parmy Olson: [00:35:52] We need all the help we can get as journalists, trust me.

Rebecca Buckman: [00:35:55] We have to find different ways to distribute that content. How has your job as a journalist and as a storyteller, how has that evolved from the beginning of your career, and how are you feeling about the direction, broadly, that journalism is headed in?

Parmy Olson: [00:36:10] That's a really good question. So my job has changed primarily just because I've worked for different companies and I've had different roles. So when I was at Forbes, I had a little bit more creative license to write with my own voice, particularly when I was writing for the web or my Forbes blog. It's a little different when you're writing for the magazine; you're a bit more tightly restricted. The Wall Street Journal was so different. You have to write with the Wall Street Journal voice every time. And Bloomberg has gone back a little bit to writing with more flair. Obviously, because I'm a columnist now, I can write with my own voice again, which has been really fun. I think the biggest change I've noticed has just been the whole clickbait thing, and this inability to resist the lure of making sensationalist headlines. All editors do it because we know how important traffic is, and being slaves to traffic has been the big issue for many years. I haven't really seen generative AI and large language models have an impact where I am, because I work for legacy news organizations that are quite well established, and they're not going to plug any of these systems into what they do anytime soon. I'm interested to see smaller news organizations popping up that are using AI to write stories, and local news organizations and little web operations popping up. Nothing's really become big yet, but I think that's interesting. I don't know that it's necessarily going to displace the rest of us journalists. It might just create more noise. That's where I see generative AI going is that, maybe it won't displace as many people as we think. It will just create a lot more noise in our information ecosystem. We're just going to have to get better at filtering it out.

Keyana Corliss: [00:38:11] Hold on, there's going to be more noise? I don't know if I can handle more noise.

Parmy Olson: [00:38:16] We must have a limit. I don't know, that's just what my gut tells me is that we're just going to see more and more noise and shorter and shorter videos and shorter and shorter attention spans.

Keyana Corliss: [00:38:25] Okay, let's end this on a happy note. Tell us, what is your favorite story or thing that you have ever reported on? Do you have one?

Parmy Olson: [00:38:35] Yeah, that would definitely be when I wrote a profile on the founders of WhatsApp for the Forbes Billionaires issue, and I was really struggling to convince my editors that the founders of WhatsApp were probably paper billionaires because they had raised so much money. I was the first journalist to speak to them and get a big interview. And then one night, randomly, while I was in London (this is when I was living in San Francisco, but I was visiting London), I got a news alert saying that Facebook had bought WhatsApp for $19 billion. The founders hadn't told me that this was going to happen. So I just found out through the announcement. But when I started talking to them again, they gave me some more details I could use in the story, including a photograph of Jan Koum, the founder of WhatsApp, signing the papers for the Facebook deal on the door of the place where he used to get food stamps with his mom.

Keyana Corliss: [00:39:31] Oh, that's awesome.

Parmy Olson: [00:39:34] This is meaningful because a couple of months earlier, he had shown me the building, we'd gone for a walk around, and he had told me his whole life story of moving from Ukraine, and it had been quite a difficult upbringing. And that was just an incredible moment. I had no idea that deal was going to happen. And not only had they, yes, finally been proven to my editors to be billionaires.

Keyana Corliss: [00:39:56] "See? I told you so."

Parmy Olson: [00:39:58] But we had this lovely photo and it was a really, really fun story to pull together at the last minute.

Keyana Corliss: [00:40:05] That is awesome. Well, thank you so much for joining our little podcast. We really appreciate it.

Parmy Olson: [00:40:11] My pleasure, this was fun.

Keyana Corliss: [00:40:12] 'Supremacy' is out, go buy it, it is so good. And you don't have to be in comms or tech to really appreciate the stories. I think, Parmy, you did an awesome job explaining some of the really, really difficult concepts in there. I think everyone should read it, so go get it.

Rebecca Buckman: [00:40:30] Don't invest in another AI company until you read this book, that's my advice. All right. Thank you so much, best of luck.

Parmy Olson: [00:40:37] Thank you guys, thank you.

Keyana Corliss: [00:40:38] She's just kidding. She's not supposed to give investment advice.

Rebecca Buckman: [00:40:44] So if SBS Comms sounds familiar, it's because you might have seen them recently listed as a top five most innovative company in Fast Company's first-ever PR and Brand Strategies category. They're one of the tech industry's hottest agencies that's attracted the attention of companies like American Express, Cloudflare, GitHub, Flexport and more. SBS Comms embraces a modern ethos in technology comms, shedding outdated strategies for progress and results. SBS works across industries and with companies at all stages of growth, from industry leaders to those building tomorrow's consequential breakthroughs such as Air Company, Runway, Astro Forge, Versal, Light Matter and more. You can learn more about SBS at www.sbscomms.com, that's a lot of comms, or by checking out their very active Instagram account, where they post weekly roundups of media hits at @sbscomms. Just Checking In is produced by Astronomic Audio and underwritten by Mixing Board, a curated community of the most sought-after communications and brand marketing leaders.

Keyana Corliss: [00:41:48] Thanks for listening to Just Checking In. Follow us at @keyanacorliss and @rebeccabuckman.
