From Bloomberg News and iHeartRadio, it's The Big Take. I'm Wes Kasova. Today: ChatGPT keeps getting better, but is that a good thing? ChatGPT is what's called a large language model. It has the ability to generate human-like responses to a wide range of questions and prompts, and its capabilities have been praised by many as a major breakthrough in the field of artificial intelligence. However, there are also those who have raised concerns about the potential risks and drawbacks of ChatGPT. Some worry that it could be used to spread misinformation or propaganda, or that it could be used to manipulate public opinion in dangerous ways. It's Ava. Oh hey, Ava, what's up? You should tell listeners you had help with that intro. Hey, I was just getting to that. Usually I write the intros to the show, but today everything I said just now was written by ChatGPT. I asked the bot to write an intro for a podcast episode about ChatGPT, and that's what it spit out.
And to be honest, it kind of freaked me out a little bit. I mean, it's a little stiff, not as sparkling as my usual prose, if I do say so, but it still made me fear for my job. But Ava... Yes, Wes? You should tell listeners that you're not a person, are you? Well, you've got me there. Ava, as you might have guessed, is herself an AI-generated voice, and that technology, too, is getting closer and closer to sounding like us. The point, of course, is that it's becoming a lot harder these days to tell what's real and what's not, and what's a person and what's a machine. Fortunately, we have some real, live humans here to dig into all these questions: Bloomberg tech reporters Dina Bass in Seattle and Rachel Metz in San Francisco, and Parmy Olson, a Bloomberg Opinion columnist in London. They've all been covering the rapid rise of ChatGPT. Rachel, we're all hearing a lot about ChatGPT and the company, OpenAI, that created it. Microsoft is using OpenAI's technology in its products, and other companies like Google are scrambling to catch up. Can you tell us, what is ChatGPT, and how does it fit into the world of artificial intelligence, or AI? Well, at its most basic, ChatGPT is a chatbot. You ask it a question, you are asking it something, and it's going to
give you something back. We've seen that before, right? Either you talk to something like an Alexa, or you can type to, like, a customer service chatbot on a website. But this is different in that it's generative AI. So it's not working quite the same way as the chatbots of yore, so to speak, and that's in part because it's been trained on just a massive amount of data from the internet. When you say generative AI, what do you mean by that? Basically, you're giving it some input, which might be a question or a command, like write me a poem about Hello Kitty, and it's going to give you a response. And it's not pulling it from a database or something like that. It's sort of coming up with it wholly anew, which also means each time you ask that kind of question, you'll get a different answer, and they might be a little weird. They might sound very factually correct but be a little bit skewed. But in a lot of cases it's actually interesting information that could be useful. Dina, where is the bot actually drawing this information from in order to form an answer that reads like a person wrote it? So basically, what ChatGPT is doing is drawing from the whole internet. OpenAI basically scanned large volumes of internet content from all sorts of places, including social media, Reddit, various web pages, Wikipedia, and as a result, it gets a lot of different points of view. It uses that text in an artificial intelligence model that it's created, and it draws on what it has, quote unquote, learned, but not really, to create sort of a mimicry of human speech that it learned from how people talk on the internet. You can see immediately what the benefits and the problems of that might be. So we're getting really interesting things, like people who asked it to generate Seinfeld scripts or cocktail recipes.
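To make that input-and-response loop concrete, here is a minimal sketch of the kind of API call developers were making against these models around the time of this episode. It assumes the pre-1.0 version of OpenAI's Python library; the model name and parameters are illustrative, and the interface has since changed.

```python
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder, not a real key

# Ask the model the kind of open-ended question Rachel describes.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # the model behind ChatGPT at launch
    messages=[
        {"role": "user", "content": "Write me a poem about Hello Kitty."}
    ],
    temperature=1.0,  # sampling randomness: the same prompt can come back
                      # with a different poem every time
)

print(response["choices"][0]["message"]["content"])
```

Run it twice and you will likely get two different poems, which is the "wholly anew" behavior described above.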
But we also get misinformation, things that are incorrect, things that can be abusive or not very nice, things that can be creepy. One of the problems is when you get something that's wrong, ChatGPT tends to state it with the utmost confidence, so it's not going to flag to you that this might not be right. This is something that students who seized upon ChatGPT immediately for writing papers found out, to their peril. Parmy, as Dina was saying, we're seeing all these instances, from students to companies, of people starting to use ChatGPT and starting to see this as a way of assimilating a large amount of information, not just coming up with funny poems. What are some of the ways that business is already getting its hands on this? Yeah, we're in a really interesting moment for how we use these models, and I would call it a sort of honeymoon period. So when ChatGPT was first announced a few months ago, there were lots of screenshots shared on Twitter of people using it in fun ways, right? How to remove a peanut butter sandwich from a VCR, in the style of the King James Bible. That was my favorite out of all of them. But after that comes the companies trying to figure out, how do we make money from this? So just recently OpenAI released its latest, much more powerful model, called GPT-4. If you think of the chatbot as being like a car, these models that Dina described, they're like the engine, and each engine is getting more powerful, allowing the car to go faster and do more things. And this new GPT-4 is more accurate, according to OpenAI, much more humanlike in its responses. And in the announcement, OpenAI gave some examples of companies that are actually using GPT-4. One example was Morgan Stanley.
So Morgan Stanley has been using GPT-4 since last year, and they said that about two hundred of their wealth management staff had been using a chatbot, their own proprietary chatbot built on the GPT-4 model and trained on thousands of papers written by their analysts, that could just give them answers more quickly than they would get if they had to read that research themselves. And so they're saying that what would take the advisors thirty minutes to do, they can now do in seconds. So the value here is that essentially companies can wring out more productivity from their workers than they did before. So, you know, there's a lot of talk about these systems replacing professional jobs. I don't think they're actually going to do that. I think they're just going to make professional workers have to do more in less time. So Parmy, I want to jump in with one example and one question for you. Rachel and I were talking to Greg Brockman, who's the president of OpenAI. He mentioned another use case, similar to Morgan Stanley summarizing all of these papers, which was the US tax code. He flagged for us that he thought GPT-4 was super useful for summarizing the very complicated US tax code and telling you what policy, what exemption, is relevant for you. Probably would have been better if we had that a month ago, before people started filing, but there we go. But I wanted to ask you, on your point about replacing jobs versus increasing productivity, I hear the same thing from Microsoft, which is pushing this: it'll make you more productive, it'll free you from all the groundwork. But aren't there people at entry-level jobs that do that sort of thing, that do the summarizing for Morgan Stanley? Aren't there paralegals, computer programmers as well? Aren't there entry-level jobs that may get replaced if we can't figure out how to upskill those people? I think that's a good possibility. I don't think there are going to be large swaths of job losses, but there will be job losses. One hundred percent, people will be replaced by this. But it kind of makes
me think of the translation industry. So professional translators five, ten years ago were really worried about being replaced by Google Translate and DeepL and these other AI translation software tools. But in the end, there weren't a lot of job losses in the translation industry. They were just expected to translate more words. So instead of translating two thousand words in a day, they were expected to translate four thousand words in a day. And I was speaking to some people in the industry just recently, because I thought that this was quite a useful parallel to figure out where we might be going with this more broadly, and there weren't a lot of job losses that they knew of, just anecdotally. So I think there will be some, and I think you're exactly right, Dina, that it will be these kinds of entry-level roles, and I think people are just going to have to rethink what people do in those kinds of roles when they do come in as interns or kind of junior workers. Rachel, Parmy has described the way some of these companies are using GPT for important functions, things customers would rely on for critical information.
But as Dina said earlier, the bot will spit out an answer with supreme confidence, and what we've learned is that sometimes it's just wrong. Like, if it doesn't know the truth, it'll just kind of make it up. And isn't that a big problem if you're using it for business and you can't tell whether what the bot is spitting out is real or not? Yeah, I think that's a huge concern. And for me personally, I would be really reluctant to trust it at this point with any really important applications in my life, like personal finances. I would personally want to know if my bank is using it. I mean, maybe I'm a little nerdier than the average person, but I'd want to know as specifically as, what model are you using to get this information from? How much information can you give me about the AI model you're using, and the AI company's service you're using? Just to get a handle on what might be leading to the answers. We have to remember that these are essentially statistical software. I mean, all the word generation, I should say, is being generated statistically. It's not like a machine that's pulling from two different pockets, one that's, here are answers to things that I totally know, so I'll give you an answer from here, versus, I don't know, so I'm going to make it up over here. It's really a jumble based on what it has been trained on and what it thinks is likely to be the answer you want, based on the input you gave it.
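Dina's "statistical software" point can be shown in miniature. Here is a toy sketch, with a made-up three-word vocabulary and invented probabilities, of how a language model picks each next word by sampling from a distribution; note that nothing in the loop checks whether the chosen word is true.

```python
import random

# Invented next-word probabilities after the prompt "The capital of France is".
# A real model computes a distribution over tens of thousands of tokens
# using billions of learned weights; the principle is the same.
next_word_probs = {
    "Paris": 0.90,   # likely, because the training text usually says so
    "Lyon": 0.07,    # plausible-sounding but wrong
    "cheese": 0.03,  # unlikely, but never impossible
}

def sample_next_word(probs):
    # Pick one word in proportion to its probability. There is no separate
    # pocket of "things I know" versus "things I'm making up."
    words = list(probs)
    weights = list(probs.values())
    return random.choices(words, weights=weights, k=1)[0]

for _ in range(5):
    print(sample_next_word(next_word_probs))
# Most runs print "Paris", but occasionally a confident wrong answer
# comes out, which is exactly the failure mode discussed here.
```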
And yeah, that should make people nervous if it's being used to do things like deal with their personal money or their company's money. GPT-4, technically, is what you would call a multimodal model, which means it's not just generating text. It can also do things like computer vision tasks, which you can't do right now in, like, the AI playground that OpenAI has available to the average user. Like, here's a picture, tell me what the caption could be. We couldn't do that. And we should say this was the demo that OpenAI did when it released GPT-4, the new version, right? Because one of the really interesting parts of the demo was that you could point your phone camera at the inside of your fridge, and ChatGPT will now say what recipes you can make based on the contents of your fridge. Okay, but let's keep in mind those recipes might be terrible. Yeah, you know, it's like, or, what cocktails should I make? There's an app that they're suggesting for people with blindness or low vision as well. It's not just cocktails. Yeah, I know. I've known about that app for a couple of years, and, you know, I don't know if they were using machine vision recently or if they've always used humans. You know, for a lot of that, they've just had to use humans to help. But it's a super cool app. So how does that app work? In the little bit that I saw, which I thought was really interesting, and fingers crossed that it actually can work well and be helpful for people, essentially, what you would do with the app, if you are blind or low vision, is you might take a picture of, say, two slightly different shirts that you've laid out on your bed. Maybe one is blue with polka dots and the other is green with polka dots, let's say, and you could send in the picture and say, which is the blue one, question mark, and it could tell you. You know, it would analyze the image, and it would analyze your text, and it could give you an answer. And that kind of thing could be super useful and really just a speedy way to improve the day of somebody who has a hard time doing that kind of task, one that a lot of us think is super simple. We don't even think about it, but it's hard for somebody who can't see that well.
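Image input like this was only a demo when the episode aired, but here is a sketch of the shape such a request later took in OpenAI's Python library (version 1 and later). The model name and image URL are placeholder assumptions, not a description of the specific app discussed above.

```python
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# Send a picture plus a question, the interaction Rachel describes.
response = client.chat.completions.create(
    model="gpt-4-vision-preview",  # illustrative vision-capable model
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "Which of these two shirts is the blue one?"},
            {"type": "image_url",
             "image_url": {"url": "https://example.com/two-shirts.jpg"}},
        ],
    }],
)

print(response.choices[0].message.content)
```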
So the consumer applications that we're talking about are ChatGPT and the Microsoft Bing search, which is based on the new version, GPT-4, so that's a web search. And then there's the thing Morgan Stanley is doing, where Microsoft and other companies are now putting these kinds of artificial intelligence capabilities in their business products. So the business products are learning from a more confined set of information. They have the ability to learn certain things from the internet, but they're learning from the company's data. So in the Morgan Stanley case, it's the Morgan Stanley reports that analysts write, that they would normally send to their financial advisors. In the case of Microsoft's customer service software or software for salespeople, it's learning from each company's database and set of information and set of interactions with customers. That's a more confined set of things, not the entire universe of the internet, so you have a little bit less potential for it to go awry, or to do something weird, or to get something completely wrong. But the stakes are higher in a monetary, material company application.
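Dina's description of tools "learning from the company's data" is often implemented as retrieval: fetch the relevant internal documents first, then hand only those to the model. The sketch below shows that general pattern, not what Morgan Stanley or Microsoft actually built; the documents, model name, and pre-1.0 OpenAI interface are all placeholders.

```python
import openai

# Stand-ins for a company's internal knowledge base.
INTERNAL_DOCS = [
    "2023 outlook: our analysts expect rate volatility to persist ...",
    "Client FAQ: wire transfers settle within one business day ...",
    "Research note: semiconductor demand is recovering in Q3 ...",
]

def search_internal_docs(question: str, k: int = 2) -> list[str]:
    # Naive keyword-overlap retrieval; real systems use a search index.
    q_words = set(question.lower().split())
    return sorted(INTERNAL_DOCS,
                  key=lambda d: len(q_words & set(d.lower().split())),
                  reverse=True)[:k]

def answer_from_company_data(question: str) -> str:
    context = "\n\n".join(search_internal_docs(question))
    prompt = ("Answer using ONLY these excerpts. If they don't contain "
              f"the answer, say so.\n\n{context}\n\nQuestion: {question}")
    response = openai.ChatCompletion.create(
        model="gpt-4",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # keep answers close to the supplied text
    )
    return response["choices"][0]["message"]["content"]
```

Confining the model this way reduces, but does not eliminate, the chance of it inventing an answer, which is why the stakes question that follows still matters.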
There are some pretty high stakes. So I asked Microsoft CEO Satya Nadella this question when they unveiled some of the business tools, and his comment to me was, well, the humans make mistakes too, so we need to get to a point where, you know, the mistakes from the machine are lesser. But it is something to be aware of. The mistakes in the consumer chat are different also, because at least in some of the applications, if you take, for example, the Bing search engine that uses the OpenAI product, there are sources. What Microsoft has done, to help people and to flag where the information is coming from, is to cite the way a high schooler would in a term paper, and those links are clickable. You can go in and see, this piece of information is coming from this news source, and maybe I don't trust it, or I'm reading the article and actually the article doesn't say what Bing says it does. I had that happen a few times when I was asking Bing about the US shootdowns of the unidentified flying objects. Bing kept getting parts of it wrong, and I could see where the error was coming from in the article. The concern people have, and it's a valid one, is, will everyone do that? Or will they just look at the answer, and what if the answer is wrong and they're not going to click
into the sources? I just think that humans, by and large, have this tendency to believe the algorithm more than they do other humans. So we put a lot of trust in machines collectively, which is why I think Satya Nadella's comment to you rings a little bit hollow. The bar for machine truthfulness is really high, and one of the things that makes this quite complex for people who use the systems is that we don't know how often GPT-4 or ChatGPT actually makes mistakes. AI and computer scientists have been referring to this as its hallucination rate. I've asked OpenAI multiple times, what's the hallucination rate for GPT-3.5? They won't say. The only thing they did say about GPT-4 was that it's forty percent more likely to produce factual responses than GPT-3.5. Okay, that's great, so it's a little bit more factual. But we don't even know what the baseline was, so it doesn't really tell us anything: if GPT-3.5 were factual half the time, forty percent more likely would mean seventy percent; if it were factual a tenth of the time, it would mean only fourteen percent. And the problem with AI systems generally is that the only way to really know how well they'll perform is when you release them into the wild. So we are all effectively the guinea pigs for this stuff. When these things get things wrong, that's how the designers of these will know. But we also have to pay the price, and we're not even sure what that price will be. You mentioned a couple of times this term, hallucination rate. That's a great term. What exactly does that mean? Just confidently presenting as fact something that is not factual. Or fluent hogwash, that's the other one I've heard. When we come back... you know what, Ava,
why don't you take it from here? More from Parmy, Rachel, and Dina after the break. Parmy, we talked about some of the businesses that are using ChatGPT, and one of the people getting involved is, of course, Elon Musk. What exactly is he trying to do, and how serious a venture is this? Because we all know that Elon Musk likes Elon Musk's name to be in the news. Yeah, there are a lot of different shiny objects that Elon Musk likes to go after, and the latest one does appear to be AI and language models, multimodal models. So there was a report in The Information recently that said that he was reaching out to artificial intelligence researchers and trying to form a new research lab, and the goal was to build this alternative to ChatGPT. Musk does have a storied history of involvement in AI development. He was an early backer of DeepMind, which is the AI lab that was eventually bought by Google. He is one of the co-founders of OpenAI, which is DeepMind's rival. So he really has his fingers in a lot of very important pies in AI, and in recent years he has grown a little bit disenchanted with how these companies have pursued artificial intelligence research. He has complained that OpenAI was training AI to be woke. So we don't know what's going on inside his head, but it may well be that he wants to try and build an alternative language model chatbot system that isn't constrained by the same kinds of content filters and policies that OpenAI has on its chatbot. Dina, I think when most people hear about AI or ChatGPT, they think they're the same thing. But there are a lot of AI products out there now with different uses. Can you give us, like, three examples of AI product categories,
what their products do, and what they're used for? What's interesting is GPT, even though we've talked about it as language, spawned a bunch of different things. So we've talked a lot about chatbots; it spawned a bunch of chatbots. OpenAI, when they were using it, also discovered it's really good at computer programming, so that's also another kind of language generation, but a completely different language. They were using it and they found that it did a very serviceable job with computer code, so they created their own version of a computer coding tool that they call Codex. Microsoft then reformulated that further into something called Copilot, which may be one of the most widely used corporate applications of this kind of AI already. There's a fair number of computer programmers that are using it, at least for some of those sort of rote programming tasks, things that, you know, were kind of a bother, annoying to do. It also spawned image generation. So earlier, last year, before ChatGPT, everybody was fixated on OpenAI's DALL-E. That also came out of the GPT model, and that's a product where you can type in a couple of words of text and it will generate a picture for you. So if you want a grilled cheese clock in the style of Picasso, it will make that for you. Artists have been very unhappy with that and the copycats of it, you know, other models that do that, because artists are worried that it's using their artworks without permission. And, you know, various of these models are using artists' artworks without permission in order to generate new art. The other thing that
we were talking about before is search. This chatbot function and its ability to answer questions, whether it's doing it correctly or incorrectly, has reinvigorated the search battle. Google has been so dominant in search for so long, and Microsoft's Bing was essentially, you know, even though it has a couple percentage points of share, pretty dead in the water in terms of the competitive battle. People have been wondering for years whether Microsoft would sell it or spin it off. That competitive battle is so dead that when Microsoft announced this new AI-powered Bing, I couldn't even find a research analyst who could give me market share data on this, because everybody had pulled their analysts off of covering this market. But now, all of a sudden, search seems like it's potentially an open category again. Even if we're not thinking Google's going to lose its dominant position, Google's panicked enough that they tried to kind of rush out their own reaction to the Bing bot, which they're calling Bard. And even though they announced it a couple days before Microsoft's event, it's still not really publicly available. We don't
really have any sense of what it looks like. Yeah, one of the concerns about using these systems for search engines goes back to what we were discussing earlier, that we don't know how often these systems make factual mistakes, and with search engines, you rely on them to give you facts. So one of the arguments that some computer scientists are making now is that these systems, as magical and remarkable and humanlike as they are, are just not fit for purpose when it comes to search engines. Because these are tools that are going to be used by millions, hundreds of millions, potentially billions of people. You cannot monitor everything that these systems are saying at that scale. And so, you know, could we be facing another misinformation epidemic? When you think about people who are easily persuaded, or kids, using these systems and taking everything that these tools are telling them at face value, that could really be a problem. And you know, it's funny: for years we've had a misinformation problem from social media. Maybe now we could be entering an age of misinformation by algorithm. Because, as Dina was saying earlier, Google was caught on the back foot. It's racing to get out this bot. As Bloomberg News has reported, they're trying to put generative AI into all their products, rushing to unleash these tools to the public. But are all the proper safeguards really there? Yeah, that's something that I actually have
been thinking about a lot lately. This speed to market makes me think about, like, can you think of another time in the last twenty years or so when a tech company put out a product this way? Take the iPhone, for instance. When the iPhone first came out, it wasn't like, here's a thing that can do a bunch of things moderately well, you know, a decent amount of the time, but let us know how it does and we'll improve it over time, and it's going to be okay. No, it had a very limited set of features. It didn't even have an App Store. You know, I don't think it even had a flashlight at that point. And, you know, it could make phone calls and do a couple other things, get on the internet. You weren't expected to be the beta tester of the product and help improve it as it went. And I feel like it's really interesting that consumers are at this point expected to do that. And I'm wondering if consumers are going to get wary and/or weary of that, and if so, how long that's going to take. It just seems like a lot to ask of somebody, doesn't it? I mean, Rachel, one of the big problems is that, as Dina was saying, GPT can generate a photo that looks real, this idea of things that are fake but look very, very real. How does it deal with that problem of fakery? We've heard about deepfakes,
but this is like a whole different level of that. Yeah, and the fidelity of these images is getting better. And furthermore, there are other AI programs you can also use to, in a sense, upscale the fidelity of an image. I've played around with those a little bit and they're pretty remarkable, you know, for really good uses, like, I have a sort of fuzzy old picture of a family member and I want to make it look crisper. But you could also use it to make something that's fake look more realistic, or you could do it directly with one of these programs. I think it's just going to be tricky, and it might be one of those things kind of like computer viruses and infosec, where there's just a constant sort of battle to figure out what's real versus what's fake and stay ahead of it, and then it'll catch up, and then people will try to get ahead of it. It might continue apace for a while. These are very good tools for creating misinformation and disinformation at scale. One of the things we're seeing, and you often see this in technology, is that often the first at-scale use for some new technology is pornography. You're already seeing that with these kinds of tools and deepfake pornography, where celebrities' heads are being put on other people's bodies doing things that the celebrities did not do. Social media companies are having to deal with removing those, and with removing fake voices. One of the big questions for the companies creating these AI tools is, what can they do to clearly flag, with watermarks on images or some sort of digital flag in text, that something is AI-generated, so that people are aware of what they're dealing with? There hasn't been much success yet doing that in a reliable way, because, as Rachel says, people can always outsmart those, and you're playing a bit of a game of whack-a-mole. We'll be right back. Parmy, as you said earlier,
we've seen lots of different versions of AI come along. How much of this is, I don't quite want to call it a fad, but just the latest thing? And how useful is this to build something even better in the future? It's definitely not a fad. This is something that people are already using. ChatGPT is the fastest-growing consumer internet tool in history. More than one hundred million people signed up for it in just a few months, and they're not just playing around with it, they're actually using it for work. So I think it's going to fundamentally change a lot of things in terms of the way professional workers work in all sorts of industries. You know, not to be a Debbie Downer, but I think this is also going to have quite a negative impact on the creative industry. And I think, you know, when you think about writing or generating images, these systems, they're not inherently creative, but they can do things that creative people do. And I have heard from professionals who are using this tool to, for example, generate a video script, and the reason they do that, they say to me, is it removes friction from the creative process and it solves the blank-page syndrome problem. But, you know, if you think about the creative process throughout human history, that's what brings value to artwork, and what brings value to literature, is the work that a human being puts into looking for that word that they couldn't think of when their mind is a vacuum. And I think it's a little bit sad that we might lose some of that when machines come in and solve that problem for us. It's much harder to quantify and track that kind of impact, but it is going to happen, and it just leaves me a little bit melancholy. Dina, when you look ahead with the kind of AI that we've been talking about today, what are the next things that you think we should be looking for? We should keep looking for further penetration into corporate use cases, and whether these things really work.
We were talking about, is this a fad or not? It's not a fad, but we have to see how useful it actually is for people, and where it runs aground, or where it doesn't quite work technologically. I think we're going to see, in the short term, some models for video creation. We've talked about image creation, but there are companies working on video creation as well. And we're going to see regulation. That's one of the things we have talked about. The European Union is looking at how to regulate this. States in the US are looking at how to regulate it. Congress has looked at it, but always seems to be moving a little bit more slowly. Some companies want regulation, but only as far as they want, not beyond what they want to see regulated. They want to sort of determine how far it goes, and no further. Those are going to be issues. There's discussion, and we've written about it here at Bloomberg, about the climate impacts and the carbon impact. One of the things we've talked about a lot, Parmy mentioned, is how much better these models are getting. The reason they're getting better is because they're getting bigger and bigger, and they're running on ever more powerful supercomputers inside of cloud data centers that are run by Microsoft and Google and Amazon, and the carbon output of those things is not insignificant. So there's a little bit of a call for, look, people need to actually reckon with what the costs of these things are and whether the use cases are worth the carbon output. Some of the things we've been talking about are iterating, getting better at what you can already do, but there are some unexpected things that we're not even thinking about that this technology may suddenly be used for. Using AI to spot and understand
certain medical conditions. Cancer has been one that's been talked about a lot. I think that that is actually a great use of the technology. One thing that I hope we'll see more of is data sets that are more varied, and better ways that companies come up with to fix a lot of the problems that they have with societal biases, which pop up all over the place when you're using these machines. Because you can ask a question and get gender biases, for instance, very easily and matter-of-factly in the responses. But unless you go another step and sort of interrogate the model, like, why did you give me this answer, it's not gonna really give you much more about why it gave you that kind of answer. A lot of people will just accept its answer as fact. This happened to me a couple of times. I was asking for nicknames for little kids, like for toddlers, for girls, and then nicknames for boys, and GPT-4 gave me some really sort of stereotypical, you know, like Rascal and Champ kind of names for boys, and, you know, Sweet Pea, Jellybean kind of names for girls. I didn't say, why did you give me these biased names? I just said, why did you suggest X, Y, and Z for a boy and A, B, and C for a girl? And then it said, ah, those were, you know, those were biased, and here are some more neutral names. To make what Rachel's saying really concrete on the potential negative impact of bias: Rachel was talking about, you know, cancer applications, and it can work if they get it working across all populations. So here's the very specific problem. You know, take breast cancer, because people are looking at ways to scan mammograms for breast cancer, or skin cancer.
So both of those conditions can present differently in people of color. So the way breast cancer can look in a Black person who has breasts can be different. And if your data set is insufficiently diverse, if it is trained mostly on people who are white, people who are Western, Northern European, it may not pick up those breast cancers in people with darker skin. And that's very dangerous, because you wind up switching a human scanner for breast cancer for an AI one that is insufficiently prepared to find breast cancer in all people. Literally, people's lives are at stake if you do that wrong. This technology is changing so rapidly. It was only a few months ago that it was even released out into the public, and we all became aware of it. It's happening right before our eyes,
and we all have access to it. What should normal people do when they're approaching it? How should they use it? What should they be careful about? Like, what are good rules of the road in using this technology? Check everything. Yeah, I would approach it cautiously. Think about how much you want to trust it with personal data. You may want to not give these systems personal information; I wouldn't at this point. And definitely look with some skepticism at the results you get, and feel free, especially if it's a chatbot specifically, to ask follow-up questions, whether or not you think the information you're getting from it is accurate. You can always type something like, why did you give me that? Why is that the right answer? And especially with, like, GPT-4, it'll text back, you know, here is why I think that. And then you might say, wow, this is completely off base. It did this to me with a few different things. So then you go, okay, I'm not going to trust you for this task. Or it might give you some really good reasoning that you can then double-check and say, okay, this was actually really helpful to me. But definitely check everything.
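Rachel's "ask it why" advice maps directly onto how the chat APIs work: the model has no memory between calls, so a follow-up question is made by resending the whole exchange. A minimal sketch, again assuming the pre-1.0 openai Python package, with the model name and prompts as illustrative placeholders:

```python
import openai

# First question.
history = [{"role": "user", "content": "Suggest a nickname for a toddler."}]
first = openai.ChatCompletion.create(model="gpt-4", messages=history)
answer = first["choices"][0]["message"]["content"]
print(answer)

# Append the model's answer to the history, then challenge it, exactly
# as Rachel describes doing in the chat window.
history.append({"role": "assistant", "content": answer})
history.append({"role": "user",
                "content": "Why did you suggest that? Walk me through it."})
second = openai.ChatCompletion.create(model="gpt-4", messages=history)
print(second["choices"][0]["message"]["content"])
```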
Parmy, Rachel, Dina, thanks for coming on the show. Thank you. Thank you. Thanks. Thanks for listening to us here at The Big Take. It's a daily podcast from Bloomberg and iHeartRadio. For more shows from iHeartRadio, visit the iHeartRadio app, Apple Podcasts, or wherever you listen. And we'd love to hear from you: email us with questions or comments at BigTake@Bloomberg.net. The supervising producer of The Big Take is Vicki Vergolina. Our senior producer is Kathryn Fink. Our producers are Michael Falero and Mo Barrow. Raphael Amsili is our engineer. Original music by Leo Sidran. I'm Ava, a voice avatar created by WellSaid Labs. And I'm Wes Kasova. We'll be back tomorrow with another Big Take.