Welcome to Tech Stuff, a production from iHeartRadio. Hey there, and welcome to Tech Stuff. I'm your host, Jonathan Strickland. I'm an executive producer with iHeart Podcasts, and how the tech are you? So I've got a whole bunch of shorter episodes that focus on individual big stories that happened in twenty twenty three. I used to do these massive Tech Stuff episodes that spanned maybe two hours of content to talk about the big tech stories that happened
throughout the year. This year, instead, I'm doing individual, shorter episodes where I can go into a little more detail on those stories and still cover the big stuff that happened. So first up, early this year, I said that twenty twenty three was going to belong to AI in general and generative AI in particular, and I think it's pretty safe to say that while that was a solid prediction, it was
also an obvious one. I'm not going to pat myself on the back for that, but let's be clear, artificial intelligence as a discipline stretches back decades, right? Mid twentieth century, and even earlier if you're talking about things like theory, and generative AI has been around for quite some time too. OpenAI, which originally started off as a nonprofit organization dedicated to advancing artificial intelligence in an ethical and safe way, really shook things up in twenty twenty two.
That's when they released ChatGPT, a chatbot that's built on top of the company's large language model, or LLM, so the chatbot draws from the LLM, you can think of it that way. But this sparked essentially a race for second place, and companies like Google, Meta, Amazon, even Apple, and more started to explore ways to develop and integrate generative AI tools. In some cases, that race ended up producing a lot of questionable decisions. Right? In an effort to try and catch up, companies
were cutting corners and launching products definitely before they were ready. Meanwhile, we've seen critics express concerns about generative AI in particular, and of course AI as a broader concept. You've got creators such as artists and writers who worry that AI companies are using works made by humans to train up AI models, all without first securing permission from the original
creators to do so. Ultimately, these generative AI tools can mimic real, specific human creators. So you could tell a generative AI chatbot to write a story about a washed-up author in New England who faces some supernatural threat, with the added component of "in the style of Stephen King," for example. But for a chatbot to be able to do that, it would first need to train on the works of Stephen King in order to grasp the elements of Stephen King's style and delivery well enough to mimic them.
The creatives say that these AI companies are exploiting works of human effort, and potentially they're making it much harder for genuine creatives to make a living off their art in the long run. That is not a small thing. And a lot of those works aren't necessarily available for public consumption. They're locked behind paywalls of some sort. So how did the AI model get it? Those are
the sort of questions that these creators are asking. Generative AI, artificial intelligence in general, and automation have all played into a lot of discussions about companies replacing human staff with algorithms and chatbots and language models. And this isn't a hypothetical thing. It's not like, oh, in the future, we're going to start seeing AI displace human workers, which has been a fear for a very long time, right?
But over the summer, IBM's CEO Arvind Krishna indicated that in addition to putting a hiring freeze in place across the company, he was looking at the long-term prospect of replacing thousands of jobs at IBM with AI, and he indicated that the first jobs that would be likely to go to the robots would be stuff like human resources positions. But throughout the year, we've seen two interconnected narratives play
out throughout multiple industries. First, due to the global economic situation, a lot of companies are scaling back significantly and they are laying off workers left, right, and center. We've seen it a lot in the tech space, but it's not the only industry to have this happen. Secondly, with the rise of generative AI, we've seen some company leaders experiment
with offloading tasks to AI-powered tools. Now, in ideal situations, the AI is meant to augment the work of human staffers, not replace them, to make them more effective and more efficient, perhaps leading to things like a four-day workweek. But in at least a few cases, including in the field of writing content specifically for the Internet, we've seen a few companies assign writing gigs to AI-powered generative tools and just eliminate the human element
almost entirely. Even my old employer, HowStuffWorks dot com, did that, and when the editorial staff raised concerns about the move, they were let go. Yikes. But there were other concerns as well. This year we heard a lot about an issue called hallucinations. Now that term is a bit wibbly wobbly, as the Time Lord would say, which has led to a potential alternative label called confabulations, but either way, the
output is the same. Sometimes generative AI hits a gap in its knowledge, but it is still compelled to give an answer to a request, and like a stereotypical dad in an American sitcom, it appears to be incapable of or unwilling to say, you know what, I don't know that, and instead it just offers up information that sounds reliable but is in fact totally made up. To understand why this happens, it helps to have a very basic, high-level concept of how chatbots form sentences in the first place.
So deep down, a chatbot follows a statistical model to generate responses to queries. Based on the query, the chatbot evaluates numerous potential responses, and generally speaking, it picks the most likely word to come next in a sentence. If the model has access to real world knowledge to fill out its response, it'll favor the real world knowledge and
include that in the answer. But if it doesn't, well, it might just fill in the blanks with words that it deems are most likely, from a statistical point of view, to follow the previous words. The problem is that while the words might be correct from a statistical standpoint, they may not actually reflect the truth.
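To make that idea concrete, here's a deliberately tiny sketch in Python of what "pick the statistically most likely next word" means. This is a toy with made-up probabilities rather than how ChatGPT or Google's tools are actually built (real models use neural networks over tokens and vast amounts of context), but it shows how a system can chain together words that are statistically plausible without ever checking whether the result is true.

```python
# A deliberately tiny, illustrative sketch of "pick the statistically likely
# next word." Real chatbots use neural networks over tokens, not a lookup
# table; these probabilities are invented purely for illustration.

# Toy "model": for each word, candidate next words and their probabilities.
next_word_probs = {
    "the":      {"mushroom": 0.4, "toxin": 0.3, "answer": 0.3},
    "mushroom": {"is": 0.6, "contains": 0.4},
    "is":       {"safe": 0.5, "deadly": 0.5},   # statistics, not facts
    "contains": {"toxins": 0.7, "water": 0.3},
}

def generate(start: str, max_words: int = 5) -> str:
    """Greedy generation: always append the most probable next word."""
    words = [start]
    while len(words) < max_words:
        candidates = next_word_probs.get(words[-1])
        if not candidates:
            break  # nothing left to say; real models rarely stop to admit this
        words.append(max(candidates, key=candidates.get))
    return " ".join(words)

print(generate("the"))  # prints: the mushroom is safe
```

Run it and it cheerfully prints "the mushroom is safe," a perfectly fluent sentence that nothing in the model ever verified against reality. That, in miniature, is what a hallucination or confabulation looks like.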
So, for example, according to Gizmodo, Google's own AI tool gave precise instructions on how to cook a poisonous mushroom with the scientific name Amanita ocreata, also known as the angel of death or the destroying angel. In other words, this is a seriously toxic mushroom. The query was asking Google to come up with a way to cook the mushroom safely, presumably meaning safe enough so that you could eat it afterward.
And Google's instructions included soaking the mushrooms in water in an effort to leach out the toxins. And it did say, like, you need to be super careful and it might take a long time to do this. But here's the problem. According to Gizmodo, the toxins in Amanita ocreata are not water soluble. It wouldn't work to soak the mushrooms. The poison wouldn't leach out in the first place, so the user would be left with mushrooms that were just as deadly as they were before you put them in
the water. The AI tool should have recognized this and simply responded with something along the lines of: this mushroom contains deadly toxins with no surefire method of removing them, and it should never be consumed. In short, it shouldn't have come up with a "maybe this would work," because people could die if they actually followed those directions. Now, some of the stories about AI were more about not the technology itself, but rather our approach to it and our point of
view of it. For example, Computer Weekly's Cliff Saran published a piece titled "Few organizations have a clear strategy for AI," and Saran cites a study by a company called Mesh-AI Limited that said only fifteen percent of organizations have a clear strategy for integrating AI into their organization. Meanwhile, it seems like every organization is actually exploring AI to some degree. Obviously, not every organization is going to fully implement some half-baked AI scheme, but some of them
certainly seem to be trying. And if there's one lesson we can take away from tech in general, it's that it's rarely a good idea to put a new, poorly understood technology to use. Still, there's a sense that if a company doesn't move in on AI soon, it's going to be left behind by its competitors. There's market pressure in place here that's at odds with the lack of a clear strategy. And the rise in interest in AI also fuels other parts of the tech industry. Specifically, microchip manufacturers
are rushing to meet demand. They're producing high-performance processors that are best suited for certain AI implementations. Nvidia is the main example here. Most people know Nvidia as a graphics chip manufacturer that largely caters to gamers, but Nvidia has really embraced making chips specifically designed to operate in AI implementations. And companies that provide a lot of cloud computing functions are really
getting into the act too. They're stepping in because it requires so much computing power to run advanced AI operations, and so we're seeing a big spike in demand for those types of tech solutions. I've got more to say about AI in general and generative AI in particular in twenty twenty three, but before we can get to that, we're going to take a quick break to thank our sponsors. And we're back. So next up, I'd like to talk about the various stories around AI and
perceived emergent capabilities. So that would mean cases where AI seems to be able to do more than what it was designed to do, like the idea that AI is somehow learning or teaching itself things that should be beyond its capabilities. We've heard some folks express concern that AI is maybe smarter than we think it is and that this is going to lead to catastrophe. But we've also seen studies that say these concerns are based on faulty premises.
That early studies used a set of metrics that gave us inaccurate pictures of what AI is and isn't able to do. That because the metrics were designed a specific way, it was almost like cherry-picking your evidence. It was finding things that seemed to support a particular hypothesis and ignoring things that refuted that hypothesis. And that when you adjusted those metrics and you did the study again, those emergent behaviors turned out to be nothing of the sort.
It sounds like, at least for the moment, we're not headed toward some sort of Skynet situation. However, as we close out twenty twenty three, right now there are news stories about self-recursive AI models, that is, tools that can make changes to, and in theory improvements to, themselves over time. The science fiction standard of an AI that improves itself in cycles that come faster and faster, with less and less time needed to complete
them is one that comes to mind. Right? If you've got an AI that's able to improve itself, and presumably do so at a level that's at least comparable to what humans can do, if not better, then you could get into this situation where it's making these changes and improvements in cycles that are happening faster and faster, and you have a runaway train on your hands. These are the scenarios that come to mind when people cite things like the tech singularity or perhaps even a potential
existential crisis for humanity. I think it's still largely science fiction. I don't think it's something that we need to necessarily concern ourselves about in real time. But it is one of those things that reinforces this fear, uncertainty, and doubt, or FUD, about artificial intelligence. Now for a few specific stories that happened throughout the year. Judge Beryl Howell ruled over the summer that AI-created works are not eligible
for copyright. Judge Howell determined that only works from human authors can be copyrighted, which is huge, right? Because if you're using AI to generate all your content but you can't copyright that content, you might not be in as strong a situation as you think you are. One, the content may not be very good, and two, you have no
ownership over it. Right? You have no way to protect yourself if someone just lifts your content and uses it somewhere else, because you cannot copyright it due to the fact that it was not a work of human authorship. Over in California, a group of artists brought a lawsuit against the companies Midjourney, Stability AI, and DeviantArt.
They were making the case that these companies misused the artists' own copyrighted works while they were training up their own AI models, and the judge in that case dismissed the claims against both Midjourney and DeviantArt because they were using tools made by the third defendant in the case, Stability AI. The judge did indicate that the plaintiffs could file an amended complaint and include the other two companies if they amended the complaint so that it was relevant.
And out of the three artists who were part of the lawsuit originally, only one had her claims really make it out of all the dismissals, because it turns out the other two had not copyrighted their works. The copyright infringement claim only worked for the artist who did take the time to copyright her works. It remains unclear how the court will rule on whether training a generative AI model on an artist's work without their permission amounts to copyright infringement,
but we'll have to keep our eyes on that in the following year. And OpenAI was in the news an awful lot this year. The company unveiled GPT-4, which is the latest version of their large language model. They started taking on enterprise clients, companies that want to tap into the power of that language model to do various things. ChatGPT got access to current events. That was a big deal. When it first launched, ChatGPT could not access any information that came after September twenty
twenty one. That was as far forward as its information went. However, now it has access to the Internet, so it can pull from current events. Another big ongoing story involved OpenAI's CEO, who was Sam Altman for all but a couple of days this year. More on that in
just a second. Altman met with various leaders and regulators around the world, and the purpose of those meetings was to discuss potential regulations for AI, because obviously a lot of legislators have concerns about artificial intelligence. So how can we allow for the continuation of development, so that, say, the United States doesn't fall behind other countries, while also
preventing potential disaster. Now, clearly Altman has a vested interest in the outcome of these discussions, and in fact, some critics worried that Altman's suggestions were really calculated to just make it harder for smaller AI startups to catch up to OpenAI, and thus give the leader in the field even more advantages. And Altman, according to these critics, wasn't trying to make AI safer, but was trying
to slow down the competition. This brings us to the massive story of Sam Altman being unceremoniously fired by the board of directors, only to be welcomed back to the company literally days later. It's been quite the roller coaster ride. So Altman had appeared at OpenAI's very first developers conference. He had made several high-profile announcements about the direction and future of OpenAI's products, like its large language
models and its chatbot. And then not long after he finished up at this developers conference, he gets a message that he has to attend a Zoom call with the board of directors, and that's when he finds out: bam, he's been fired. Now, that last story has a lot to do with the gap between the original vision of what OpenAI was supposed to be and what it
actually has become now. As I said at the beginning of this episode, OpenAI started off as a nonprofit organization dedicated to developing useful, ethical, and safe artificial intelligence. But AI is really expensive, and in an effort to fund the operation and not just constantly be begging for funding from various parties, Sam Altman created a for-profit arm of OpenAI, and since then the company has made some very aggressive moves in the artificial intelligence space,
sometimes with Altman issuing statements that made it seem like even he thought it might be a bit much and a bit too aggressive. And yet the aggressive moves kept on coming, and it reached a point where the board of directors, people who were originally part of the nonprofit version of OpenAI, were concerned enough to
relieve Altman of his job. But the backlash following that move prompted a near-total shake-up of the board, and Altman is back in the driver's seat. Because whether the concerns were relevant or not, you had parties like Microsoft, which has dedicated ten billion dollars in investments to OpenAI over the near future, and firing the CEO without consulting Microsoft first really upset the apple cart.
So we see that commerce can overpower concern, right? I think it's safe to say that every year from here on out is going to be AI's year, for good and for bad, and twenty twenty three certainly qualified. AI was part of stories that were even outside the world of technology. It played a part in Hollywood negotiations, as both the WGA and SAG-AFTRA went on strike. Both unions expressed concerns about AI's role in entertainment moving forward. So I expect we'll see lots more stories in that vein
as we move forward. But that's an overview of AI and generative AI in twenty twenty three. We'll be back with more short episodes about big tech news stories throughout the year over the next few days. I hope you're all well, and I'll talk to you again really soon. Tech Stuff is an iHeartRadio production. For more podcasts from iHeartRadio, visit the iHeartRadio app, Apple Podcasts, or wherever you listen to your favorite shows.