
On Creepy Fingers, Deep Fakes, and Fair Use with Getty Images CEO Craig Peters

Jul 01, 2024 · 52 min

Episode description

Hands with six fingers, mouths with dozens of teeth, hairlines and limbs out of whack: We’ve all seen eye-roll worthy generative AI images. But despite the prevalence of these easy to spot fakes, photography and video media companies like Getty Images are already feeling the impact of AI and trying to integrate the technology without compromising their core business. Kara speaks with Getty CEO Craig Peters about why he can promise users of the Getty AI Generator “uncapped indemnity”, whether he thinks licensing agreements with OpenAI and similar AI companies are “Faustian” deals with the devil, and how better standards to protect visual truth and authenticity could help the industry remain financially viable in the long run. Plus: how worried should we be about deep fakes impacting the 2024 election? Questions? Comments? Email us at [email protected] or find Kara on Instagram/Threads as @karaswisher Learn more about your ad choices. Visit podcastchoices.com/adchoices

Transcript

Support for the show comes from Ferragamo. A century ago, a shoemaker in southern Italy had a vision to marry abstract art with high-quality craftsmanship, and that led them to create beautiful, timeless footwear. Today, the name Ferragamo is synonymous with luxurious shoes and accessories, and for more than a decade, Ferragamo footwear has been made with an ethical

supply chain and sustainable footprint. Their classic ballerinas or contemporary moccasins are a perfect blend of fashion and comfort, and yes, Kara Swisher has a pair of both, and I bought them. Discover Ferragamo's timeless styles with a modern touch at Ferragamo.com. Have a question or need how-to advice? Just ask Meta AI. Whether you need to summarize your class notes or want to create a recipe with the ingredients you already have in your

fridge, Meta AI has the answers. You can also research topics, explore interests, and so much more. It's the most advanced AI at your fingertips. Expand your world with Meta AI, now on Instagram, WhatsApp, Facebook, and Messenger. Hi everyone, from New York Magazine and the Vox Media Podcast Network, this is On with Kara Swisher, and I'm Kara Swisher. My guest today is Craig Peters, CEO of Getty Images. You've seen Getty's work, even if you don't know it. Getty photographers are on the red

carpets at the Met Gala or at the White House or at the Olympics for sure, pictures that are then used by media companies across the globe. Getty also has a huge inventory of stock photos and video. It also has one of the largest archives of historical images in the world. You'd think that would be enough, but it's not these days. Craig Peters has been heading

Getty since 2019 and took the company public in 2022, the same month, as bad luck would have it, that OpenAI launched DALL-E in beta, which means questions about how to protect copyrighted images got a whole lot more urgent, both for creators and for the public. Questions that have not gone away. Fittingly, our expert question this week comes from former Time, Life, and National Geographic photojournalist Rick Smolan.

Craig has been an outspoken voice on improving standards and protecting creators, but his stance on AI has also shifted in the past few years. So I'm looking forward to talking about how he sees the business model for working collaboratively with this new technology in the future. And at the end of the day, this is about transparency and trust and authenticity. My biggest worry, of course, is if this is a replay of what the internet did to media

the first go-round. We'll talk about that and more, including why they can't get fingers right. Those fingers are creepy. Hi, Craig. Great to see you again. Yeah, great to see you as well. Thanks for having me. Craig and I had breakfast in New York, I don't know, a couple months ago, talking about these issues. I'm trying to get my arms around AI and all its aspects and have interviewed a wide range of people. And obviously, photos are, of course, at the center of this. In many

ways, even though the focus is often on text, photos and video are critical. So you've been at the forefront of conversations around generative AI copyright issues, but Getty has also been collaborating on and utilizing AI, like most media. I want to talk about all that and where you see this heading for creatives and for the public. But first, I think most people know or have seen the name Getty Images in a caption, but they don't have a good

understanding of the business as a whole. So why don't you give us a very quick talk about what it does? All right. Well, first let me start. We were founded by Mark Getty of the Getty family and Jonathan Klein back in 1995, so the company's been around for almost 30 years. We are a business-to-business content platform. We provide editorial content, so coverage of news, sports, entertainment,

as well as a really deep archive and creative content. So this would be materials that would be used to promote your brand, your company, your product offerings or your services. We provide that in stills, so photography as well as video as you highlighted. We provide that in what we call pre-shot, which means it's available for download immediately. You can search our platform, find content and then download it and use it. In some cases,

we provide custom creation. So we might work on an assignment to cover a certain event. Just for full disclosure, you've done Vox Media events. Correct. And you then own the pictures, but Vox gets use of them. Correct. Correct. We represent over 600,000 individual photographers and over 300 partners. These are partners like the BBC or NBC News or Bloomberg or Disney or AFP. And what we try to do is allow our customers, really, we try to deliver value across four elements. So we, again,

we have editorial content. So when you think about covering events more efficiently, we'll have a team of over 100 individuals in Paris for the Olympics. And we will allow the entities that are covering the Olympics to have access to high-quality visuals in real time. On big events or stuff you're contracted for. Like you did my last Code Conference. Correct. And then there's also, again, what we call creative, which is, you know,

also referred to as stock imagery. It's intended to be used for commercial purposes to promote a company's brand, products, or services. Right. You've run the company since 2019. You took it public through a SPAC in July 2022, just about two years ago. Correct. Coincidentally, it was the same month OpenAI made its image-generating program, DALL-E, available in beta form. What did you think at the time? Did you think, end times,

or did it not register with you all? No, I think, I mean, first off, you know, generative AI is not something that is entirely new. Sure. It's something that we had been tracking for probably seven years prior to the launch of DALL-E. We had first had discussions with Nvidia, with whom we now have a partnership, which we could talk more about. Yeah, I will. But we knew those models were in development. We knew the capabilities

were progressing. So it didn't surprise us in any way, shape, or form. And fundamentally, you know, our business is, again, providing a level of engagement, whether that's through a media outlet or through your corporate website or your sales and marketing collateral, in a way that relies on authenticity, relies on creativity, relies on core concepts being conveyed. Those are difficult to do. Yeah. So here's this thing coming.

You see it. Obviously, it's been around and they've been generating fake images for a long, long time. Absolutely. But did you think, well, look, there's six fingers, it's kind of weird looking, you know, they always have that sort of, like, Will Smith video of him eating spaghetti. It doesn't look great. Did you think this could be a real problem? No, I don't think we saw it necessarily as a problem. I think we saw some of the

behaviors around it as a problem. So we knew that, you know, hairlines and fingers and eyes and things like that were going to resolve over time. I think what we saw, though, was ultimately that these platforms that were launching these generative models were scraping the internet. Correct. And taking third-party intellectual property in order to create these services. And fundamentally, we believe that creates some real problems. Yeah. That's tough.

That's tough. They've done it for many years now. I would say that there's kind of four things that go into generative AI: GPUs, processing power, and computer scientists, and on those three things, these companies are spending billions and billions of dollars, right? But the fourth item is something that they call data. I happen to call it content. Right. And it is something that they are taking and they are scraping from across

the internet. And that fundamentally, to me, raised some questions about not only should they be doing that, is that allowed under the law, but it also creates real IP risk for end users of these platforms if they're using them commercially. Yes, but they're hoping to get away with it, just so you're aware. So you pushed back pretty hard against AI initially. You banned the upload and sale of AI-generated images on your database, but it seems you've done more than that; you seem to be embracing

it more. Talk about how your thinking has evolved, because at its base, they like to take things. They've been doing it for a long time. I had a lot of scenes in my book where Google was just taking things and then reformulating them, and the fights that they had over that. They used the expression fair use quite a bit. Did your thoughts change on it? Did you think, well, this is going to happen again and I have to embrace it? I think we really took

a two-pronged strategy. And I don't think it's changed or evolved; I think it's been pretty constant. We believe technology can be beneficial to society. We believe it needs to have some guardrails and be applied appropriately, but we wanted to embrace this technology

to enhance what our customers could do. But we also wanted to make sure that we protected the rights of our 600,000 photographers and 300 partners and our own, and try to get this to a place where it was more broadly beneficial, not just to the model providers, but to the industries as a whole. And so we pursued a strategy that's really twofold. So one was we did not allow

content into our database that was generative AI. We did that for two reasons. One was that our customers are really looking for authentic imagery, high-quality imagery, imagery that can really relate to an end audience. And we think that that is still best done through professionals using real models, et cetera. You know, accepting AI imagery that's been created from engines that are scraping the internet has potential risk. It has potential risk from copyright claims.

Correct. It also has potential risk from people that have had their image scraped off of the internet generally. And it can replicate those types of things. So we didn't want to bring that content onto our platform and then ultimately give it to our customers and have them bear that risk. Because one of the things that we try to do for our customers is remove that risk. Right. Now, I'll get to your uncapped indemnification. But start with the partnership with Nvidia, and more

recently your partnership with Picsart, the digital creation platform. Let's start with the Getty AI tool. Tell us how it works. And just to be clear, you can't produce deep fakes of Scarlett Johansson or President Biden that would inadvertently use copyrighted images or likenesses. No. So it's trained only on our creative content. Again, that's creative content, not editorial content. So to your point, we haven't trained it to know who Taylor Swift is or Scarlett

Johansson is or President Biden. That creative content is permissioned. It's released. So we have model releases from the people portrayed within that imagery. We have property releases, et cetera. So it is trained off of a universe of content where we have clear rights to it. There is a share of revenues back to the content providers. So as we sell that service, we actually give a revenue share back to the content providers whose content was used to train it. As you said, it can't produce

deep fakes. It cannot produce third-party intellectual property. So if you search sneakers or type a generative prompt of sneakers, it will produce sneakers, but it's not going to produce Nikes. If you type in laptop, it's not going to give you an Apple iMac. It is high quality, so you can produce outputs that are of very high quality. You don't have to scrape the entirety of the internet. This proves that. And it is fully indemnified because

again, we know the source of what it was trained on. We know the outputs that it can produce, and they're controlled. And it ultimately gives a tool to corporations that they can use within their business day to day without taking on intellectual property risk. So the idea is to make a better version of this, but there are so many AI image-generating products out there already on the market: DALL-E, Midjourney; OpenAI is currently testing Sora. As you said, these people have

billions of dollars put to task here. So talk about your unique selling point. Obviously a cleaner database, less biased than internet-scraped versions. And you also say these images are backed by your uncapped indemnification. So explain what you mean by that. But let's start with: you've got a lot of competitors with tons of money who have a little less care than you do

about the imagery. You know, this model is intended to be utilized by businesses, businesses that understand that intellectual property can carry real risks as they use it in their business. Those risks can come with large penalties. And what we are doing is providing them with a solution that allows them to embrace generative AI but avoid those risks and still produce a high-quality output. Other tools that are out in the marketplace do bear risk because they have,

you know, scraped the internet. You can produce third-party intellectual property with them. You can produce deep fakes with them. And that, ultimately, we believe is a constraint on the broad-based commercial adoption of generative AI. This has been their modus operandi for years and years, to do this, to sort of scrape and then later clean themselves up. Well, I think,

you know, it's certainly, you know, move fast and break things. And ultimately, we think that this technology should not just be thrust into the world with the consequences dealt with later. We believe there should be some thought. We believe there should be some level of regulation. We believe there should be clarity on whether it is in fact fair use. And so one of the things that we did, one of the companies that I'm not sure if you

mentioned is Stability AI, which launched Stable Diffusion. I will. We knew that they had scraped our content from across the internet and used it to train their model. And we didn't believe that that was covered by fair use in the US or fair dealing in the UK. Right. And so we've litigated on that point and it's progressing through the courts. I want to talk about that, I just want to put a pin in that. But talk about what uncapped indemnification means.

It means that Getty Images stands behind the content that the generative model produces, and that you're safe to use it. And if there are any issues that occur downstream, which there shouldn't be, we will still stand behind it. We will protect our clients. So AI gets stuff wrong all the time. Aren't you worried about promising indemnification? I guess that's what you're promising now. But are you worried about the cost if you do have copyright issues, for instance, using this technology?

No, because again, we know what it was trained on. We know how it was trained. And we trained it in a way that it couldn't produce those. I think, again, when you just scrape the internet and you take in imagery of children, when you take in imagery of famous people or brands or third-party intellectual property like Disney, Star Wars, or the Marvel franchise, you can produce infringing outputs. You can produce things that are going to run afoul of current laws and potential

future laws. This was built purposefully from the ground up to avoid the social issues that are being questioned as a result of this technology, but also to avoid the commercial issues. Right. Which is, these companies are going to have the same thing they did with YouTube. They eventually settled it out. So I want to get to the lawsuits in a second. But the Picsart partnership is different. I mean, it's a licensing deal. They are using your image database to train

their AI. And there are these licensing deals. You're not the only one; the AP has a licensing deal with OpenAI. There are all kinds of people who are striking deals, like OpenAI and Meta did with one of your competitors, Shutterstock. Let's talk about those first. Well, I think, you know, ultimately we believe that there are solutions, that these services can be built in ways that appropriately compensate for the works that they're utilizing

as a service. And we think, you know, the deals that we've entered into that you mentioned, so Nvidia and Picsart, those are not one-time payments to us, which is what most of these deals are, or a recurring payment. This is actually a rev share deal. So as those models produce

revenues, they will generate revenues that can flow back to the content creators. And, you know, I think some of the deals that are getting done today are trying to, in essence, whitewash a little bit the scraping that has historically been done and show that they are willing to work with providers. In some cases, it's providing access to more current information that's difficult to get in real time from scraping. Sure. But ultimately, it's a small drop in the overall bucket.
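To make the rev-share mechanics concrete, here is a minimal sketch, assuming a simple proportional split: a fixed royalty pool is divided according to how much of each contributor's licensed content trained the model. The names, counts, rate, and function are illustrative assumptions, not Getty's actual formula (Peters cites a roughly 28% average royalty later in the conversation).

```python
# Hypothetical sketch of a recurring rev-share payout: as a generative
# service earns revenue, a royalty pool is split among contributors in
# proportion to how much of their licensed content trained the model.
# All names, counts, and the rate are illustrative, not Getty's formula.

ROYALTY_RATE = 0.28  # assumed pool share; Peters cites ~28% later in the episode


def rev_share(period_revenue: float, training_counts: dict[str, int]) -> dict[str, float]:
    """Split the royalty pool proportionally to each contributor's training content."""
    pool = period_revenue * ROYALTY_RATE
    total_items = sum(training_counts.values())
    return {
        contributor: round(pool * count / total_items, 2)
        for contributor, count in training_counts.items()
    }


# Example: $100,000 of generative-service revenue in one period.
print(rev_share(100_000, {"photographer_a": 6_000, "agency_b": 4_000}))
# -> {'photographer_a': 16800.0, 'agency_b': 11200.0}
```

The design point worth noting is that the payout recurs with revenue, which is what distinguishes this structure from the one-time payments Peters criticizes as whitewashing.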

So talk about the lawsuit against Stability AI. Where does it stand? And just again, you're claiming the company unlawfully copied and processed millions of Getty copyright-protected images, which are very easy to track. I mean, more than text, much easier. So tell us where that stands now, because again, that's an expensive prospect for a small company. Well, it's something that we invest in because we think it's important. We think we need to get a settled

view on whether this is in fact fair use or not. We don't buy into what some of the providers are stating as settled fact, that it is fair use. We believe that is up for debate. And we believe, within the realm of imagery, there's a strong case

to be made that it is not fair use. And we think there are some precedents out there, most notably in the US Warhol versus Goldsmith, which was decided at the Supreme Court and which, you know, highlights some case law that would say it is questionable whether this would qualify for fair use. So we launched that litigation. It is moving forward a little bit more at pace in the UK, where it is proceeding to trial. I expect it will likely be at trial at the end of this year,

or early next. In the US, it's moving a little more slowly, as things take time to move through the court system, and I don't have a magic wand to speed that up. Tell us about the calculation of suing versus settling, essentially, and the cost. Because here you are, someone grabs something of yours, you get hit on the head, you have to take them to court for hitting you on the head, but still you've been hit on the head, correct? Well, I think,

again, we're in a world where everyone's taking our imagery anyway, right? Our imagery is all over the internet, right? As you mentioned, it's on The Verge, it's on all of the media platforms that you are visiting. The one mistake I made, I once appropriated imagery and I paid a lot of money. It was a lot of money. And I never did it again, I'll tell you that. So our imagery

is all over the internet. And it's being scraped and it's being taken, and it's being used to train these models. We want to get, you know, clarity on whether, in fact, that is something that they have the right to do. I believe it's good for the industry overall. And when I say the industry overall, I'm not just talking technology, I'm not just talking content owners, but to have clear rules of the road. Is this, or is this not, fair use? And then you know what you need to do

in order to work within that settled law. And that's what we're trying to get to. But let's get back to the issue of content creation that you also brought up. You aren't the only one to worry about losing money. Every week we get a question from an outside expert. This week our question comes from photojournalist Rick Smolan, the creator of the best-selling Day in the Life book series. If you've seen the movie Tracks, he's the National Geographic photographer portrayed

by Adam Driver. Let's have a listen to his question. Like many of my peers, my photographs are sold around the globe by Getty Images. And we've always considered our relationship with Getty as, you know, symbiotic. You know, we photographers risk life and limb to provide Getty with our

images, and in return Getty markets our photographs to buyers. But with the advent of AI, it's sort of starting to feel a little bit like a marriage where one partner in the relationship is caught having an affair with a much younger, more attractive person while assuring their spouse this is nothing more than a dalliance. You know, we assume that our images are being sampled

to generate the synthetic images that Getty is now selling. And many of us are very concerned that our work has become a commodity, like coal being shoveled into Getty's LLM engine. And also, sort of like that jilted older spouse, many of us believe it's only a matter of time before we're cast aside for this much younger and much sexier young thing called AI. So my question to Craig Peters is whether this marriage is over, but you just haven't admitted it

to us yet, and you haven't told us. Or, if it's not over, how does Getty Images envision its relationship with actual photographers versus prompt jockeys? And how is Getty going to compensate us, your long-term partners, fairly for our images in the future? Thanks. Okay, that's a really good question, because I hear it a lot from photographers. They don't know who to trust. So how are you compensating the creators whose work is being fed into this tool?

Talk about how it works, and can people opt out if they don't want AI trained on their work? Right. Well, first off, let me start with: we don't train off of any of our editorial content. And the coverage that he is talking about, that he does and that we are fortunate enough to represent, would be considered editorial coverage. We believe that the job that he does is incredibly important to the world. The images that they produce, the events that they cover,

the topics that they cover, incredibly important. Right behind my desk, there is an image that was taken in Afghanistan, and it was taken by one of our staff photographers, a guy named Chris Hondros, who happened to lose his life in the pursuit of covering conflict around the world. And that's incredibly important. And I have it there because it reminds me that we have a very important mission and that the individuals that are producing these images are

incredibly important and they are taking very real risk. And we value that and we never want to see anything that we do undermine that or misrepresent it. And we think it's important to the world going forward and we think it's very important that that persists. So we do not train off of editorial content. This model was not trained off of editorial content. And we believe that that type of work has a level of importance today and we'll have a level of importance going forward in the future.

And we think that importance only increases as we see more and more deep fake content produced by these generative models that are trying to undermine true knowledge and truth. So the compensation system stays the same? So the compensation. Again, we have creative imagery; you could also refer to it as stock imagery. Right. Where we have permissions from the individuals that are contained within those images. We have the right to represent that

copyright broadly, and that is what is training these models. When these models produce a dollar's worth of revenue, we give a share of that revenue back to the creators whose content was used to train that model. Okay. That is the same share that they would get if they licensed an image, one of their images, off of our platform. So on average, we pay out royalties around 28% of our revenue. We invest a lot in our platform and sales and marketing and content creation, et cetera. But on average,

we're sending 28 cents of every dollar back on the imagery that we license. And you can expect the exact same thing will happen when we license a package or someone subscribes to our generative service. So who owns the images created by these AI tools? I know it sounds in the weeds. That's a very good question. It's, again, unsettled law, because we're in new territory. So in the US today, you cannot copyright a generative output. I think over time that might evolve to where it depends

on the level of human input and involvement in that process. But right now, it is not something you can copyright. The stance we take for our customers that are using our generative tool is that since they gave us the generative prompt, they are part of the creation. And therefore, we don't put that imagery back into our libraries to resell. It is essentially a piece of content that is, quote unquote, owned by the customer that produced it using the generative

model. The question of copyright is not one that we can convey; that has to be decided by the US Copyright Office. But we essentially take the position that the individuals or the businesses that are using that service own the output of that service. And it would be a similar thing elsewhere. But right now, whether they can be copyrighted is unclear, and even if so, it is the prompt owner who is

the copyright holder. Correct. And I think, again, that will evolve over time to where the level of human input will have a bearing on whether it is in fact copyrightable. It will also lead to the question of how much you get paid along the line. So you've pushed back on the idea that these licensing agreements are Faustian, meaning basically you're making a deal with the devil. Shutterstock's deal with OpenAI is for six years, while red teamers at Open

AI are testing Sora. Are you worried that you're all doing what we in the media did with, say, the Facebooks of the world, the Googles of the world, who have now sucked up all the advertising money and everything else? What is your biggest worry when you're both doing these deals and suing, just waiting for clarity, I assume, from courts and copyright officials?

I believe that many of the deals that are happening today are very Faustian and go back to, ultimately, some of the mistakes that were made early on in digital media and/or social media, making trades for things that weren't in the long-term health of the industry.

What we are trying to do very clearly in the partnerships that we are striking, in the deals that we are striking, in our approach, is to build a structure where the creators are rewarded for the work that was included, and fairly so. I gave you my four components of generative AI: GPUs, processing, computer scientists, and content. And that fourth right now is getting an extremely small stake. Some companies are doing

no deals at all, like the Midjourneys and such, and they are providing no benefit back. We want to see a world where content owners and content creators share in, ultimately, the economics of these models. And I think if you equate it again back to a Spotify, I mean, the labels and Spotify can argue over whether their stake of that is fair, and that's a commercial negotiation, but ultimately it has created a meaningful revenue stream back to artists, one that supports artists in their creation.

And that's what we would like to see flow out of generative AI. That's the world that we believe is right. You know, ultimately the creative industries represent more than 10% of GDP in the US and the UK and many developed economies around the globe. And we think, you know, the economic impacts of undermining that contribution to GDP can't be net beneficial to society. No, from an economic standpoint, if you ultimately just allow them to be sucked dry. Right. I use the example,

you know, everybody's watched the movie The Matrix. And you know, ultimately, what were humans in The Matrix? The humans in The Matrix were the batteries. Yeah, they were used to create the power to fuel the AI. Well, right now, I feel like the creatives are already being consumed as batteries. The media is already being a battery. We are already, you know, not being valued; we're basically just being used, in that one of the core pillars that is necessary to create

generative AI is just being stripped and taken. And I don't want to see a world where there aren't more creatives. I don't want to see a world where creatives aren't compensated for the work that they do. And I don't want to see a world that doesn't have creativity in it. We'll be back in a minute. Support for On with Kara Swisher comes from NetSuite. Your business cannot succeed if

costs spiral out of control, but there's so much more required if you want to grow. Tracking data, handling HR responsibilities, and mastering accounting are just a few tasks on every entrepreneur's to-do list. But what if you had a platform that could deal with all of them at once? NetSuite is a top-rated financial system bringing accounting, financial management, inventory, and HR into one platform and one source of truth. With NetSuite, you can reduce IT costs because NetSuite lives in the cloud

with no hardware required, accessible from anywhere. You can cut costs of maintaining multiple systems because you've got one unified business management suite. Plus NetSuite can help you improve efficiency by bringing all your major business processes into one platform, slashing manual tasks and errors. Over 37,000 companies have already made the move to NetSuite. By popular demand, NetSuite has extended its one-of-a-kind flexible financing program for a few more

weeks. Head to NetSuite.com slash on. NetSuite.com slash on. That's NetSuite.com slash on. Support for the show comes from DeleteMe. Do you ever wonder how much of your personal information is out there on the internet? A lot. Chances are a lot more than you think, too. Your name, contact info, social security number, they can all be compiled by data brokers and sold online. It's happening right now. It might be why you keep getting those insane spam calls

about your student loans, even if you paid them off 20 years ago, or worse. You might want to try DeleteMe to fix that. DeleteMe is a subscription service that removes your personal information from the largest people search databases on the web, and in the process helps prevent potential ID theft, doxxing, and phishing scams. I have really enjoyed DeleteMe, largely because I'm a pretty good person about tech things and I have a pretty good sense of privacy, pushing all the correct

buttons. But I was very surprised by how much data was out there about me. Take control of your data and keep your private life private by signing up for DeleteMe, now at a special discount for our listeners. Today, get 20% off your DeleteMe plan when you go to joindeleteme.com slash Kara and use the promo code Kara at checkout. The only way to get 20% off is to go to joindeleteme.com slash Kara and enter the code Kara at checkout. That's joindeleteme.com slash Kara, code Kara.

This is advertiser content from PBS. Jason Scott practically grew up at the Pike Place Fish Market in Seattle. His mom was a fish monger and now he's one too. He sees first hand the toll that irresponsible fishing can take on our oceans. I grew up here. I'm only 51. I know stuff that we can't even get locally anymore because it's been overfished. That's not fair. A common misconception is that wild caught is good and farmed is bad. But Jason says it's not that

simple. I think as a consumer, you shouldn't accept it if the person selling you, you know, your fish or meat or whatever it is doesn't know where it comes from or how it's caught or farmed. And you know, we're moving a lot of fish. We need ways where everybody can eat it and we're not depleting the oceans. Whether the fish you eat is sustainably farmed or wild caught, knowing the

difference is the first step. In PBS's new three-part docuseries Hope in the Water, scientists, celebrities, and regular citizens come together to explore unique solutions to the challenges facing our oceans and how we preserve the food they provide. You know, for you to keep enjoying the riches of the ocean, you're going to have to also be a friend of the ocean.

Join chef José Andrés, Martha Stewart, Shailene Woodley, and Baratunde Thurston as they deep-dive into the powerful blue food technologies that can not only feed us but help save our threatened seas and fresh waterways. Stream Hope in the Water now on the PBS app. So I want to zoom out a bit to talk more broadly about the issue of copyright and intellectual property standards. You've called for industry standards around AI.

What do you think those standards should include? Watermarking? Copyright protections? Obviously, the more confusing it is, the better it is for these tech companies, who, this is me saying this, couldn't give a fuck about creatives. They do not care. They do not care. They never have. Well, let me start. I mean, there are four pillars that we think are important with respect to visual generative AI. One of those is transparency of training data, so that you know what's being

trained and where they're training. And that's not only important to companies like Getty Images and creatives that are creating works around the globe, but that's important as a parent or someone with children: like, you know, if I post this image to this social media platform, is it being used to train outputs in these models? So that's one. Two is permission for copyrighted work: if you're going to train on copyrighted data, you need the permission rights in order to

do so. Three, we want these models to identify their outputs in a way that is persistent. Now, that's likely to involve watermarking; it might involve some other standards like hashing to the cloud. The technology is still in development, but we want that to be at the model level, so the model actually labels the output and it's persistent. And then we want to make sure that

model providers are accountable for the models they put out into the world. In essence, there's no Section 230, which kind of protected platforms back in the original Digital Millennium Copyright Act era and basically gave them, you know, indemnity from many claims for content that was posted on their platforms. We want to see a world where model providers have some level of accountability, aren't given a government exemption and indemnity. We want to see people be accountable.
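As a concrete illustration of what "persistent identification at the model level" could look like, here is a minimal sketch, assuming a provider-side registry: the model hashes each output at generation time and records it, so a verifier can later look the image up. Everything here (the registry, function names, model ID) is a hypothetical illustration of the concept Peters describes, not the C2PA specification or any real API.

```python
# Hypothetical sketch of model-level output identification: the provider
# fingerprints each output as it is generated and registers it, so anyone
# can later ask whether an image came from that model. The registry is an
# in-memory stand-in for the cloud-side store Peters alludes to.

import hashlib

PROVENANCE_REGISTRY: dict[str, dict] = {}


def register_output(image_bytes: bytes, model_id: str, prompt: str) -> str:
    """Provider-side step: hash the output at generation time and record it."""
    digest = hashlib.sha256(image_bytes).hexdigest()
    PROVENANCE_REGISTRY[digest] = {
        "model": model_id,
        "prompt": prompt,
        "label": "AI-generated",
    }
    return digest


def check_provenance(image_bytes: bytes) -> dict | None:
    """Verifier-side step: look an image up by its content hash."""
    return PROVENANCE_REGISTRY.get(hashlib.sha256(image_bytes).hexdigest())


generated = b"...generated image bytes..."
register_output(generated, model_id="example-model-v1", prompt="sneakers")
print(check_provenance(generated))  # metadata record, or None if unknown
```

A plain content hash like this breaks as soon as the image is re-encoded or cropped, which is part of why Peters wants labeling embedded and persistent at the model level rather than left to end users to apply afterward.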

Who should set up the standards, in your opinion? Politicians? Industry leaders? There are business advocacy groups like C2PA, the Coalition for Content Provenance and Authenticity, which includes Google and OpenAI and your competitors Adobe and Shutterstock. And who should be responsible for enforcing them, a combination of courts and regulators? I'll come back to C2PA, but I think it's going to be a combination of regulators, of legislators, and industry, which is

typically how all standards evolve over time. I think, like what happened with the EU AI Act, there's a general requirement, but the definition of how that requirement actually manifests itself is still yet to be put forward, and that's going to take collaboration from the technology industries and from, you know, the creative industries in order to get to something that works. C2PA, to be very specific, is a foundation, but there are still some flaws. It is relying upon the end user of

these platforms to identify the generative output, versus the model itself. That's what happened with YouTube initially, as you were talking about. Exactly. So if I'm going to use a generative model and I'm a bad actor, I can say it's authentic and label it as such. Well, that's a point of failure in the C2PA model as it exists today. It's already been exploited, and we want to see that closed. So, again, that's why we want to see these models provide the output labeling at the model level

versus at the user level. It is also one where they're trying to shift the cost of implementing C2PA to the individual versus the model provider. There are going to be costs: if you're producing outputs, you have to store the original output in the cloud and hash it, so that it can be referenced back to. I don't think it's a reality that individuals are going to be making that investment. So you're bearing all the costs.

I believe the model provider should bear that cost. Again, we are a model provider, in partnership with Nvidia and Picsart and others, and I believe that we should create standards that are immutable, that are stored with the content. I think C2PA is a foundation, but it's one that we need to continue to evolve in partnership with the technology industry to get it to something that truly meets the challenges, versus a nice kind of cover when you get called to Congress or

Parliament or into the EU. Sure. Something you can just kind of throw out there as, we've got a solution. We really want to prevent deep fakes. We really want people to have fluency in terms of the imagery that they're looking at, and we think C2PA can evolve to that, and we're going to work hard in partnership with the members of C2PA to try to get it there. And ultimately, what I don't

want to see, though, and you can go back to some of the original mistakes of digital media, is a big cost shift from the technology industries onto the media industry. What I mean by that is, I'm hearing some say, well, we're never going to be able to identify generative content. No, it's very hard. It's very hard. It's a hard problem. That's very interesting. We can invent these tools, but we can't invent a way to identify their output. So what we want the

media industry to do is identify all of their content as authentic. So they have to buy all new cameras. They have to create entirely new workflows. They have to fundamentally change how they bring content onto their websites. That's the investment we want the media to make? I don't think that's the right solution.

I'm not saying that Getty Images won't invest in technologies in order to better track and identify the provenance of content. But I don't think you can say, well, as for the tools that were created that generate all this content and are producing the problem of the deep fakes, the creators of those tools don't have any accountability for creating solutions to identify it. Absolutely. I think we have to have a balanced

solution. Absolutely. It does play into the hands of bad actors, for sure. That's right. There's a really interesting book that I read a couple years back, Power and Progress. And it's an interesting study. I think we've always assumed, and I'm a pro-technologist, I started my career out in the Bay Area, that just technology for technology's sake will be

beneficial to society. That's what we've been fed and that's what we have been taught. And their study really looks at technology over a thousand years and whether it is innately beneficial to society, or whether society needs to put some level of rules around it in order to make it net beneficial. You go back to the Industrial Revolution: basically, the first 100 years of the Industrial Revolution were not beneficial to societies. They benefited a small number of

individuals but largely gave society nothing more than disease. It took things like regulations limiting the work week, regulations on child labor, and certain organizations like labor unions, et cetera, to ultimately get that technology leap to be something that was broadly beneficial to society. And I think that is something that we need to think a little bit more about. It is not how do we stop AI; I'm not trying to put this technology in a box and kill it.

I think it could be net beneficial to society. I think we need to be thoughtful about how we bring it into society. And I don't think racing it out of the box, absent that thought, is necessarily going to lead us to something that is net beneficial. Yeah, you're going to lose your libertarian card from the boys of Silicon Valley. Just saying, you know, you just lost it. We'll be back in a minute.

Have a question or need how-to advice? Just ask Meta AI. Whether you want to design a marathon training program that will get you race-ready for the fall, or you're curious what planets are visible in tonight's sky, Meta AI has the answers. Perhaps you want to learn how to plant basil when your garden only gets indirect sunlight. Meta AI is your intelligent assistant. It can even answer your follow-up questions, like what you can make with your basil and other

ingredients you have in your fridge. Meta AI can also summarize your class notes, visualize your ideas, and so much more. The best part is you can find Meta AI right in the apps you already have: Instagram, WhatsApp, Facebook, and Messenger. Just tag Meta AI in your chat or tap the icon in the search bar to get started. It's the most advanced AI at your fingertips. Expand your world with Meta AI. This episode is brought to you by Shopify. Forget the frustration of picking

commerce platforms when you switch your business to Shopify, the global commerce platform that supercharges your selling wherever you sell. With Shopify, you'll harness the same intuitive features, trusted apps, and powerful analytics used by the world's leading brands. Sign up today for your $1-per-month trial period at Shopify.com slash tech, all lowercase. That's Shopify.com slash tech. Speaking of deep fakes, here we go. I do want to talk about the election and political images.

Getty has put editorial notes on doctored images, like the ones of the Princess of Wales, Kate Middleton, that were sent from the royal palace. You said earlier you don't use editorial content for your AI database, but your competitors like Adobe and Shutterstock have been called out for having AI-generated editorial images of news events in their databases, like the war in Gaza, which were then used by media companies and others. As we said before, their databases are also

being licensed to train some of the biggest AI models. I think the expression in software is garbage in, garbage out. How worried should we be about synthetic content, basically fake or altered images, impacting the outcome of elections, including ours in November, and those images then circulating back into the system like a virus? I think we should be very worried. I think it ultimately goes back to why we're really focused in on some of

the premises of the regulatory elements I talked about. We want to see a situation where we can identify this content. We can give society as a whole fluency in fact. If you don't have fluency in fact, you don't in fact have one of the pillars of democracy. I think it is something that we should be very worried about. I think the pace of AI is moving so fast. The ability for it to permeate society across social networks and those algorithms, I think, is something that we need

to be highly concerned about. I think we need industries and governments and regulatory bodies to come together in order to mitigate that risk overall. I'm hopeful that we can get there. It's a tough problem and it's going to take a lot of people putting energy against it to solve for it. What should newsrooms do about this specific kind of visual misinformation? What responsibility do the agencies have? Obviously, you are not doing that, but your competitors are.

I think ourselves, I think the AP, I think Reuters, I think other news agencies around the world, we have to continue to make sure that our services and our content are incredibly

credible and ultimately live up to a standard that is beyond reproach. I think then we need to work with the technology industry and, again, regulatory bodies and legislative bodies on solutions that address the generative side of things, because there are bad actors in this world, and there are models that allow people to create not only misinformation with respect to elections but some other very societally harmful outcomes, like deep fake porn, that we need to address.

These tools are readily available. What's your nightmare, and how quickly can you react? The Kate Middleton thing took a few days. It's not the biggest deal in the world, but it was still problematic. That took a while. It's expensive to do this, to really constantly be picking up trash that everybody's thrown down. It's very important that we spend a ton of time, first off, vetting all of our sources of content. It's not that we just take any piece of content and

put it on our website and then provide it out. There is a tremendous amount of vetting that goes into the sources of content that we have, from the NBC Newses or the Bloombergs or the BBCs of the world all the way down to the individuals that we have covering conflicts in Gaza or in Ukraine: making sure that we know the provenance of their imagery is authentic, that we know the standard of their journalism is high. That takes a tremendous amount of effort,

time, and investment to do. We need to make sure that we don't, in any way, shape, or form, reduce that investment. We need to increase that investment in the face of generative AI. We need to tell that story. We need to make sure that our customers, who are the media outlets like yourself, know that that is something that we're doing. What's your nightmare scenario? Give me one. My nightmare scenario is that we ultimately continue to undermine the public trust in fact.

We've seen that over recent history. It's been enabled by some technology and platforms. We need to make sure that we don't restrict the public square in terms of debate and ideas and different opinions, but at the same time that we don't feed into an undermining of what is real and what is authentic and what is fact. Those two things, to me, sometimes get conflated and I

think that's a false conflation. I think they can both stand as true. We can have debate. We can have different points of view, but we can't have different facts and we can't have different truths. Ultimately, I think that's the long haul. It's not a particular event. It's a continual undermining of the foundation of that truth. It is certainly easier to be a bomb thrower than a builder. One of the issues is the fall-off in the economics for all media, not just your company,

but everybody. The costs go up, the revenues go down. In the case of media, they suck up all the ad buys and you see declines in all the major media who are fact-based, I would say. Getty's value dropped by two-thirds since you went public. I know SPACs have had all kinds of problems; that's a trend in the SPAC area, which is how you went public. But even with all your AI moves, shares are down. Do you think it's because you're taking a more cautious approach than your

competitors? What turns it around? I don't want you to speak for all of media, but you're running a company here and you're trying to do it a little safer and more factual. That's not as profitable as other ways. How does that change from an economic point of view? I think all of media has this challenge going forward. I can't speak to the specific drivers of stock prices. What I can tell you is that we aren't taking a more conservative or cautious view.

We are taking a long-term view. We are taking a view that we are going to spend money and invest to bring services to the market that we think are helpful to our customers. We're going to invest in lawsuits that we think underpin the value of our content and the information that we have. We are going to continue to invest in bringing amazing content and services to our customers. We think, over time, that is something that will be to the benefit of all stakeholders in Getty Images:

our photographers and videographers around the world, our partners around the world, our employees, and our shareholders. That's what we are focused on doing. I think there is a backdrop within recent trading of the entertainment strikes and the impact that has against our business, some of the massive impacts against large advertising and some of the end-use cases where our content gets utilized. Your customers are also suffering too.

Yes. It hasn't been a great environment for media companies in the recent 18 months. I think the average ad revenue, I would say 10 years. More recently, it's been more acute. You're seeing streaming services cut back, you're seeing ad revenues down double digits. There are some challenges out there. But you aren't seeing that for the tech companies. No, you're not. Again, that goes to where I think we need to have some rules of the road as we approach generative AI. We already learned from some of

the things that maybe we didn't do 20 years ago or 25 years ago. But our business is about the long term. Getty Images has been around almost 30 years. We've focused on making sure that the foundations of copyright and intellectual property remain strong throughout. That's what we'll continue to do going forward. The hits keep on coming, and I don't mean good hits. Then I have a final question for you: are they ever going to get fingers right? What is the fucking problem?

Well, look, I think if you use our model, and I know that you've spent some time with it and hopefully you'll get some more time with it as well, I think you'll see that we largely do. We've trained on some things to try to make sure that those outputs are of high quality. The reality is, it does matter. You said garbage in, garbage out. Well, I would argue the exact opposite: quality in means quality out.

And so we've addressed those issues with the model. Those aren't going to be the hard things. Fingers and eyes. Fingers, eyes, hair, hairlines, smiles, and teeth, I think we've largely solved those items. But we're always going to struggle with solving for the blank-page issue that customers have. And how do you really get to authentic?

I'd say one of the most interesting things to me, and I know we're short on time, is how generative AI comes directly into conflict with another massive trend of the last 10-plus years, which is authenticity. How am I portrayed, how am I viewed, am I positively represented, how am I brought to bear? And what do I see in the content that's presented to me, body representation, gender representation, how do those things come through? And I think

that trend is not going away. And I think it's a world that Getty Images helps our customers address. Although I'm still going round after round with Amazon about my fake books and the fake Kara Swishers, who look poor. They just look poor. Strange Kara Swishers are all over Amazon. Go check it out. Anyway, I really appreciate it. This is a really important discussion. People should understand what's happening to companies like yours.

Well, I appreciate you making the time, and thank you again for having me. Thanks so much, Craig. On with Kara Swisher is produced by Christian Castro-Roussel, Kateri Yoakam, Jolie Myers, and Megan Burney. Special thanks to Kate Gallagher, Andrea López-Cruzado, and Kate Furby. Our engineers are Rick Kwan and Fernando Arruda. And our music is by Trackademicks. If you're already following the show, you're authentic. If not, be careful that you're not being

used as a battery. Go wherever you listen to podcasts, search for On with Kara Swisher, and hit follow. Thanks for listening to On with Kara Swisher from New York Magazine, the Vox Media Podcast Network, and us. We'll be back on Thursday with more. Have a question or need how-to advice? Just ask Meta AI. Whether you want to design a marathon training program or you're curious what planets are visible in tonight's sky, Meta AI has the answers. It can also summarize your class notes, visualize your

ideas, and so much more. It's the most advanced AI at your fingertips. Expand your world with Meta AI, now on Instagram, WhatsApp, Facebook, and Messenger. If you've been enjoying this, here's just one more thing before you go, from New York Magazine. I'm Choire Sicha. Recently, our writer Rebecca Traister noticed something: Republican and

right-wing women have been flourishing and prospering in the last year. From Marjorie Taylor Greene to Kristi Noem, they're tough, they're chaotic, and they tend to have really great teeth. They're also swirling in the orbit of Donald Trump as he seeks to seize the country with an iron fist this fall. Rebecca, I wonder, is this empowerment, or are they just Trump's handmaidens? I brought her in to explain to all of us what's going on here.

Hello, Choire. I'm going to start by asking really boring questions. Here's one: which of the many exciting recent dust-ups inspired you to want to talk to and write about right-wing women politicians? The first time I floated a version of this was after Katie Britt's post-State of the Union response. But I can't say that at that point I thought, like, I want to do a whole scope of Republican women. I was just really into Katie Britt because it was very old school in certain ways.

Like the kitchen, like it was a very white, suburban, middle-class mommy presentation. But it was also like gothic horror. You know, there's blood of the patriots right here in my kitchen with an apple. But it wasn't enough. I wasn't going to write a whole piece about Katie Britt. It might have been the infomercial that South Dakota Governor Kristi Noem cut for the dental work she'd had done, where I was like, what is happening with Republican women?

Right. So Kristi Noem did sort of a physical self-renovation to make herself either more palatable or more powerful, I'm not sure. When she began, she had a very no-nonsense, boxy, polo look, a Hillary-ish, sometimes choppy haircut. And then in recent years, since she's become a little bit of a right-wing star, one of the things she's done is really

change her look. Now, Donald Trump is very open about how he feels about women, how he evaluates women, and Noem has clearly remade herself into somebody who looks like somebody Donald Trump has expressed physical appreciation for. So part of my question in this piece is, what does political power mean if you conform to those kinds of aesthetic standards, but then in some way that winds up diminishing the respect that the people who set those standards have for you?

Your point is a great one, that Trump hangs over a lot of this. They're both soliciting him for a big job at the same time as they know he has standards. But also, at the same time, Trump's big innovation was that performance is power, and they're enacting their own narratives. They've all become Trumpian in their own weird way. Yeah, and I have to tell you, that is very frustrating for me, because I write about politics and I hate the thing where everything is about Trump.

I always want to make it not about Trump. But writing about these women really challenged that conviction in me, because it is clear that, at least for some of them, so many of the new behaviors they're enacting are in response to Trump, are about the single demand in the Republican Party right now, which is showing him loyalty, fealty to this guy. Like all these people, Valentina Gomez, Laura Loomer, they're enjoying the fruits of choice in career and motherhood. Like, does this mean

feminism won? Well, this is what's so dystopian and scary about their project: they're all doing these things which are really fascinating, right? Marjorie Taylor Greene's lifting weights in a video and not behaving classically demure, and all of this sense of empowerment is absolutely what feminism gave to women. Okay, so great, here is its success. But also, the party and the ideology that these women are using these feminist gains to promote is openly dedicated to the

rolling back of those feminist gains. That's Rebecca Traister. You can read her work on Republican women and more at home in our glorious print magazine and at nymag.com slash lineup.

This transcript was generated by Metacast using AI and may contain inaccuracies. Learn more about transcripts.