A Big Week in Tech: NotebookLM, OpenAI’s Speech API, & Custom Audio

Oct 08, 2024 | 32 min | Ep. 818

Episode description

Last week was another big week in technology. 

Google’s NotebookLM introduced its Audio Overview feature, enabling users to create customizable podcasts in over 35 languages. OpenAI followed with their real-time speech-to-speech API, making voice integration easier for developers, while Pika’s 1.5 model made waves in the AI world.

In this episode, we chat with the a16z Consumer team—Anish Acharya, Olivia Moore, and Bryan Kim—about the rise of voice technology, the latest AI breakthroughs, and what it takes to capture attention in 2024. Anish shares why he believes this could finally be the year of voice tech.

 

Resources: 

Find Olivia on Twitter: https://x.com/omooretweets

Find Anish on Twitter: https://x.com/illscience

Find Bryan on Twitter: https://x.com/kirbyman01

 

Stay Updated: 

Let us know what you think: https://ratethispodcast.com/a16z

Find a16z on Twitter: https://twitter.com/a16z

Find a16z on LinkedIn: https://www.linkedin.com/company/a16z

Subscribe on your favorite podcast app: https://a16z.simplecast.com/

Follow our host: https://twitter.com/stephsmithio

Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.

Transcript

There's elements of it that are almost similar to early ChatGPT. Anyone who's building a conversational voice product now has access to that level of conversational performance. The way the majority of people may experience AI for the first time is actually going to be via a phone call. We're taking the oldest and most information-dense of all of our mediums of communication and finally making it almost programmable. Phone calls are kind of this API to the world.

Within a couple weeks of deploying their voice model, they'd had 3 million users do 20 million calls. Last week was yet another big week in technology. For one, NotebookLM, Google's latest sensation, has been making its way across the Twitterverse with its new Audio Overview feature. The feature uses end-user customizable RAG (retrieval-augmented generation), which basically means that people can create their own context window. They're generating surprisingly good podcasts across 35 languages.
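As a rough sketch of that "bring your own context window" idea: the retrieval-augmented pattern is usually to chunk the user's uploaded documents, pull the chunks most relevant to the topic, and pack them into the prompt that drives the generated dialogue. Google hasn't published NotebookLM's actual pipeline, so the helper names below (embed_text, build_podcast_prompt) are illustrative assumptions, not the product's real implementation.

```python
# Illustrative RAG-style sketch only; NotebookLM's real pipeline is not public.
# embed_text is a hypothetical stand-in for any text-embedding function.
from typing import Callable, List
import numpy as np

def chunk(doc: str, size: int = 800) -> List[str]:
    """Split an uploaded document into fixed-size text chunks."""
    return [doc[i:i + size] for i in range(0, len(doc), size)]

def build_podcast_prompt(
    user_docs: List[str],
    topic: str,
    embed_text: Callable[[str], np.ndarray],
    k: int = 8,
) -> str:
    """Pack the user's own sources into the context window for dialogue generation."""
    chunks = [c for d in user_docs for c in chunk(d)]
    q = embed_text(topic)
    vecs = [embed_text(c) for c in chunks]
    # Cosine similarity between the topic and each chunk, keep the top k.
    sims = [float(np.dot(q, v) / (np.linalg.norm(q) * np.linalg.norm(v))) for v in vecs]
    best = sorted(range(len(chunks)), key=lambda i: sims[i], reverse=True)[:k]
    context = "\n\n".join(chunks[i] for i in best)
    return (
        "You are two podcast hosts discussing the sources below.\n"
        f"Topic: {topic}\n\nSources:\n{context}\n\n"
        "Write a lively back-and-forth dialogue with questions, examples, and analogies."
    )
```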

And to add to the voice mix, OpenAI held their developer day and announced their real-time speech-to-speech API, enabling any developer to add real-time speech functionality to their own apps. Plus, they noted a whopping 3 million active developers on the platform. Finally, we saw one video model company, Pika, break through the AI noise with their 1.5 model. All of this gave us fodder to discuss what is really required to capture attention in 2024 and beyond.

Today, we discuss all that and more with a16z Consumer Partners Olivia Moore and Bryan Kim, and General Partner Anish Acharya. This was also recorded in two segments, one with Olivia and another with all three partners, so you'll hear a pivot between the two. Plus, Anish actually predicted that this would be the year of voice, despite it never historically working as an interface.

In fact, Microsoft CEO Satya Nadella even previously called the past decade's generation of assistants, quote, "dumb as a rock." Well, it certainly seems like we're turning a corner. Let's get started. As a reminder, the content here is for informational purposes only, should not be taken as legal, business, tax, or investment advice, or be used to evaluate any investment or security, and is not directed at any investors or potential investors in any a16z fund.

Please note that a16z and its affiliates may also maintain investments in the companies discussed in this podcast. For more details, including a link to our investments, please see a16z.com/disclosures. Another big week in tech. I think the biggest thing I've seen is NotebookLM, so just a quick recap for the audience.

Google is kind of known for these side quests becoming main quests, and this product actually has been around for a while; it originated in 2023. But its new Audio Overview feature has been taking over Twitter with these AI-generated podcast hosts, which are surprisingly good, and I'm saying that as a podcast host, you guys; I have this job. And so basically what people can do is they can drop in their own information in a context window, and then it'll use that to spin up these podcasts.

Olivia, you've actually tried these out, right? Yeah. I think it originated as something for researchers or academics. The idea was that you would store all of your notes, all of your papers, all of your information within this Google workspace. And then this new feature that they've added is these two AI agents essentially that play the role of podcast hosts, and they go back and forth talking about the data, asking questions, getting into examples.

The thing that's really interesting to me about it going viral in the past week or so has been... there's actually nothing that feels incredibly new, or even in some ways incredibly cutting edge, about it. Like, it's not OpenAI's brand-new real-time model that cuts voice latency down to almost nothing. In fact, with NotebookLM, you have to wait three to five, sometimes ten, minutes for it to generate the episode once you click the button.

I think what's really striking about it is the realism and the humanness of the voices, and then also how they interact with each other. Yes. And then there's the intonation, the interruptions. Exactly. They disagree with each other. They interrupt each other. This is not just upload a script and get a read-out. It does feel like two human beings talking.

And to that point, the other kind of striking thing about it is it's not just repeating or summarizing the points that you upload in whatever data sources. They're actually asking and answering really interesting and deep questions. They're making comparisons. They're making analogies. They're taking it a step deeper, almost like, how would you teach someone about this topic? I uploaded basically a bunch of true crime court case filings. And it did a podcast about the case.

And then it spent the last two minutes diving into the ethics of: why are we entertained by true crime? Should we be using this information to create media, things like that? So it's really kind of a next-level interpretation of the content, I would say. Totally. I've seen so many examples of this. Someone uploaded just their credit card statement and the hosts were able to grill them on that. Even then, I don't think the grilling was prompted, per se; it was like, just talk about this.

Find something interesting within this. Yeah. There has to be some sort of very creative element or something behind the scenes. One of the other use cases I loved was someone uploaded their resume and their LinkedIn profile. And it made like an eight minute podcast describing them as this incredible, legendary, mythic figure and going over all the high points of their careers.

I really liked that because I see some people using some of the music LMs, and they're using them for, let's say, a really nice birthday song. Yeah. And so when you played with NotebookLM, was it the kind of thing where, like sometimes on, let's say, DALL-E or Midjourney, you're like, oh, it's not quite what I want, and you're just playing the AI slot machine? Was it like that, or was it, first shot, I'm getting exactly the kind of podcast I was hoping for?

It's a little bit slot machine in that the output is different every time. But I would say it's a lot more reliable in that, in almost every generation that I would do, something would be interesting. It would be on topic. It would be usable. One example, I got very into it. At first I was sticking to uploading academic papers. I was like, I'm going to use this for its intended purpose. And then in one of my generations, I was like, the hosts, they sound like they're flirting with each other.

Right? Yes. They have such good chemistry. And so I was like, what would happen if I upload literally a one-sentence document that's like, I think you guys are in a secret relationship? And they went off on like a two-to-three-minute podcast that sounds, I swear, like the meet-cute in a romantic comedy or something. It's incredibly emotionally compelling, I would say. And so now my vision, I have to do like a full audio drama. Then we have to get into that. We have to get into that. Exactly.

It'll be like the first fully AI avatar movie using the voices inspired by the NotebookLM characters. This one's about AI, but like AI in relationships. Really? Yes, specifically AI that are, like, hosting a show like us. Interesting. In Google's NotebookLM environment. Oh wow. So, like, could we be secretly dating? Exactly. That's wild. That's what the document asks. Someone thinks we're giving away, like, secret love notes to each other through our banter. Well, what was the end? Do they agree?

I mean, you have to listen to it and get your take. What if those AIs, you know, actually developed feelings for each other? Like real feelings. Yeah, exactly. So it's like you're saying two lines of code could fall in love over a spreadsheet or something. That's the idea. Yeah. It's kind of wild, but also kind of, I don't know. I know, right? Intriguing. And so given that you have played around with it and that a lot of the feedback is really good and people are pleasantly surprised by this.

What's your reaction? Because like you said, there are products like this out there. I mean, with AI, there are so many trends, as we've seen, like products that get really hot one week and then something more interesting comes along. It could be me just being optimistic, but it feels like there's something here. And I hate to make this comparison, but there's elements of it that are almost similar to early ChatGPT.

And in that one, it's really usable even for people who aren't academics, people who don't know that much about prompting. Anyone can upload a paper and kind of generate a podcast. The other thing that feels ChatGPT-esque is that people are already using it, quote unquote, off label. And maybe it's not NotebookLM itself that becomes the winning product. We'll see. I think there's a lot Google could do to extend this more. They could make it a mobile app. You could customize the voices.

I could see it being used for kids' bedtime stories if they tweaked it a little bit. But I think something about the format of personalized podcasts or personalized audio is going to happen. Some of the experiences or the podcasts being generated are no doubt impressive, but also feel maybe a little gimmicky, or like cool one-offs. But is this really something that you can see evolving into something practical, useful?

I, for one, can see it actually becoming a real product, because right now it's doing podcasts, for example, but over time it may be easier to add avatars or video as a backdrop of what they're talking about. And that becomes basically a short-form YouTube video that is very personalized. So one of the fun examples was, like, kids love Minecraft. I love Minecraft.

When there's, like, a new Bedrock edition that drops and there's, like, release notes that are pages and pages long, kids rely on YouTube to figure out what's new, like what changed. You could drop the release notes into NotebookLM and just say, tell me what's new, and tell it in a way that kids love. And then it generates this 20-minute or 10-minute back-and-forth of, can you believe this new update?

It allows this character to fly. Those are the types of things that actually become really interesting in an everyday use case. It makes me want to have, like, a digital diary or something where you can upload it and then it gives you, like, a podcast of how the last month of your life has been. Oh my God. Yeah, that's fun.

Because the innovation is less, like, a new medium and more how they've really unlocked something, to your point, around how to make any topic exciting, generate insights, and make it something that you really want to listen to and spend time on. There are truly, potentially unlimited outputs. I totally agree. It could be videos, it could be avatars.

The interesting thing about that is I'd always thought of it as you can read something, watch something, or listen to something, but maybe a nuance of listening is listening to it in conversation format. I do think there's something really magical about this. It's the two hosts going back and forth with each other.

Yes. A TikTok I saw yesterday had two million likes, completely organic, and it was a law school student who was studying for her midterm, and she had uploaded, like, I don't know, 60 pages of lecture notes, and then it generated a 12-minute podcast for her to review before the exam. If you even hear another human being telling a story around an example or a case, it makes it so much easier to remember and understand. You're basically opening up another lane, right?

Because you can't read something while you're immersed in something else in the real world, but you can listen to something while you are. Maybe another thing to talk about is OpenAI's DevDay. They released a lot, but maybe the highlight was this real-time speech-to-speech API. And I know you've thought a lot about this idea that real time really matters for speech, and that latency is almost like a metric that we're going to hear a lot more about.

Yeah. There's a threshold above which voice doesn't really work as a modality to interact with the technology because it doesn't feel real. And below that threshold, which is maybe three or four hundred milliseconds, it sort of holds the illusion of talking to a person. Phone calls are kind of this API to the world. So it feels like the way that the majority of people may experience AI for the first time is actually going to be via the phone call.

And that is unlocked by this real-time technology. And the crazy thing is, like, so much still runs on the phone system. Absolutely. Even if you just think about one vertical, like healthcare, it's taking incoming calls from patients. It's doctors calling other doctors, calling pharmacies, insurers. So if we think about how this becomes more real time, are there different applications that you think are unlocked, like, let's say, music education?

How does real time voice maybe change some of those industries? Most of the EdTek products we've seen so far have been like, you attempt to home more problem maybe then you take a screenshot, you upload it to an AI product that tells you if it's right or not. And now with real time, both voice and some of the video and vision model stuff, it's actually almost like having a tutor sitting next to you going through it with you, even with some of the vision stuff, show it your piece of paper.

So now it's like AI is moving towards actually helping you learn, versus a lot of the use cases so far have been maybe cheating-adjacent, in like, how do I just get to the answer? Now it's, what is your process? That's actually really interesting. You're basically saying that, in a way, the lack of latency allows people to engage in that moment. Yeah. And in the past, maybe because there was more latency, people took shortcuts because they didn't want to wait.

Or if it's with you, it can say, here's the way you're doing it; here's another way, actually, that might make more intuitive sense for you to solve this math problem. It's going along the journey of understanding with you, versus just being kind of answer- or outcome-based, which a lot of the AI EdTech products have been historically. What's really interesting about that is that there's a sort of design language, or design cues, that are already built into conversations.

So interrupting is one, or the sort of, uh-huh, uh-huh is another. So that actually should unlock much more interesting product experiences as well. Of course, the low latency is necessary for that, but so is the ability to even understand these parts of speech that, I don't know, they're not quite nonverbal, but they're also not part of the explicitly spoken language. A lot of products, especially in consumer, it's not just about being optimal, per se, or perfect, right?

In fact, what a lot of people are commenting on when you see the NotebookLM examples is the filler words, the interrupting; it is the imperfections that people are drawn to. This is a big step forward.

And for anyone who tried to use the ChatGPT voice mode before: essentially, you would press a button, you would say something, the LLM would pause, it would interpret it, it would generate something to say back, and then it would return an answer. But it would take at least a couple of seconds, it was very buggy, it was very glitchy. It was more like sending a voice memo, having someone hear it, and send back a voice memo than having an actual live conversation with a human.

And so the new model is truly more like almost zero latency, full live conversation. This has been available through ChatGPT's own Advanced Voice Mode, which people are using and loving. But what happened this week at Developer Day was they're essentially making that available via API for every other company.

So anyone who's now building a conversational voice product has access to that level of conversational performance, which is huge and really exciting, because it brings a lot of AI conversation products from barely workable, or not really workable, to suddenly extremely good and very human-like. Yeah, totally. You had a tweet that said, this is a massive unlock for AI voice agents; I'm expecting to see a lot more magical products in the next few months.
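For developers, a minimal sketch of what "available via API" might look like is below. It assumes OpenAI's Realtime API beta as documented around DevDay (the wss://api.openai.com/v1/realtime WebSocket endpoint, the gpt-4o-realtime-preview model, and event types like session.update and response.create); treat it as a sketch to check against the current docs rather than a drop-in integration.

```python
# Minimal sketch: open a Realtime API session and ask for a spoken reply.
# Endpoint, model name, and event names reflect the late-2024 beta docs;
# verify them against OpenAI's current documentation before relying on this.
import asyncio
import json
import os

import websockets  # pip install websockets

URL = "wss://api.openai.com/v1/realtime?model=gpt-4o-realtime-preview"
HEADERS = {
    "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
    "OpenAI-Beta": "realtime=v1",
}

async def main() -> None:
    # Note: newer websockets versions use additional_headers= instead of extra_headers=.
    async with websockets.connect(URL, extra_headers=HEADERS) as ws:
        # Configure the session for audio output with a chosen voice.
        await ws.send(json.dumps({
            "type": "session.update",
            "session": {"modalities": ["text", "audio"], "voice": "alloy"},
        }))
        # Send one user turn and request a response. A real app would stream
        # microphone audio via input_audio_buffer.append events instead.
        await ws.send(json.dumps({
            "type": "conversation.item.create",
            "item": {
                "type": "message",
                "role": "user",
                "content": [{"type": "input_text", "text": "Say hello in one sentence."}],
            },
        }))
        await ws.send(json.dumps({"type": "response.create"}))
        # Read server events; audio arrives as base64 chunks in response.audio.delta.
        async for raw in ws:
            event = json.loads(raw)
            if event["type"] == "response.audio.delta":
                pass  # decode and play event["delta"] here
            elif event["type"] == "response.done":
                break

asyncio.run(main())
```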

We're quickly leaving the era of latency and conversational experience being a blocker. Can you speak just a little more to that in particular? Yeah, absolutely. Many of the AI voice products didn't really feel even SMB-caliber in terms of quality, let alone something an enterprise could actually deploy. So now it is, I think, arguably enterprise quality, in terms of real companies being able to replace humans on the phone with an AI on the phone.

We're seeing this for all sorts of use cases. The most obvious is maybe having someone answer the phone at a pizza shop to take orders or at a small business to book nail appointments, all the way to things that are a lot more complicated, like even doing interviews, first round interviews with AI, which is crazy to think about, but it's happening.

Or even more vertical-specific use cases: freight brokers spend all day on the phone, calling carriers, calling truckers, and trying to find someone to haul a load in a certain price range. Now you could do that with an AI that can call 100 carriers at once and negotiate the price, instead of having a human being do those calls sequentially all day. This new API, and the other open-source attempts at the same type of model, are really going to allow those products to shine.

Yeah, and some of the products you're describing are kind of voice-first. But many of the apps that we've had to date are typically not voice-first, perhaps because we actually haven't had the technology. And so I want to refer to Anish's big idea at the end of 2023, which right now feels very accurate. Yes, he was right on. Yeah, it said that voice-first apps will become integral to our lives.

And he basically says that despite voice specifically being the oldest and most common form of human communication, it's never really worked as an interface for engaging with technology. It feels like voice is one of the biggest things that's being unlocked by AI. Voice is the easiest content to create and we're all creating audio all day, every day, essentially. But that content has never really been captured or used or automated in some ways.

Like now, even outside of real time, there are so many products that will listen to your meeting, hear you say something, and automatically Slack someone with a follow-up, or use it to trigger a commit in GitHub or a task in Asana that your team has to follow up on.

And so I think what we're seeing now, with both real-time voice and non-real-time voice, is we're taking the oldest and most information-dense of all of our mediums of communication and finally making it almost programmable and usable in a really powerful way. The one thing I think we didn't quite predict when we were forecasting voice for this year was that it's really, really been working for B2B and not as much on consumer quite yet. We're getting there.

I think on B2B, even thinking about the voice agents, a lot of businesses are struggling to find people to answer the phones for all sorts of roles. They're struggling to retain them. It's expensive. And so it's super natural to plug in an AI that can perform at similar quality. The consumer use cases are a little bit less obvious. It's probably worked the most in companion so far. So again, ChatGPT Advanced Voice Mode or Character AI.

I think they announced that within a couple of weeks of deploying their voice model, they'd had three million users do 20 million calls. Really? Yes. Because if you're spending hours each day anyway talking to this companion, giving it a voice and making it more real makes a lot of sense. So that, to me, was like the shining star of voice so far. OpenAI did highlight two other consumer use cases on Developer Day.

And both of them were actually these kind of high-touch, expensive human services, almost, that are now democratized with AI. So one of them is a company called Speak that does language learning. This might be controversial: I love Duolingo as a product, I love it as a brand, but I think it's hard to use it to learn a language end-to-end, because it's just limited as an interface. So if you really want to learn a language, you might have to pay someone, I don't know, $50 to $100 an hour to tutor you.

And so the idea of Speak is you have an AI voice agent that is essentially your language tutor. And it's much more accessible and affordable. So that was one. And the second one they highlighted was, what if you had a nutritionist via AI? So this is a product called Healthify, where you can send in photos and then talk live about what you're eating every day and your diet. So I think we'll see more of those use cases unlocked with better voice models. Yeah, I need that.

I've been saying for a while, I didn't think of it specific to voice, but that I need an AI to just call me out on my BS. Yes. You said you were going to run. Yeah. You didn't do the things that you said you were going to do. But also, what you're describing, you used the Duolingo versus Speak example. But in Anish's prediction, he also talks about how, yes, some of these big companies are going to integrate these APIs or integrate this technology.

But Gmail is probably still going to look like Gmail. And so how do you think about that balance between the incumbents utilizing this technology and then what's going to sprout that's completely new? It's really interesting, and something that we watch really closely in consumer in particular, because you would think that the Googles, the Microsofts have all of your data. They have all of your permissioning. There's a lot that they could do.

I think what we've seen is they're structurally, in some ways, disadvantaged in building towards this AI shift in a really native way. One, these are big companies now. They have a lot of people. They have a lot of competing priorities. And then the second thing would be, in some ways, they would cannibalize their own products.

Like, our view has been Google is likely to maybe add AI to augment Gmail, but are they likely to create the AI-native version of Gmail that you could only conceptualize in the past three to six months? Probably not, just because, again, of how big of a company they are and the fact that they have so much riding on the continued success of the existing product. A good example of this is actually Zoom added transcriptions. Are people using that?

Yes, but there's also been a ton of products that are independently successful in doing AI meeting notes, and those largely are building towards more specific and opinionated workflows for different types of jobs or tasks. And it's just something that Zoom is never going to do, because they're such a broad-based platform. Talk about a completely new platform: like, imagine Zoom, but it's asynchronous. Yes, right. They're never going to build that, to your point. Exactly.

Because they're inherently synchronous. Clearly OpenAI is investing in voice, right? And that's not necessarily a given, right? If you think about it, they also do imagery; they haven't really talked about DALL-E in a while, right? They also do video; Sora came out a little while ago. But there really seems to be this voice push despite them operating across modalities. Is that a signal people should be paying attention to? I think so.

I think we've already seen, even though it's still so, so early, like, eras of AI so far. Creative tools was the first era, and it's still a massive era. And I think we saw a ton of investment in image generation, video generation, music generation, much of which is still happening.

Especially, it feels like as AI moves from pure consumer use cases into more kind of controllable, highly monetizable enterprise use cases, it does feel like voice is kind of a unique unlock in that it's a real game changer for companies in particular to be able to capture and utilize this audio data that they never have before.

Maybe another thing worth talking about here from DevDay is that they announced that they have three million active developers in the ecosystem and they tripled the number of active apps in the last year. Since you've been studying consumer for so long, maybe ground the audience: how much quicker is this happening than, let's say, the app era when Apple released the App Store?

How long did it take for three million active developers to be building on it, and just how big is that kind of number today? Yeah, that's a great question. I have no idea. As you were asking the question, I was like, do I know that for the App Store? That's right. Well, it took, I assume, years. Three million developers, that's incredible.

Like, my math was like, look, I don't know the App Store number, but let's say each developer has the ability to, I don't know, maybe reach hundreds or a thousand unique users. That's sort of how I think about it, right? Basically, the reachability of what they're building. So I think the other question is, what is the revenue per developer in the App Store, and is that a proxy for AI? Yeah. That's super interesting.

That's the data that I think was put out where you look at, not necessarily the App Store ones, but SaaS: historical SaaS companies versus Gen AI companies, and how the Gen AI companies are reaching a scale of revenue way faster than their SaaS counterparts. Very interesting. Yeah, I think a big part of that, though, is because Gen AI is so well set up for consumption revenue, and so many SaaS businesses are just that, SaaS.

They're like, you pay a fixed fee for the service monthly, and with a lot of these new businesses you're paying on a consumption basis. You're also pricing it as a subset of labor costs, which are traditionally priced far higher than software costs. I think that's a far more compelling argument for why the revenue ramp is much faster, versus I think the reason the report gave was that the Gen AI companies incur training costs upfront.

Therefore, their imperative to make money is higher than SaaS. Which, maybe, but we know the ones that are making money aren't necessarily incurring a huge training cost upfront. Much more likely is that they're replacing labor costs, or they're just so useful or so unique that willingness to pay is just higher. For sure. I mean, I might buy that argument in consumer, in that the willingness to pay of consumers is way higher post Gen AI than pre Gen AI.

So maybe, but for SaaS, I mean, SaaS businesses have always existed to make money. But the developer community: three million people actively developing on it today, based on how old this platform is, like, that is incredible.

Yeah. I also think I'm seeing so many people who wouldn't have previously called themselves a developer creating just really small apps, or even using the API for themselves, in a way that, if we use the parallel of the App Store in the past, you weren't really creating an app for yourself back in the day. The barrier to entry for that would just be too high, and it just wasn't on many people's radars.

You know, the story of a lot of productivity and prosumer companies is enabling app creation. Like, Notion is a big app platform. Actually, people have created these daily habit tracker apps and a bunch of other things in the Notion app, sort of, but we didn't see a store on top.

Yeah. Totally. Yeah. Airtable, obviously, and products like Retool. There's a lot of people who have had, at least, latent demand to make apps, especially people that are non-technical, in a business context or a hobbyist context. And AI, I think, is really unlocking it. Yeah. The App Store example is a very good one, because we're seeing this maybe fragmentation, in a positive way, of the types of developers that are building on OpenAI models.

There's literally people who we talk to who are like, I'm never going to raise venture funding. I am printing cash. Basically, I'm making a million or two million dollars a month off of this. Not only thin wrappers, but sometimes very sophisticated products that target maybe a really specific use case. So we see that, and that could be an OpenAI developer. But also we see developers who are like, no, I'm going to build a $50 billion company utilizing or fine-tuning these models.

So, similar to the App Store, we saw a big range of people who are like, I'm just going to be a solopreneur making an app, to, I'm going to build a generational business on top of the App Store. Maybe the difference to me here so far has been, kind of like, as with everything in AI, the slope of the curve or the speed of the ramp.

I don't think we often saw, especially in the early days of the App Store, solopreneurs making millions of dollars a month; that's something that has been very uniquely enabled by AI. Yeah, and you see this overlapping with the code LLM space, right? Yes, exactly. And all of these tools that allow people who couldn't code before to become a developer. Totally.

Yes, you don't have to be a developer or a designer; there are so many skill sets now that you can abstract away to AI, as long as you have good taste and good ideas. That tooling did not exist in the App Store era and now exists in the AI era. Well, maybe to that end, clearly there's a lot of building happening, and we've talked about this before, but I'd love to talk about the playbook, right? Because if you're going to build something within AI, it's more competitive than ever to get that attention.

And so maybe one frame for us to talk about that against is that Pika launched 1.5 this week. And I just saw so many meme videos. It was so viral, people squishing things and inflating things, right, taking a meme and distorting it. Yeah. Exactly. It was actually really fun. So in a pretty intuitive way, I understand why that kind of model went viral, but we are getting to the point where, is there fatigue when someone releases a new model?

I'd love for you to just maybe break down what you might call the anatomy of a successful launch in this world. If you think about video as a category, when Sora first came out with their examples, minds were blown. And I think that became this front-of-mind thing of, oh my God, you can create and generate videos. Now, the interesting thing about video is that it's not all created equal, right?

There's character-centric video, and then you have more scene-generation video. What is happening in the scene, the content density of the video, always mattered, right? Slow-motion movement of a scene is video, but it's a lot less interesting. It's like, walking around a garden: interesting. But cats moving: cool. What we're seeing now is these products are becoming a lot more opinionated and a lot more specific, if you will.

So we talked about Pika, but you also have the likes of Viggle, where it's templatized what you can do, like the Lil Yachty dance walk-out scene. That's very opinionated; it's not any video. It's a very specific movement and scene that you're putting yourself into. It's the same thing with all these sorts of templates that are going viral: you take a specific object in the video and you're modulating it, whether you're squishing it, blowing it up, or inflating it so it floats away.

It's sort of unexpected. It is unexpected what's happening in the video, right? It's not a cat walking from point A to point B. How interesting. You don't expect the meme guy looking at another woman to actually be squished in a picture, or you don't expect all these different meme characters to be blown up all of a sudden. I think that unexpectedness is sort of the next evolution of what's happening.

Yeah, I mean, one thing that was really interesting there is there's a subset of things that people expect from video, and with AI, it's not enough to just give people that. Or maybe there is some subset, if you're creating a stock video company, that's one thing. But in order to go viral, in order to garner attention in this very busy world, you need some sort of unknown quantity and an opinionated point of view on what that should be, right?

They could have easily said, oh, we want video to be longer. Because that's hard. That's really hard. Like, a 30-second video with some consistency across the scenes is a difficult thing to do. They could have done that, but instead the team decided, you know what, we're going to pick objects in the scene and do weird stuff with them. Do you think that's required now, to basically design around some sort of viral element?

I think unless there has been a large, shocking development in the underlying modality, again, like video with Sora, you do need some unexpected element of, again, opinion to garner attention, or the quality just needs to be an order of magnitude better, not just 20% better but much better; then I think you get attention. But that's the underlying tech stack evolution, which I think we'll continue to see as well.

So I wouldn't say it's like a playbook where the only way to do it is to come with wacky, very attention-grabbing things. There's, of course, the underlying technical evolution that will continue to sort of push the boundary forward. All right, that is all for today. If you did make it this far, first of all, thank you. We put a lot of thought into each of these episodes, whether it's the guests, the calendar Tetris, the cycles with our amazing editor Tommy until the music is just right.

So if you like what we've put together, consider dropping us a line at ratethispodcast.com/a16z and let us know what your favorite episode is. It'll make my day, and I'm sure Tommy's too. We'll catch you on the flip side.

This transcript was generated by Metacast using AI and may contain inaccuracies.