¶ Intro + What is Built This Week?
This is a tool we call Ryz Labs' video studio. There's the dog skateboarding in Venice, Italy. Replit is a platform that allows you to vibe code web applications. It's still so early for AI video generation. I think the more realistic things look when you see video, the more surprising and shocking it is. Built This Week.
Welcome to the inaugural episode of Built This Week. I'm your cohost, Sam Nadler. I'd like to also introduce my cohost and business partner, Jordan Metzner. How are you doing today for our inaugural episode, Jordan?
Hey, Sam. Welcome to Built This Week. Super excited to kick this thing off. Let's get started.
Let's do it. Before we get into our docket, what is Built This Week?
Built This Week is a demonstration of some of the applications that we've built internally here at Ryz Labs and for some of our other companies leveraging AI, plus some cool new AI tools in the market, as well as AI news and other cool things that are happening. You know, every day we're building new products. The market's changing quickly. There are new models, there's lots going on, and we're just super excited about that, and I wanted to share some of those tools and features with the rest of the world.
Exciting. So just for context, these are tools we for the most part have built within the span of a week or even less. They can be internal tools we built for our team, or tools that we're using to grow our businesses, but primarily built using a portfolio of different AI tools on the market. I would say these aren't just one to two hours of vibe coding; they require a little bit more work, a couple of days, and are a little higher level than some of those videos you may see on TikTok or other social media platforms. But they theoretically should be things nontechnical people could do. Correct?
Yeah. I mean, I think there's probably some level of technicality needed, but there are just so many cool new AI features and tools and APIs and models that, depending on your business and your needs, you can leverage them really quickly to automate things in your business, scale them up, and generate new content. In our case today, we're gonna talk about video generation and image generation. For our businesses, we generate a lot of content for social, and it's critical to get as much content out there as possible. The biggest time suck is usually generating the content itself. Before AI, we had to generate it all manually or use stock photography or stock video, which just doesn't look that good. So luckily, with AI, we can be really creative, and while it's not perfect yet or fully photorealistic, you can play off of that to try to be funny and creative and make new kinds of content.
¶ Launching Ryz Labs Video Studio
And yeah, in the case of the product we built today, it's had a huge impact on our social teams.
Perfect. That's actually a perfect segue. We're gonna jump into the tool for the week, a tool we built internally. But just before we even get into what the tool is: number one, roughly how long did it take us to get this v1 out? And generally speaking, what was the purpose, what was the problem we were trying to solve? Then we'll get into the tool itself.
Yeah. Good question. So there's been a lot of marketing and promotion around new video generation and image generation tools. We currently offer a lot of different AI tools to the people that work at Ryz Labs. We have what we call AI Ryz Chat, where we've built our own kind of ChatGPT-style app where users can pick from all different types of models, from Anthropic's Claude to ChatGPT to DeepSeek, and just chat with any model they want.
And what that allows us to do is avoid high subscription fees over a large population of people and instead leverage API keys as a consumption layer. That drives down the cost per use case, and you just pay for what you use. And it opens up the number of models and tools that our folks are able to use. So instead of saying, okay, I'm gonna choose a subscription to Anthropic or a subscription to ChatGPT for my folks, we built a tool that gives them access to all those models simultaneously without paying any subscription fees. You kind of get the best of both worlds. And I think the launch of Veo 3, the newest Google model, was probably the biggest pivot.
You know, Sora from OpenAI, their video model, made some big rumblings, but the quality of it has not been that great. And then when Google dropped Veo 3, it seemed like we had a step function improvement. And right after that dropped, we've seen a bunch of new models. And at Ryz Labs, we do a lot of social media content. We have a lot of brands.
We have a lot of different businesses. We're producing all different types of video content, for our own businesses and for clients. And usually the biggest bottleneck is just the time it takes to produce videos, not just the editing but creating them and filming them. We'd have to do photo shoots and video shoots and all that. So with the advent of these models, our goal is to be able to produce more content, faster and better.
So there was just a lot of demand from the social team: how can we get access to Veo 3? And we wanted to do something pretty similar to what we did with the chat, by giving access to not just Veo 3 but as many top-tier models as possible, so they can generate their own content. So we built our own video generation tool. I recently just added image generation. It leverages a bunch of different models, so it allows our team to create their own content pretty rapidly. I'll show you a little bit of how the tool works, as well as how I built it.
¶ Why AI Video Generation Matters for Startups
Perfect. Before we get into the tool and how it works, I think you explained why it's important to provide this portfolio of different tools for our team to use in one central place, without having to pay subscriptions. But I think we all recognize that, at least right now, Veo 3 is kind of the best tool out there. So why wouldn't we just give access to Veo 3?
Yeah. That's a good question. Price and consumption are important as well. Veo 3 right now is, I would say, pretty expensive. It's about $6 per eight-second generation, and the generations are not perfect.
So if you wanna make a thirty-second commercial, you're gonna generate at least four videos, presuming each output is perfect exactly as you need. If you have to regenerate some of them, or you want more than four scenes in that thirty-second commercial, you could be generating 10 clips and spending $60, $80, $100 in just a matter of a few minutes. So it definitely gets expensive quickly, depending on what you're doing. And expensive is relative. I mean, the quality of the video coming out of Veo 3 is so good that if you compare it to what it would cost to set up a full day of shooting, of course Veo 3 is much cheaper. But from a startup perspective, you can pretty easily spend hundreds if not thousands of dollars if you don't pay attention.
So I think it's all relative. I would say if you're a marketing agency filming expensive commercials for big brands, then yeah, spending a few hundred or a thousand dollars on generations of concepts is probably pretty cheap.
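The cost math above is easy to sketch. This is just back-of-the-envelope arithmetic using the $6-per-eight-seconds figure quoted in the episode; the regeneration multiplier is an assumption, since in practice not every clip is usable on the first try.

```python
import math

VEO3_COST_PER_CLIP = 6.00  # USD per 8-second Veo 3 generation
CLIP_SECONDS = 8

def spot_cost(spot_seconds: float, regen_multiplier: float = 1.0) -> float:
    """Estimated spend for one finished spot of the given length."""
    clips_needed = math.ceil(spot_seconds / CLIP_SECONDS)
    return clips_needed * regen_multiplier * VEO3_COST_PER_CLIP

# A thirty-second commercial needs at least four 8-second clips.
print(spot_cost(30))       # best case, every clip is a keeper: 24.0
print(spot_cost(30, 2.5))  # ~10 generations, as in the example: 60.0
```

At a 2.5x regeneration rate you land right at the $60 figure mentioned above, and a full day of iterating concepts gets into the hundreds quickly.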
Okay. Cool. I do wanna transition to the tool itself. But what I would love is, a, kind of an overview of the tool, and b, how was it actually built? What were the different pieces you pulled together to make the alpha-beta version work?
What were some of the very early features we iterated on quickly to get it out to our team? And then, once the team had been using it for the past week, what were some of the additional features we've layered on recently? And then lastly, how have they used it in the past few days?
Okay. Now let me see if I can share my screen. Alright. So you can see the blog. You can see the website.
¶ How Veo 3 Pricing Works ($6 for 8 Seconds)
Right? Okay. Cool. Alright. Let's jump into the tool. Okay. Cool. So, yeah, this is the tool. We call it Ryz Labs' video studio. We just added images, so maybe now we need a new name for it.
What I've done is I've gone and found the top video generation models on the market today. In this case, that's these six right here. They're all a little bit different. They all specialize in different output qualities, and the only way to really know which one's best for your use case is to just generate a bunch of samples and see which works best. I've included some of the pricing here as well.
They're all over the place. This one is 30¢ per second. We know Veo 3 is $6 per eight seconds. This one has token-based pricing. So they're all a little bit different. But essentially, the way the tool works is you pick a model.
In this case, we can pick something like Seedance. We have all the configurations here, so you can start with a reference image. You can change your frame rate. For now, let's just leave all of this as it is. And you can start with something like: a dog skateboarding
in Venice Beach.
Oh, cool. Well, why don't we go to Venice, Italy?
Oh, nice.
Okay. Cool. So: dog skateboarding in Venice, Italy. And then I built this Enhance with AI feature. Basically, I highly recommend using something like ChatGPT to enhance your prompts, so I just built that into the tool.
Basically, what you do is you take this and enhance it with ChatGPT: you tell ChatGPT which model you're working with, and you get back a more enhanced prompt. And here it is: let's capture the dog's joyful expression against the iconic Venetian architecture and gondolas. So anyway, it wrote a really nice prompt here.
But wait, really quick: the Enhance with AI was one of those second-round features we built after initially using the tool, seeing the video quality, and understanding how important the prompt is in creating the types of videos you want. That was, I believe, one of the first features we built right after we launched the initial product.
Yeah. I mean, I wrote something like "a dog skateboarding in Venice, Italy," and here it says "a golden retriever skillfully skateboarding through the vibrant streets of Venice, Italy; capture the dog's joyful expression under the warm afternoon sunlight." So it's just so much more robust in helping the model generate a high-quality output versus something as simple as the initial prompt. Both will work, but the quality of the output will definitely be impacted. Another thing to note is that all these models have different generation times.
So the price for the most part is relative to how long it takes to actually make the video. Something like Veo 3 sometimes takes up to ten minutes to generate an eight-second video, hence why the price is so expensive. Seedance, ByteDance's newest model, may take just a few moments. So what I do here is, after I generate it, it'll say it's in progress. We built a small video history page here, and you can see the video is being generated. So I can just keep hitting refresh.
It's processing and, hopefully, here we go. Awesome. Okay. Cool. So if you see here, there's the dog skateboarding in Venice, Italy. Cool. And I can make this a little bigger. I don't know if you guys can see it bigger.
Yep. Cool.
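Under the hood, the "keep hitting refresh" flow is just polling a job status. A minimal sketch, assuming a hypothetical `check_status` callable that asks the provider whether a generation job has finished:

```python
import time

def wait_for_video(job_id: str, check_status, poll_seconds: float = 5.0,
                   timeout_seconds: float = 600.0) -> str:
    """Poll until the job succeeds or fails, or give up after the timeout
    (600 s here, roughly Veo 3's worst-case generation time)."""
    deadline = time.monotonic() + timeout_seconds
    while time.monotonic() < deadline:
        status = check_status(job_id)
        if status in ("succeeded", "failed"):
            return status
        time.sleep(poll_seconds)  # the manual "hit refresh" step
    return "timed_out"
```

A fast model like Seedance comes back within a few polls; Veo 3 can take nearly the full ten minutes.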
Alright. I mean, pretty good. So what else can I do with that? Obviously I can download it. I have a copy button here in case I wanna use that prompt again.
¶ The Stack: Replit, Replicate, and GPT-4
And then I also have a share link as well, in case I wanna share it with someone, or I can throw it in the trash. So if we just use this as an example, I'll take the same prompt, go back to my model, pick something like Veo 2, which is Google's previous model before Veo 3, paste the same prompt in, and we can generate a video. It'll take a few moments, but we'll see the quality of the Veo 2 video versus the Veo 3 video, and then you can compare and say: is it worth it? I would do a Veo 3, but it's gonna take like ten minutes right now, so it's probably not best to do it while we're live here.
And that copy prompt feature is also one of those quick iterations we made after we started using the product. We wanted to test the prompts on different platforms to see which one worked best, or make a small tweak when it didn't generate exactly what we wanted. Instead of trying to recreate the prompt or save it somewhere else, we created this copy prompt feature, which has been really useful.
Yeah. I think it's easy to see where our roadmap is going: you write one prompt, and then we generate multiple videos across multiple models. We can go horizontally, producing the same video across all the different models at the same time, as well as vertically within each model, making multiple generations with the same prompt just to get different outputs, since a model will not always produce the same output from the same prompt. And so you can get to a point where, with one single prompt, we'll generate maybe 25 videos or something like that.
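The horizontal-by-vertical fan-out described here is just a cross product of models and repeat runs. A sketch of that roadmap idea, where the model names are illustrative placeholders, not the studio's actual slugs:

```python
from itertools import product

MODELS = ["veo-3", "veo-2", "seedance", "kling", "runway"]  # illustrative names
RUNS_PER_MODEL = 5  # vertical axis: repeat runs, since outputs vary per run

def fan_out(prompt: str) -> list[dict]:
    """Expand one prompt into the full grid of generation jobs."""
    return [
        {"prompt": prompt, "model": model, "run": run}
        for model, run in product(MODELS, range(RUNS_PER_MODEL))
    ]

jobs = fan_out("a dog skateboarding in Venice, Italy")
print(len(jobs))  # 5 models x 5 runs = 25 clips from one prompt
```

Each job dict could then be handed to whatever generation client the tool uses; the only real limit, as noted below, is cost.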
And so your ability to bring all these assets together is super feasible. Again, it's simply a limitation of cost. So, here's the one using Google Veo 2. Here's the dog skateboarding in Venice. This one looks, in some cases, a little more realistic in the sense that it looks a bit more like Venice, and it's a golden retriever.
But you can see the video quality is a little low on the pixels, etcetera. Cool. So that's how it works.
What a great tool. And I wanna get into how our team has specifically used this tool in the past few days. I mean, we've quickly made two social and TV campaigns that I think are worth talking about. But before we do that, I want you to walk me through the major building blocks of getting this out the door in just a few days. What tools did you use? What were some of the roadblocks you encountered?
Okay. Yeah. Happy to do that. One thing I should show you before that: right now would be a great time to show the capybara commercial.
Capybaras are the most unique and relaxed creatures in the animal kingdom, yet the trials of modern life sometimes push their limits. But nature, in its infinite wisdom, understands that even capybaras require a chance to get off-site and restore their tranquil harmony. Off-site IO handles the stress of planning great company retreats, saving teams money. We handle the details, you just chill. Off-site IO, your off-site event planner.
I think that's pretty funny. We also have, obviously, the Wes Anderson one we did for Ryz Labs.
¶ Features We Built After Real-World Use
This is Kyle, our DevOps guy. He lives in a tree now. Something about latency and forest spirits.
This is Jess, our designer. She's never used Figma, but she does love clip art.
My CTO? That's Travis. We met at Burning Man. He doesn't believe in computers. Anyway, I found Ryz Labs. They just show up and stuff gets built.
So we have all these good commercials, and I think it's only gonna get better as we get better at generating content, our team gets more creative, and the models continue to improve. But just before I show you how I built it, Sam, let me show you this new feature that I don't think you've even seen yet. I recently added a generate images page. These are the top four image generation models.
Obviously, ChatGPT has a model. I haven't integrated it yet, but I probably will soon. But here you can see these models. Image generation is obviously significantly faster than video, since it's just one frame. We can pick something like Flux Pro. And I think we had: a dog skateboarding in Venice, Italy. I also have an Enhance with AI here, so hopefully that'll improve it. There we go. It also picked a golden retriever.
I guess those are the most popular skateboarding dogs. Here you can see it'll generate this image, and like I said, this one's a lot faster. And then we have an image history page, and just like that, there's our dog skateboarding.
Beautiful.
Through the streets of Venice. And you can download that. So, yeah, there are still some bugs and little things here and there, but it works for the most part. And if I just copy this prompt, we can try it in something like Google's model and see what that result is.
And this is great because our entire team now has access to all these different tools on a pay-per-use basis, instead of us providing subscriptions to eight different tools for a bunch of different people. It's way more affordable, and you can see what everyone else is creating and use that to inspire you, build on different ideas, tweak them, etcetera.
Yeah. I purposely built it in a collaborative way where everyone's sharing one major account. That way, you can see the history of other people inside the company who've generated content. But, you know, alternatively, if we gave everybody accounts... okay, so this is the MiniMax Image-01. That's a pretty good image of a dog. I mean, it looks pretty photorealistic to me.
Yeah. It's great.
But yeah, if we had to go give everybody accounts at RunwayML and Kling and all these places, times however many people use the tool, we'd be spending hundreds of dollars and maybe not even getting a lot of generation out of it. So, yeah, that's a little bit about how the Ryz Labs video and image studio works.
Okay. Perfect. Yeah. Take me quickly through how you built it. And then number two: it's cute making dogs skateboard in Venice or cows jump over the moon, but how do we actually use this for business purposes?
So I like to use a lot of different Gen AI vibe coding tools. I have accounts at almost all of the major service providers. But in the case of this specific tool, I used Replit. So let me tell you a little bit about Replit. Replit is a platform that allows you to vibe code web applications through a text chat.
¶ Capybaras in Barcelona: AI Ad Examples
What's cool about Replit is they have a lot of services built into their platform: things like WebSockets, a database, storage, and storage of environment keys. So when I'm building small tools that are not front-end only, that need a little bit of a database, a back end, maybe a little bit of authorization, but probably won't scale to a huge organization, I usually come to Replit. When I wanna build a product demo or just a front end for my product teams, I'll probably use something like Bolt or Lovable or some other tool, or even Cursor. But when I need an API and some functional calls, I'll go to Replit.
And the cool thing about Replit is you can chat here with your agent. Here is a preview of your web application. Here you can see my server is running. If we make any changes or add any new features, I can just hit redeploy up here to redeploy to the server. And if I go to my deployments, you can see my last deployment, and here's a link to the app, which brings me right into here.
So if I wanna add a new feature or model, I can walk you through that now and show you how I do it. I think that brings me into what Replicate is and how I used it to help me build this. But yeah, any questions before I jump into that?
Yeah. How thoughtful are you with your initial prompt to get this started? Are you drafting a full series of prompts using ChatGPT or Claude before taking them over, or are you just vibing straight into it?
You know, sometimes I write a really nice prompt in ChatGPT. Sometimes I'll just start with something like "I'm making an admin panel" and just get the framework of the web app built. I tried to see if I can find my first prompt here, but I don't know if I'll find it. It's probably buried back here somewhere.
So nobody knows. But yeah, I probably just put something like "build me a web app." Sometimes I'll say, build a web app with a left-hand sidebar, just to get a framework going, and then I'll say, okay, we're gonna do this. Sometimes I'll say, what I'm gonna do is this, and I'm gonna have it do this.
So I think in this case, I probably said something like: we're gonna build a web admin panel with a left-hand sidebar and use different models to generate video. Most likely, just to recall, I went to ChatGPT, probably something simple like 4o, gave it the idea of what I wanted, and said: I'm making a new app in Replit, please help me improve this first prompt.
¶ New Feature: Image Generation Walkthrough
Okay. So this was your primary tool. What other tools did you use to tie this together?
Yeah. So in the case of this specific application, I used a web tool called Replicate, and Replicate's a very, very interesting tool. Let me show you a little bit more of how it works. There are a few alternatives to Replicate. I think another one is fal, and you could also use something like AWS Bedrock; there are a few other companies trying to provide these types of services.
But what they are is essentially API-as-a-service websites, and they're constantly adding new models. First you can see, look, they've got tons of different models for all different types of things: I wanna generate images or videos, edit images, transcribe speech, make 3D stuff, make music. There are just so many different types of things. Singing voices, voice cloning: these are all different models hosted through Replicate that give you API access in just a few moments. And maybe if we have enough time, I can add a new feature into our video generation tool to show you one of these APIs.
If you pick any of these models, in the case of Seedance, which we used earlier, you're gonna get an interface something like this, with a playground. In that playground you can type in a prompt, press run, and get your sample video. But where it gets very powerful is the API page over here. I'm not gonna expose my token, but you get your API token here, you get a sample API call, and you also have the schema, both as a table and as JSON. So it's as simple as literally copying all of this, as well as the schema, then going into Replit and saying: hey, please add this model here.
It would simply add this model right into my app and pretty much work right out of the box. Sometimes I have to go back and forth and do some testing, as you saw earlier with the generated images. It looks like some of these models failed, so I'm gonna have to go back and understand why. Here you can see images disappeared, so maybe they're not storing on the server properly, but I'll keep working on those features and get it better and better. But happy to walk through adding another model if that wasn't clear.
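The copy-the-schema-into-Replit step ultimately boils down to one API call. Here's a minimal sketch using Replicate's Python client; the model slug and input fields are placeholders, so copy the real ones from the model's API page, and set `REPLICATE_API_TOKEN` in your environment before running the network part.

```python
import os

def build_input(prompt: str, duration: int = 8, fps: int = 24) -> dict:
    """Assemble the input payload the model's schema describes."""
    return {"prompt": prompt, "duration": duration, "fps": fps}

payload = build_input("a golden retriever skateboarding through Venice, Italy")

# Only attempt the network call when a token is configured.
if os.environ.get("REPLICATE_API_TOKEN"):
    import replicate  # pip install replicate
    # Placeholder slug; use the exact one from the model page.
    output = replicate.run("bytedance/seedance-1-pro", input=payload)
    print(output)  # typically a URL or file handle for the finished video
```

Pasting the model page's sample call and schema into the Replit agent is essentially asking it to generate this kind of wrapper for you.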
No, I think it's clear. But so every time we generate a video, we're paying Replicate?
Yeah, we're running Replicate's services. I'll try to see if I can show that; I think it's on my dashboard, maybe. Let's see. Here you can see, every time we run a different model... a few minutes ago, we did this MiniMax image model. Let me see if I can show you the cost.
Here's the dog in Italy, and here's a link to that. Right? Mhmm. So that API call succeeded. I don't know if it shows a cost here, but it does somewhere in the app. Yeah. Perfect.
So cool. What I wanna transition to is: we've gotten a little bit into why we built this tool and how we built it. How have we used this tool? I know it's only been three or four days since we released it to our team.
¶ Building the Tool Live in Replit
We've iterated, we've created a couple of new features, we've got a few more in the pipeline. But how has the team used it? And what have we done with it in the past forty-eight hours, let's say?
Yeah. Well, I think the first major hit was the Wes Anderson-style commercial I made for Ryz Labs. We pushed that out on socials and got some pretty good reviews. And I think part of it is that it's still so early for AI video generation. The more realistic things look when you see video, the more surprising and shocking it is. Like, is this real or AI-generated?
So I think we're still in the early days and can still capture some of that magic in the air, I guess. And then just yesterday, one of our video editors made an awesome commercial. It was almost like a National Geographic-style commercial about capybaras and how they travel to Barcelona and go on an offsite, leveraging our offsite business. And that also got great responses, mostly because it's funny. It has a voiceover that sounds like Sir David Attenborough, and it's just funny.
It's a funny, cute video. We can play a clip of it right now. But yeah, capybaras are cute and funny, and this is just day one, day two for us using AI video generation. We produce a lot of videos every month, and I think it's only gonna increase the amount of content we're able to build. The good part is we can generate these videos in the background while we're doing other work. So yeah, I think it's certainly gonna scale.
Yeah. Just to walk through that, and then I do wanna move on to the tool of the week. Our video editor, once given access to this tool, generated probably 20 to 30 clips aligned with this concept. He had it done within an hour, maybe, stitched the clips together, layered on the voiceover, presented it to us, made a few small edits, and within a day we had a commercial that previously would have cost thousands of dollars, or just been too time-consuming or costly for us to even pursue.
Yeah. I mean, if you want capybaras doing something, you've got to use CGI, which means you've got to render them in 3D and put them in these environments. It just takes a lot of time, plus the rendering time: anything you do in CGI takes time to render, render farms and all. And this was all done in a matter of a few hours.
Great. Let's transition. This is somewhat unrelated, though in some use cases it could be related. What is one of your favorite AI tools? I know it's probably a tool that's been out there for a while. People may have heard of it, but maybe they haven't. I encounter a lot of people who haven't heard of this tool and absolutely love it immediately. Tell me about this favorite tool of yours that we use quite frequently.
Okay. So are we jumping into tool of the week? Right? Yeah. Okay. Cool. So let me just open something like a new doc so I have an empty space to type and write, and I'll make it a little bigger. You know, I type pretty fast. I've done some typing tests.
I do about 60 words a minute, which I think is pretty good. I'm definitely not at 80 or 90 words a minute, which is really hard, but I've definitely seen a lot of developers, and even non-developers, type that fast. But I can certainly talk fast. And dictation on the Mac has been around for a really long time: Nuance and Dragon, there have been a ton of applications around that. I think Microsoft bought that company a few years ago. So when I first tried to add dictation on my Mac, I turned on the Apple dictation under accessibility, and it's really clunky.
¶ How the Team Used It to Create 2 Commercials in 48 Hours
It misses words. It's kinda like on my iPhone; I have the same type of problems there. It's a little clunky, misses words, things like that. And then I was reading on Twitter about a vibe coder who uses this tool to help him vibe code. Basically, it allows you to simply talk to your computer, and it types.
I'm gonna show you the website real quick. It's called Wispr Flow. Announced yesterday: they just raised a bunch of money, so maybe this is perfect timing. See here, they raised $30 million. It's pretty cheap. I think it's like $10 a month per user, something like that. Maybe they changed their pricing a little: $12. Yeah. So not too bad. Let me just show you how it works.
What you do is assign a key on your keyboard, and I'll show you an example. Once you hit that key, you get a little hot button down here while you talk. And this is just a quick example of how using Wispr Flow allows you to put text on the screen faster than typing it. All I did was talk that out, and I got a perfect sentence without any errors. And what's really cool for me is I'm bilingual.
I speak a lot of Spanish as well. So with WhisperFlow, you can turn on multiple languages. And so I can just do something like that, and, you know, it'll basically do it really quickly, and certainly much faster than I type. According to the WhisperFlow app, I'm at about 160 words a minute. So, you know, almost three times faster than if I was typing, and probably still faster than the fastest people who are typing.
I've found it works really great for writing emails. Sometimes I don't wanna write emails because I just don't wanna type that much out, so it's really helpful for that. It's also helpful when I'm writing an email: I might talk it out and then use ChatGPT to help me improve the email before I send it. It's obviously great for vibe coding, whether you're using Cursor, Bolt, Lovable, or some of these other tools; you can simply talk right back to your application. And I think it all ties back to what I showed you earlier using Replit: I use this all the time inside the agent tool, just trying to find improvements and things like that.
I'll just talk to it. So, yeah, I've found it super helpful. Sometimes I won't click in the right box and the dictation gets lost, but they recently added a feature where you can copy and paste what you previously talked out loud. So, yeah, I've found it to be a great tool. It's really helped me move along faster. What do you think, Sam? Has it helped you?
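The push-to-talk flow Jordan describes (hold a hotkey, speak, release, and the transcribed text lands wherever the cursor is) can be sketched roughly like this. WhisperFlow's internals aren't public, so the `DictationSession` class and the fake backend below are invented names purely for illustration; a real backend could be something like the open-source `whisper` package.

```python
# Rough sketch of a push-to-talk dictation loop. This is NOT WhisperFlow's
# actual implementation; the class and backend names are invented.
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class DictationSession:
    transcribe: Callable[[bytes], str]      # pluggable audio -> text backend
    history: List[str] = field(default_factory=list)

    def on_hotkey_release(self, audio: bytes) -> str:
        """Run when the user releases the dictation hotkey: transcribe the
        captured audio, remember it, and return the text to be typed out."""
        text = self.transcribe(audio).strip()
        self.history.append(text)           # enables "paste previous dictation"
        return text

# A real backend might look like this (not run here; needs
# `pip install openai-whisper` and an audio file on disk):
#   import whisper
#   model = whisper.load_model("base")
#   backend = lambda audio: model.transcribe("clip.wav")["text"]

fake_backend = lambda audio: " This is a quick example of dictated text. "
session = DictationSession(transcribe=fake_backend)
print(session.on_hotkey_release(b"\x00\x01"))
```

The `history` list mirrors the recovery feature Jordan mentions above, where a previous dictation can be copied back if it landed in the wrong box.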
I mean, you introduced me to WhisperFlow, I don't know how long ago, probably three or four months ago, and it's a tool I now use daily. What I love about it is when you chat with ChatGPT or Claude, you have to provide a lot of context in your prompt. In really any use case, the more context, guidance, and structure you give the prompt, usually the better the outcome. And typing that out can require minutes and minutes of effort, whereas if I'm just speaking it out, I can get my thoughts all out. Maybe I ship it as is, or just go in and tweak and edit a little.
¶ Favorite Tool of the Week: WhisperFlow
The other really cool feature that I love, and I don't remember the exact terminology, I think it's called the dictionary, is that it begins to learn specific terms that you use. Specifically for us, we have business names that may or may not make sense to a recognizer. But once it learns that that's your business name, it captures and records it correctly.
So I find that really helpful. And overall, yeah, even in Slack, sometimes I have a somewhat long Slack message I wanna write to a coworker, and it's just easier to whisper it out quickly. The punctuation is great. So overall, I think it's a great tool to significantly increase productivity and actually get more from AI.
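A feature like the dictionary Sam describes could, in the simplest case, be a post-processing pass that swaps the recognizer's best guess for your canonical spelling. This is only a guess at the general idea, not WhisperFlow's actual implementation, and the term map below is invented:

```python
import re

# Hypothetical sketch of a custom-dictionary pass: after transcription,
# known terms (business names, jargon) are normalized so the recognizer's
# phonetic guess is replaced with the spelling you actually use.
CUSTOM_TERMS = {
    "rise labs": "Ryz Labs",       # invented example mis-hearings
    "whisper flow": "WhisperFlow",
}

def apply_dictionary(text: str) -> str:
    """Replace case-insensitive matches of known terms with their canonical spelling."""
    for heard, canonical in CUSTOM_TERMS.items():
        text = re.sub(re.escape(heard), canonical, text, flags=re.IGNORECASE)
    return text

print(apply_dictionary("rise labs uses whisper flow daily."))
# → "Ryz Labs uses WhisperFlow daily."
```

A production tool would more likely bias the speech model itself toward these terms rather than patch the output, but the text substitution above captures the user-visible effect.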
Do you think this is something Apple could Sherlock? Like, Apple could improve their, you know, text-to-speech, or voice-to-text, or whatever this is, I guess, and kind of eliminate the need for WhisperFlow?
Absolutely. I mean, they're a company that's fully capable of doing that, but, you know, when are they going to do it? I mean, look at Siri; there are so many improvements that could happen to Siri. I know it's a totally different kind of solution, but people are using WhisperFlow. They just raised $30,000,000, I think yesterday. So, yeah, I do think Apple could significantly improve their dictation. But when, or whether it's a priority for them, who knows?
Cool. Alright. Alright. Should we jump in the news?
Let's go into the news. Let's do a couple stories. I mean
Okay. Cool.
If I recall correctly, these are random stories you've chosen, which I may or may not have heard yet. Fresh off the press.
Fresh off the press. I mean, some of these just came in right now, actually. This one's probably the most relevant: Kalshi just raised $100,000,000. I don't know if you knew about that.
Kalshi is an online marketplace for betting, and they're actually the ones that first produced a really cool AI commercial that went viral just about a week ago. So news just came out about that. I don't know if you've used Kalshi or made any of those kinds of bets, but I'm curious what your thoughts are there.
I haven't. I think I came across a prediction for the New York primary race that just happened, and I do believe their most recent video that got a lot of attention was a bit of the inspiration for us to dive deeper into our own video generation tool.
Yeah. Totally right. So, you know, there are a few other players besides Kalshi in this kinda gambling market of betting on events and activities. I know Robinhood is trying to scale up in that market as well, and there are a few other players. So, yeah, that'll be super interesting.
I wanna jump into the next kinda top story of the day. I don't know if you've heard about this, but it's basically the reason why OpenAI had to take down some of the posts about the Jony Ive acquisition: this lawsuit regarding the names io and iyO, and this guy Jason Rugolo. So just to give a quick background: OpenAI recently acquired Jony Ive's company called io. A few days ago, most of the content around the io acquisition was scrubbed from the website, and now we've learned that the reason was that they're getting sued by a guy named Jason Rugolo. I hope I said that right.
But what I thought was most interesting yesterday: Sam Altman posted a bunch of emails from Jason Rugolo saying, like, hey, can I work for you? So I just wanted to bring it up as kind of showing how everything's an attack vector, you know. They did this big acquisition and can't even close it without, you know, getting some lawsuits in the way.
¶ AI News: Kalshi, OpenAI Lawsuits, Ring's AI Update
Yeah. There are always two sides to every story, you know.
Yeah. Okay. Let's just run through two more quick ones, and then we can move on from there. Also out this morning: Ring, my old alma mater, just rolled out video descriptions, an AI-powered feature to give users summaries. I'm a Ring customer. I think you're a Ring customer. You're a Ring customer? Nest? No.
Google. Nest? Okay. Cool. So anyway, I'll give you an update next week on how well it works, but in general, the cameras have been pretty behind in developing AI and being effective like that. So hopefully, with this new rollout, it'll be a little more interesting.
So would it be, if I get a little motion on my doorbell camera, it could say, oh, it was your dog, you know, fetching a ball?
Well, I think it does that now. I think it's more like: you're away from your home and, you know, you received three packages, a neighbor came by, there were two dogs; kinda giving you a more holistic summary. I'm not sure. I didn't work on this product. Obviously, I've been a long time away from Ring and Amazon, but Jamie's been back there.
That was in the news a few months ago, so, you know, maybe he's pushing this AI trend. Okay. I just wanna talk about one last story. You know, there was a recent lawsuit with Anthropic, and a detail from the lawsuit came out regarding Anthropic scanning books. I don't have the exact quote in front of me right now, but from a high level, they were buying used books on the Internet, cutting the books up, and passing them through scanners.
And I just think, like, one, this is an incredibly interesting way to unlock data and make your model smarter by essentially doing things that don't scale. And it's crazy that these big LLMs are doing almost anything to get access to data that's not readily available on the Internet, that they haven't already captured. I don't know if you know about the story, but I'm just curious about your thoughts there.
Yeah. I mean, you know, what immediately resonates with me is the behind the scenes, like how the sausage is made. I can just picture this warehouse; you know, I think they spent millions and millions of dollars, and I don't know exactly how many books that would be, hundreds of thousands, potentially millions. And teams of people just ripping the pages out of the seams, you know. I don't know how organized or how chaotic it was, but, you know, what a race to try and capture this data.
Yeah. That reminds me of the early Amazon Bezos days. He always makes that joke of how they're boxing up books on the floor and someone says, you know what we need? Knee pads. And the other guy says, no, we need tables. So I mean, I just can't imagine your job at Anthropic being to cut open old books and scan them, but, yeah, it just shows the thirst for, you know... Data.
¶ Final Thoughts + What’s Next
Unique data sets. Yeah. Yeah.
Well, Jordan, thank you. That was our inaugural episode. As mentioned, we covered a tool we built in just a few days, we covered a hot tool we love to use, and we covered some of the latest news. Any final thoughts before we wrap up?
Yeah. This was really fun. It went by real quick. I'm really excited for future episodes. I think we'll be dropping episodes every week, ideally maybe every Friday; we'll try to do our best. And thanks for listening. You can find us on all your top podcast networks: Spotify, Apple, YouTube. And we'll catch you next week.
Thanks for listening.
Thanks.