
385: The Balancing Act: Free Trials, Value Demonstration, and Business Sustainability

Apr 11, 2025 · 16 min

Episode description

Just how much of your service should a trial user be allowed to "try"? When it costs you real money to supply your product, when and how do you apply limits so that people have to upgrade?

That's my challenge this week, and I'll share a concrete example from my journey with Podscan.

The blog post: https://thebootstrappedfounder.com/the-balancing-act-free-trials-value-demonstration-and-business-sustainability/ 
The podcast episode: https://tbf.fm/episodes/385-the-balancing-act-free-trials-value-demonstration-and-business-sustainability 


Check out Podscan, the podcast database that transcribes every podcast episode out there minutes after it gets released: https://podscan.fm
Send me a voicemail on Podline: https://podline.fm/arvid

You'll find my weekly article on my blog: https://thebootstrappedfounder.com

Podcast: https://thebootstrappedfounder.com/podcast

Newsletter: https://thebootstrappedfounder.com/newsletter


My book Zero to Sold: https://zerotosold.com/

My book The Embedded Entrepreneur: https://embeddedentrepreneur.com/

My course Find Your Following: https://findyourfollowing.com

Here are a few tools I use. Using my affiliate links will support my work at no additional cost to you.
- Notion (which I use to organize, write, coordinate, and archive my podcast + newsletter): https://affiliate.notion.so/465mv1536drx
- Riverside.fm (that's what I recorded this episode with): https://riverside.fm/?via=arvid
- TweetHunter (for speedy scheduling and writing Tweets): http://tweethunter.io/?via=arvid
- HypeFury (for massive Twitter analytics and scheduling): https://hypefury.com/?via=arvid60
- AudioPen (for taking voice notes and getting amazing summaries): https://audiopen.ai/?aff=PXErZ
- Descript (for word-based video editing, subtitles, and clips): https://www.descript.com/?lmref=3cf39Q
- ConvertKit (for email lists, newsletters, even finding sponsors): https://convertkit.com?lmref=bN9CZw



Transcript

Arvid

Hey, it's Arvid, and this is The Bootstrapped Founder. This episode is sponsored by paddle.com, the merchant of record that I've been using on all my software projects. Paddle truly is MOR, that's M-O-R, merchant of record, because their product allows me to focus on building a product myself that people actually want to pay money for. And not only that, but Paddle regularly sends me emails saying that they recovered a payment from a customer whose credit card had expired.

I didn't have to build anything for this. They just do it. And that's how Paddle does more for you, right? They deal with taxes, they reach out to customers with those failed payments and recover them, and they charge in people's local currency. It's amazing. Check out paddle.com to learn more. That's M-O-R. Now, I've been running into an interesting challenge in the last few days with Podscan.

I have to make a choice that is both technically feasible and scalable, and it has to be balanced: a balance between giving my trial customers precisely what they want and being able to keep my business profitable at the same time. So let me walk you through this situation, because I think it perfectly illustrates the kinds of decisions that we as founders have to make all the time, and I just want you to be part of that with me right now. Podscan has only recently turned profitable after being unprofitable for about a year while I've been building the product. It's a business that is highly reliant on AI systems, which is expensive, both for the basic functionality that Podscan really needs, like transcribing podcasts, and for additional features like sentiment analysis or other smart things that people need done with these transcripts that a normal algorithm couldn't easily handle.

We need AI to do this. And I use AI in many, many ways inside of Podscan, not just for coding it; that is probably a topic for a different day. But I have my own servers running local language models, and occasionally I use external services. And between both of them, the costs can add up quickly because of the need for computation.

And the specific problem that I'm facing right now is that I want to show off these AI-powered features to my prospects, the people who sign up for a trial. I want them to get this wow feeling that Podscan is super valuable, as fast as possible and as much as possible. The service right now allows anybody who signs up to set up alerts for specific mentions in podcasts: when a certain word is mentioned, they get an email or a webhook notification. It's really cool. But in many cases, and this is where AI comes in, just checking for keywords with a simple text search is fine and very cheap; the real value of Podscan comes when the user wants context-aware filtering, and that's the AI part. For example, someone might want to track mentions of the word "yield," but they don't want any results about farming or traffic controls or yielding or the yield of crops or whatever; they specifically are interested in yield as a financial term. And to figure this out, I need AI-centric context analysis of the conversation around the word "yield," and that is way more expensive to run than just looking for a string in a text.

Now, currently, for my trial customers, I limit this feature, this context-aware filtering, to about a dozen attempts per hour, just so I can show it while it doesn't really bother me financially. My paid accounts, the people who actually subscribe to the product, get hundreds if not thousands of these checks per hour if they need them, because they're paying for the infrastructure on which those checks run. But for my free users, only the first dozen or so context checks per hour are processed, and anything after that gets ignored until the hour rolls over. This creates a situation where, over a rolling 24-hour window, there are only like twenty-four moments when credits are available for these trial accounts. Podcast episodes that trigger certain keywords during those times get scanned.
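The hourly credit mechanic described here can be sketched as a simple fixed-window counter. This is an illustrative sketch, not Podscan's actual implementation; the dozen-checks-per-hour figure comes from the episode, everything else (names, structure) is assumed:

```python
from dataclasses import dataclass

TRIAL_CHECKS_PER_HOUR = 12  # "ten to twelve" per the episode


@dataclass
class HourlyCheckBudget:
    limit: int
    window_hour: int = -1  # which hour the counter currently belongs to
    used: int = 0

    def try_consume(self, now_hour: int) -> bool:
        """Allow one context check if credits remain in the current hour."""
        if now_hour != self.window_hour:  # hour rolled over: reset the counter
            self.window_hour = now_hour
            self.used = 0
        if self.used >= self.limit:       # credits spent; ignore until rollover
            return False
        self.used += 1
        return True
```

A trial account would run this with the low limit, so only the first dozen matches in any hour get the AI check; a paid account would use the same logic with a far higher limit.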

But if they don't contain the relevant context, and better matches come in later when the hourly limit is already reached, well, those better matches get ignored for trial users, because the credits are already spent for the hour. And the result is that for some trial customers with very specific needs, needs that are exactly what Podscan is designed to solve, I can't effectively demonstrate Podscan's value without it getting expensive. Some people need to find a very specific use case for a very generic term, and they struggle with this a lot. They don't know of any other solution that could do it. People tell them, hey, Podscan can do it. They sign up and they try it, but there are no hits, because there are so many misses, and credits just stop after a couple of those. Showing the value that I could show might cost me hundreds of dollars during a customer's trial period, and that customer might then convert to a plan that's only one tenth of that cost per month if I were to give them all these credits. And that makes the customer lifetime value sink while the customer acquisition cost balloons, and that is not really enjoyable.

So this is an interesting intersection between a technical problem, a business problem, and a financial decision. That's why I'm sharing this with you. It's a technical challenge in how I keep people from abusing the system. I've had a trial user set up alerts for high-frequency terms like, what was it, AI? Just the term AI, with context filters checking for "do people talk about making money with AI."

And those are terms that could trigger thousands of checks an hour, because there's just so much chatter about AI. And to always check whether people are talking about making money with it, for a trial customer, for days? It's not working. The second problem here, the business part, is that it's a value demonstration challenge. Because from a customer's perspective, it's really cool to sign up, set an alert, and immediately see this wave of relevant data flowing in from all over the world. This immediate value demonstration, I think, is crucial to people's first impression of Podscan.

So that's what I want. And ultimately, it's a financial challenge as well. I can't spend hundreds of dollars on a trial user who might never convert, or might convert to this $40 per month plan for the next year. That doesn't work either. This is generally a problem with free trials, right? You want people not to abuse them, you want to show value quickly, but you don't want to spend too much on them either.

And it's a problem not just for free trials but even for their more deranged cousin, the freemium model. I remember a story from Josh Pigford from when he was still running Baremetrics, and that was a couple years ago. They had introduced a freemium model to Baremetrics, and immediately people started importing years of historical Stripe data. A lot of people, for free, right? The servers had to keep up with this, and they started smoking, because of the amount of data being crunched for those free users: gigabytes per minute for hundreds of new customers who might never even pay.

Freemium models run this risk a lot, and even free trials face challenges like this, as I am experiencing right now. There is a balance to strike between all of these things. My challenge is to find an effective balance, and I want to share my thinking here at this point, because I believe we run into these issues all the time as solopreneurs, indie hackers, small teams, bootstrappers. We're always budget-constrained and time-constrained, but we're also resource-constrained in ways that force us to make judgment calls about how much we want to give to free trial users without overspending. We don't want our paid users subsidizing the experience of trial users too much, because we want paid users to pay for their own experience. They might subsidize it a little, but we certainly don't want people coasting by for free for weeks just because our system can potentially generate valuable data for them and we don't stop it at some point. My approach here, the one I really want to focus on, is to be as quick as possible to demonstrate value, but also as fast as possible to throttle a trial user's resource expenditure.

So what I'm thinking about implementing, and I bet I will already have implemented it by the time you actually listen to this, is a system where I will allow people, on a per-account basis, to use resources just like a paid account, but only until a certain condition is met: until a certain number of results is reached for each alert. And I think that is a fair compromise. Because let's say you set up an alert for a keyword and you get your first one hundred results within two hours. That's amazing. I think you can see the value immediately. Then, because you're on a trial, the limit kicks in and tells you: you're on a trial, you got your first one hundred results, we're now throttling you down to 10 to 12 of these AI checks per hour. You've had 200 or whatever per hour until now, because that's what a paid user would get. You tried it out, but we've reached a number of results that is significant enough to show you what you could get in the paid version. Now pay up.
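This "full speed until enough results, then throttle" compromise boils down to a small decision function. A hedged sketch: the numbers (100 results, 200 versus 12 checks per hour) are taken from the episode's examples, and the function name is my own, not Podscan's code:

```python
# Illustrative constants taken from the episode's examples.
RESULTS_BEFORE_THROTTLE = 100  # results that count as "value demonstrated"
PAID_RATE = 200                # AI checks per hour while demonstrating value
TRIAL_RATE = 12                # AI checks per hour once throttled


def hourly_check_limit(results_found: int) -> int:
    """Per-hour AI-check limit for a trial alert."""
    if results_found < RESULTS_BEFORE_THROTTLE:
        return PAID_RATE   # front-load time to first value
    return TRIAL_RATE      # value shown; nudge the user toward upgrading
```

The design choice here is that the throttle is tied to demonstrated value (results delivered), not to elapsed trial time.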

That's what a trial is for, right? To show people what the paid version would get them, not to give them your service for free for a week. The benefit here would be front-loading the time to first value, the thing that I really care about. The drawback, on the other hand, would be that people will still have trial time left with limited functionality. But if they want to keep receiving that flood of data, they can just upgrade to the paid plan. And I think that's what every SaaS business ideally wants: for people to upgrade to the paid plan as fast as possible, before the trial ends.

So I don't think it's really a drawback for the customer. They got to see the value because it flowed in; they experienced this massive avalanche, and now they can upgrade if they want to keep the data rolling in. It's kind of luring them in with the actual quality of the product and then saying, okay, you're done checking it out. Now purchase. I think that's a fair change to make. Now, obviously, this is not the only option. There are many other things.

I've considered a couple of things here over the last couple of days. One is manual limit increases, where I just see that people need more and I adjust that configuration option that I have on each account. And this is something that I occasionally do when customers talk to me about it and they really need it. And I'm like, okay. For you, it's fine.

I know you're not gonna abuse it. But I think it's an overly hands-on approach that doesn't really scale well, and I want something that automatically works for me. Alternatively, we could do some kind of selective approval. I could do some value judgment on whether an alert setup is smart enough and then confirm larger limits for those alerts, but that feels fiddly too. And it doesn't really align with my business goals: I want people to try it out and then purchase quickly. I want people to actually buy it, and then I will give them all those limits.

So I think basing limits on the number of results per alert is the better way forward for me. And the only risk that I see, and I highly, highly recommend looking into risks when you build these things out, is nonsensical alerts. If I only count positive findings, if I only say your first one hundred positive results are free, and the alert never finds anything but always keeps looking, it could still be a constant drain on my resources. And that could be problematic with people who have very generic keywords, like the AI person, right? If AI is a keyword, and then you tell the context-aware question logic that I have that you only want results for podcasts where a person with a certain name mentions the word AI four times or something weird like that, well, you're never going to see results, but it's constantly going to check.

So I need to ensure that, let's say, if a scan runs, I don't know, a thousand times with the AI component and not a single result has been found, I can determine that it's either bad luck or misconfiguration. In either case, I need to cut down the limits at that point. So I need two limits: a limit on the number of high-quality results found and a backstop where, if no results are found after a significant number of attempts, I reduce limits as well. And I also wanna maintain the ability to manually override these limits for specific customers when appropriate, but that is, you know, just to be expected.

And I'm sharing this very technical, very product-centric thought process with you because I want to show you how nuanced it can be to make a business decision that is also a technical and a financial decision at the same time. We run into these kinds of challenges all the time, at least once a week for me, if not every couple of days. There's always something like this. I've been talking a lot about making data available to my larger customers who want summary data and data exports. I've been building a system that exports each day's full podcast episode collection, like tens of thousands of episodes, into one big file so that companies can download it when they pay for it, and it raises a similar question.

It causes a lot of database queries and storage requirements, and that needs to be balanced and priced appropriately. Right? I can't just make features available because I can build them. I need to make sure that those who are willing and able to pay for the resources that they are consuming get the product, and nobody else. So this full export that I've built, I'm not going to make available to anybody at this point, because I want it to be a paid add-on for customers who need to look into the history and get these full data exports.

It's always a balance between the resources that you have and the resources that other people have, which in our case is money. The revenue-expense calculation needs to work out with every single decision that you make as a bootstrapper, as somebody who does not have an infinite budget. So I hope there's something in here today that you can apply to your own business, maybe a problem you're facing right now: how you can present value as quickly as possible while ensuring that users understand they should upgrade if they want to keep accessing the value that you've just shown them. Just because you offer a seven-day trial doesn't mean you can't limit it. You can limit a trial, and if users need more, they can upgrade.

That's the logic here. You could just as easily have offered a two-day trial after which they would have to upgrade. It's all completely arbitrary and your choice, and so is the choice of when you stop delivering your free product to people once they have seen enough. The key argument here is: charge for things that cost you money, but allow people to see what paying you would give them. That's the balance.

That's what a free trial is all about, and that's the decision framework that I apply to this problem. And that's it for today. Thank you for listening to The Bootstrapped Founder. You can find me on Twitter at @arvidkahl, that's A-R-V-I-D-K-A-H-L. And if you wanna support me and this show, please share PodScan.fm with your professional peers and those who you think will benefit from tracking mentions of brands, businesses, and names on podcasts out there.

PodScan is a near real time podcast database with a stellar API, so please share the word with those who need to stay on top of the podcasting ecosystem. Thank you so much for listening. Have a wonderful day, and bye bye.
