Welcome to this very first episode of Beyond the Prompt. I'm Henrik Werdelin, here together with my co-host Jeremy Utley, in a podcast that explores how companies leverage AI to streamline operations and better serve their customers. We're thrilled to have Greg unpack real-world examples of using AI to increase productivity and build a better company. So a very warm welcome to you, and to Beyond the Prompt.
Sure, here goes. My career is the typical Silicon Valley career: 30 years of hard labor. No unicorns. Some exits. Good quality of life, financial security established for our family. Not easy. Lots of pivots. Lots of heartache. Lots of dead ends. That's typical. It's not what they sell us at Stanford Business School, or the myth the media creates about building companies. I got fired from Apple; it was my first job out of business school here in the U.S. I came from Toronto.
And after that, serial entrepreneur. Just by way of background, if you want to talk briefly about your role at Section. And then specifically, I think where we want to go is: what have you been trying? I know from our interactions that you felt real passion and purpose
around being forward-leaning with AI. I would love to understand how you came to Section, when the AI moment dawned on you, and what some of the early decisions you made were, and then we can take the conversation from there. Yeah, great. Scott Galloway is a close friend; I've known Scott for over 30 years. He'd started the company, had raised a seed round, and initially had launched the company as a media business.
And I came on about a year and a half after it was launched and decided to pivot it to an edtech business, maybe the second worst business model.
in the digital landscape. But Scott's a great professor, a great educator, and I thought we could marry his abilities with live, video-based learning. Live, not asynchronous. I felt that online learning hadn't yet fulfilled its potential; its Achilles' heel was that no one really wanted to do it, and no one really showed up and completed.
And the metrics for platforms like Udemy and LinkedIn Learning have, I think, confirmed that over the last 10 years. So I thought we could build a different experience, with, you know, a higher bar for the student in terms of learning outcomes.
So we did that, and we enjoyed an incredible growth spurt via the pandemic. Amazing catalyst, right? Yeah, amazing catalyst, and a false signal to some extent, right? Like other companies that enjoyed that kind of boost, when it was over there certainly was a hangover. We had raised
a fair amount of capital, which we invested in marketing but also in curriculum development, so that money was well spent in terms of building out our curriculum. And here we are now, really figuring out how to build a capital-efficient learning business that supports the enterprise. We still serve consumers, and happy consumers are a great way to access enterprise customers, right? So we serve that consumer-to-enterprise flywheel.
Diving into the AI moment: you're on this journey, you experience the bump and then the hangover and all that. Where, in the midst of that, can I call it a CEO hero moment, did it happen? It was simple. It was simple for me. The moment was: I had decided, and this year I'm two years cancer-free, I decided this year to work for 10 more years. I hadn't yet decided with what level of intensity across the whole 10.
But I decided to work intensely for the next five years and at least somewhat intensely for the next five, for a total of 10. I'm 62. And I realized that basically AI could be testosterone for my brain. I realized that if I was going to be as productive as I wanted to be... I also set a goal of creating $10 million of additional family wealth, for my kids and grandkids.
And so when I made those decisions and then basically started playing around with AI in the December, January timeframe, I thought, okay, if I could combine my ambition and my experience with the cognitive boost of AI, then I'd have a chance, a much better chance, of making the impact and having the productivity and contributions I want to make over the next 10 years. Because Silicon Valley is a tough game. It's an ageist
place, right? We discriminate against people over the age of 40, amongst others; they're not the only ones. So anyway, I just felt like it was a chance for me to add a superpower. So what did you see that made you feel "testosterone for my brain"? What was it? Do you remember the moment where you go, oh? It was two things. It was the lack of friction to get an answer,
and, my point of view, and this will be one of the lasting impacts of generative AI, or this chat interface: interface friction will not be tolerated going forward, right? We're going to expect, and we're going to have, these sort of magical experiences where we get the answer
that we are looking for in a way that just has fewer keystrokes, less cognitive load, less effort, less hassle. So the moment was playing with GPT, where it felt like, okay, that was the equivalent of four or five searches done in one, right? And the second was quickly thinking about it as a thought partner. I was able to quickly realize that it's not search.
It's brainpower, right? And specifically for me, thought partnership, and this idea of approaching something from a different angle. By my age, you have a lot of experience, but you have a lot of bias, a lot of set beliefs, points of view. And I just find AI so refreshing
and invigorating in terms of allowing me to come at a problem or an idea from a different angle, right? Which is just harder to do as you get older; our brains are less plastic. I know, we hate to admit it. Okay, so... I see all that. Now, what I'd love for you to talk about, and I know a little bit of the answer because of my experience with you:
did the organization see it immediately? Is it as apparent to everybody as it was to you that you need to make a radical investment in exploration? Or did that require some leadership? And if it did, what did that leadership entail? It's the classic "CEO has seen the future" kind of moment, and everybody else is like, what? No, you haven't. You're an idiot. You're the guy that did three layoffs, right? Really? You're the genius in the room?
So I absolutely had one of those moments, probably more than one, right? And then, the reality of this is everybody's got day jobs. Everybody's busy. The team is lean, right? Capital is less available, so the team is smaller and working really hard, and this is more work. Frankly, the media doesn't help, even if you're in Silicon Valley.
But if you're not in AI... anxiety has really been the primary emotion associated with AI, I think, for most of us for the whole year. I guess when a CEO says, hey, we should do AI, a lot of people on teams hear, yeah, we should do AI because then I can fire a bunch of you.
And obviously that could be an outcome, but it could also be an outcome that you give people an Iron Man suit: you make the boring part of their job less boring and you allow them to do more of the fun stuff, you know?
Jeremy and I were talking about how you can see the efficiency that AI gives you as helping with both the top line, if you give people the ability to make a better product, and the bottom line. Yeah, that's exactly right, I think that's the right way to think about it. We think about it in one of three
sort of modes or approaches: optimize, accelerate, and transform. So when I started to play with it, I did the "hey, look at this" at the all-hands meeting, or I was Slacking people "check this out," or I was forwarding newsletters and cutting and pasting links, "hey, check this out." And people, I'm sure, were setting up a little folder or whatever, a little
"hey, all the stuff from Greg about AI, and I'll look at it maybe on the weekend, but I probably won't." So that went on for two or three months, and then I realized I needed to do something else. I needed to make this both real and strategic at the same time. These are smart people
who work at Section, and in any of our organizations, right? They want to know: why are we doing this, and where is this taking us? So I thought about it exactly the same way you do. How do we make ourselves more efficient internally, how do we optimize, can we run the business better? Allowing us to do more of the good stuff and less of the drone work. And that does not mean laying off people, obviously. In most cases there will be job loss, significant job loss in some areas, right?
But not right now, and certainly not at Section, because everybody's working too hard. So we thought about it as optimize. And then I thought about, well, how do I make the business go better? More revenue, expanded margin, an improved product in measurable ways. That's the second bucket, accelerate. And most of our attention went to those two buckets. Transform is: okay, I'm at such existential risk,
or I see such incredible opportunity, that I might want to move to that kind of mode, right? I might take a chunk of money and a bunch of people and actually go try to do something that's more transformative. So I said to people: listen, I'm not trying to lay anyone off. I'm trying to make us
operate more efficiently, and we're in the content business, okay? This is called generative AI, right? It generates stuff, content, right? So I had to get people there, and it took me too long. I should have gotten people there faster. It probably took me 90 days, because I was doing that CEO thing: hey, read this, check this out, what about this, why don't we try that?
And I think it would have been more effective to do that for a month and then call a timeout. It just took me a while to really figure out this framework, figure out this approach, and then get people in a room and say, hey, let's start talking about it a little more intentionally, versus cool demos.
So was it a real all-hands, like we're doing now? We've heard other CEOs send out a manifesto. What was the tactical approach? Yeah, no, no manifesto yet. We don't have AI principles or policies yet, but we will soon. No, it was more about running a series of brainstorming sessions
using this idea of optimize, accelerate, and transform. And we took transform off the table. I'm not sure we should have, and we're bringing it back on the table now; I'll talk about that in a moment. We basically used those two operating modes, optimize and accelerate. Okay, let's talk about AI, right? So optimize is easy. They're both pretty easy to figure out, I think,
because you just do an audit of your internal workflows. Ask people to do an honest audit of their internal workflows. It's not that hard, right? Look at your calendar, look at your Asana boards, whatever it might be, and map out how you spend your day and your week, and look at those moments where you're doing tasks that we think AI could do at least as well, maybe even better. So with that caveat,
how do you get people to honestly look at the workflows? Because I can imagine a scenario where Greg is saying, we want to see what AI can do, and I look at my calendar and I go, there's a lot it could do. How do you keep people from wanting to hide that stuff?
Because you can imagine a scenario where people go, I don't want Greg to know that AI can do a lot of my job. How did you provide the assurance that optimize is not about getting rid of you, it's about supercharging you? Yeah, listen, that's all about trust. They may not have thought I'd seen the future, like I was that smart.
But going through the last three years that we've been through together as a team, at some level they trust me, right? Meaning every time we did a layoff, we were transparent about when we were doing it, why we were doing it, and who was going to be impacted. You've got to earn that, right? There's just no other way. I think it's harder in bigger companies. And frankly, I think there is a lot of cynicism, a lot of mistrust.
The relationship is so frayed in so many ways, I think, between knowledge workers and their employers. That's why people have two jobs at the same time. But yeah, listen, in that respect we're fortunate, or I'm fortunate. I think there was enough trust to say, listen, let's look at this and see if we can get some gains out of it. But it's a great question. I think in large organizations it's tougher. You're going to have to somehow diagnose where you're at on that trust
scale or trust meter, because otherwise that will be the behavior you'll get. Or we see it now in the other direction, right? That Salesforce study that just came out this week, I think it was 14,000 employees, and 64% were passing off AI work as their own work.
It makes sense, right, in this context of lack of trust? One of the things we did around the same time, and I think it helped, was that we started acknowledging when we were using AI. We even did something as goofy as AI shoutouts at all-hands. When I started it, people were like, what, you're going to do an AI shoutout? We do human shoutouts, right?
Every week. But it was really an attempt to say to people: we're all going to be using AI, it's going to be okay, we're going to celebrate the wins, we're going to talk about the losses so we can learn from them, and we're going to move forward together. What are a few examples of something that fell into the optimize bucket? Oh, easy stuff, right? Low-hanging fruit would be transcriptions and translations of content, obviously. Preparing scripts for video shoots.
A lot of the marketing tasks that I'm sure you know about: email templates, building drafts of marketing email campaigns and cadences or sequences. Yeah, stuff like that. Basically what I'd consider V1 work. Most of gen AI today, I think the way to think about it is that it gets you off the blank page
and gets you a good V1 faster. I would imagine even that has already yielded efficiency. I think that's one of the interesting things with the tools that are available right now: you don't have to do a lot before it becomes something that's really useful. You don't. It doesn't cost a lot.
People tend to think, oh, this is going to be expensive, or it's going to be a prototype and then a pilot and all that stuff. It's 20 bucks a month. It's 20 bucks and some training, right? You've got to help people get better at prompting. The gains are so obvious, right, in those areas. To your age question, or your age comment earlier:
because prompting is accessible to everybody, you don't have to understand Python, do you think this is one of the things that can be easily adopted by everybody? Or is prompting AI also a young person's sport? I don't think it's a young person's sport, but I don't think good prompting is easy, right? So the complex,
contextually relevant, and directed prompts, people are now calling them structured prompts, right? I don't think they're natural for us, necessarily. The conversational approach is more natural. But the bottom line is, I think prompting is still harder than it should be, and we obviously need to move to some version of an agent or something. GPTs, announced November 6th, were to me a kind of watershed moment for AI, and I think probably for the broader AI world too.
I'm blown away by GPTs, really showing a path forward that can unhook AI from prompting, basically, and bring real value at a task level, at a very micro level, in people's daily workflows. But yeah, I think prompting is just hard for all of us to do well. So I'm looking forward to prompting less in 2024.
Yeah, you can imagine that being something that OpenAI tackles; maybe the answer is for AI to understand what we're trying to say, right? Where prompting just gets built in, and then ChatGPT gets GPT'd. I want to go back to Henrik's question. We did one thing early,
and I think it was a light bulb moment for all of us, and a good idea for others to try. We took something that was infrequent but very high value and applied AI to it to see what would happen. So let me give you the example. It's AI as thought partner, AI as a board member. We don't record our board meetings, but we take good notes. So we had a board meeting in the summer where I asked the team to take faithful notes, better than usual,
around all the input and ideas we got from the board. So we used that, and we used the board deck that we had sent the board as a pre-read, and we basically ran the board deck through four models, right? Claude, GPT-3.5 and GPT-4, I guess that's five, Bard and Bing, and compared the output from the AI after a bunch of prompts, not just one. But we started with the obvious prompt, right? Pretend you're a board member. I'm the CEO. I sent you this pre-read.
And it was mind-blowing, really, how good Claude in particular was. In fact, GPT-4 performed poorly in that moment, I think primarily because of a crazy hallucination. But Claude nailed it, and we actually rated Claude at 91% overlap with the board's feedback, including quite nuanced feedback about the stage of our company, our cap table and preferences,
the kind of growth or margins we'd have to create, the sort of growth-versus-profitability discussion, right? So I shared that with everybody. I shared it with the team internally, and they're like, what? With that kind of simple prompt and one pre-read, AI could do that, plus the conversation afterwards? I shared it with the board and said, hey guys, it's time for you to bring your A game.
Yeah, you're about to get replaced. Setting up a board meeting takes at least two hours of scheduling effort, the meeting itself is 90 minutes, then it's 30 minutes afterwards, and then there's some email follow-up. Claude got 91% of what you said. And I've got a great board. I've got the former CEO of Time Warner. I've got board members who are on the board of Moderna.
And I come out of it like, I've got a great board, right? And so the guys... Were they at all concerned about AI taking their jobs? Yeah, that was one of the responses: hey, we loved AI until you showed us this, because now it's coming for us. It's coming for all of us. What I love about that example, Greg, is that
if you bring that back to your team, you're showing the team: this is how it's relevant for me. Right. I think sometimes, when you use that word V1, the way it can be interpreted is that it's about low-level tasks, about the junior employees. And what you do with that example is take the board deck, which in the org structure you'd conceive of as the highest-level task, and say:
even there, I'm using it. It helps me a lot. I haven't been fired, and nor has the board. And it's just this amazingly elegant example to show people it's not about you not having a job; it's about you doing a better job.
Yeah, I think that's right. And it is about providing that context as well. Listen, why am I doing this? I'm doing this because... and by the way, I am doing it, meaning I'm now giving my board decks to Claude and GPT-4 prior to my board meetings so I can raise my game. And what I'm trying to get people to understand is that the meeting will now be more productive, right, if everybody has not just read the pre-read but also used AI
with the pre-read, right? They're going to be able to come in and hopefully move the conversation forward faster, and hopefully, obviously, bring the benefit of human experience and the years on the planet that we all have to these conversations, right? And so I think at that moment people go, okay, I get it. We're not trying to do away with me or the meeting; we're trying to make the meeting better, right? Trying to get better answers, better decisions, things like that.
So that's why I like thinking about these high-value moments, meetings, discussions, decisions, or conversations that we have, and bringing AI into them as a test case or use case. And by the way, sometimes it doesn't work, and that's obviously part of the conversation, and I think that also helped.
People's anxiety will drop a little bit. Hey, it's not AGI; don't worry about all that. These models are kind of dumb, they're getting smarter, and they often don't work. So we're not going to use it for that, and you're going to have to go back to the old way. Really highlighting where AI has failed internally, where we've stopped using it for tasks, has been, I think, really good as well.
What's an example of a task where you stopped using AI? Well, I'll give you one that's relevant to me. We used to do scripting, video scripting, with AI. We don't think the speed makes up for the lack of quality, so we're now back to writing scripts
fully manually, human, if you will. That's one example out of the education team. For me, over the summer and until just a couple of months ago, I was using AI to write my monthly email to the board, because we do a great weekly email summary of the business we call the three P's: progress, priorities, and problems. So I was taking four weeks of 3Ps from my direct reports, feeding them into Claude, and saying, write me the first draft of my monthly board update.
And it seemed to work well enough. Or maybe it was just confirmation bias; I was thinking it was working well enough because I was trying to show examples of how AI can help. But the reality is, by September, October, Haley, my assistant, was like, this is just not worth it. We're not getting back a good enough V1.
Let's just go back to the old way of cut, paste, and edit. So now that's back on my to-do list. It's an hour of work once a month, versus I thought I could get it down to 15 minutes and have AI do the rest. It's just not working right now, for whatever reason, and we'll try again in Q1 and see if we can
get it to work. Maybe we build a GPT for it. That's one of the reasons I'm so excited about GPTs, right? Because with GPTs you can give really specific instructions, you can constrain the training data, as a way to think about them, and make them very task-oriented. So that would be a great test to build, and see if I can then get back that hour of time. Which is, I think, how we should be thinking about it, right? If I can get back these hours of time
over the course of a week, they'll add up to be meaningful. I'm curious about a super nerdy question. I've also experienced that for some things I just go straight to Claude, for some things I go to ChatGPT, and for just normal chatting I'd use Pi or whatever. You increasingly, a little bit like on your team, have a go-to LLM. It sounds like you have the same. Could you explain that, and maybe try to help create some vocabulary on how to think about
which service to use for what? Yeah. Here's how I think about it framework-wise: daily, weekly, occasional, and then testing mode. So I've got this portfolio, right? Daily, for me and for most of us at Section, it's GPT-4, the most muscular model with the most range, if you will, and Claude. And for some reason, and I don't know why, Claude seems to be better at thought-partner work.
It seems to be better at business. It just seems to have more business-oriented thoughtfulness, I don't know. We think Claude's a better thought partner for executives, for people who work in business. I don't know why. And now Perplexity, I'd say, has been added to that list in the last couple of months, really our go-to for more research-oriented work, clearly when we want to see the sources. Now GPT-4 is revealing the sources as well, but I think Perplexity
owns that at this moment in time. And then, of course, Fathom for note-taking; that's why I use Fathom. Those are probably the daily go-tos. And then weekly: I never got my head around Midjourney, and I'm not a creative, so DALL-E, for me, is just so much easier and good enough, right? Although Jeremy's wagging a finger. Sorry, I can't let that comment stand. That's my other life, Greg. I can't abide it, because people know me,
and even though this podcast isn't about creativity, I can't let the comment "I'm not a creative" go. Just like my dad's a lawyer and you have to mark an objection, I'm just marking an objection. We can keep going; I just want to make sure it's on the record that I objected to your comment that you're not a creative. But please continue. Okay. Wow. But can you tell me why you're objecting, or are you just objecting?
I fundamentally disagree. We're all creative. We're all creative. 100%. 110%. And by the way, if we shared this conversation with a hundred people and asked a simple question, is Greg a creative person, 100 people would say absolutely. He's doing stuff I never imagined, right? Right. I'm mostly just teasing. I think one of the cool things that AI is allowing people to do is to reduce the
space between people with an insight or an understanding of a customer and the ability to produce an output. And I think, historically, the reason why people would define themselves as not creative was because they were not able to produce
the final output, something visual, or music, or whatever it was. And obviously now, somebody like you, Greg, can take all the wisdom and insight you have, package it, and get Midjourney or DALL-E or whatever to render that
graph or image or whatever it is. I think that actually changes things quite a bit. Yeah, no, I agree; that's one angle to think about it. The other is someone who's done the mental math, or the napkin math, but can't get it into a CFO-ready presentation, right?
They're not viewed as the person who builds robust business cases, but now they can, right? Or they're going to get much closer now, with AI. So that, to me, is the magic of generative AI, of these models, really: they move across domains, across industries, across functions, across skills
with such ease, and we as humans don't. We're in our silos, both skill-based and industry-based, and AI doesn't live in a silo. So it's just so powerful in that respect. So that's how I think about my daily and weekly tools. Less frequent: something like Synthesia. We're playing with synthetic video and voice
a little bit, and we'll see where we go with that. And then testing: what I'm testing right now is Lex. I just started with Lex.ai to see if I can find a good writing partner, or, again, a way to get off the blank page. And then, of course, I'm beginning to play with the AI features that are appearing in the tools we use every day, like Notion or Superhuman.
One thing I want to contribute to the comment about Claude being a better business thought partner: I agree. But the challenge for me is that there's so little time when I'm sitting at my desk. And GPT with Whisper and voice is my go-to thought partner, not necessarily because it's better; the best thought partner is the one who's always available. And I kid you not, as a practical use case: I had a sales call last week with a client,
and I know I've got 40 minutes to do a workout, and if I don't leave now, email is going to suck up my life for the next 40 minutes. So I get up and I go. But I know I've got to send a recap. The fact that I can open up ChatGPT,
all via voice while I'm stretching: hey, I just talked to Dan, and we talked about this and this, and I want to send him a follow-up. Would you mind taking a first pass at a memo I could send him, letting him know that I'm excited, that I heard these three things, and that... right? Oh yeah, and don't forget this.
Now I'm done stretching, and the one thing that I probably would have forgotten to do, I did, with the thought partner who may not be the best, but who's the one available in that moment. Absolutely. Yeah, I think my takeaway from that is:
you've got to give OpenAI a lot of credit, a lot of props, for the last 12 months. The rate, the pace of their product advancements and releases has just been incredible. We all know Google slipped Gemini to Q1, and I'm sure Google is sitting around thinking, shit, we'll get to Q1 and we're going to be behind again, because OpenAI will have pushed further ahead, now that they've decided who the CEO should be again.
But yeah, really impressive, right? What they're doing, and hard for others to catch up. So yeah, we need a mobile app from Anthropic soon. I can't believe it isn't there yet. I agree. Folks at Anthropic, if you're listening to this, please. I want to shift gears. Just the last topic that I've got on my mind, Henrik may have others: I'd love to hear about success stories, and I'd love for you to brag on yourself and get practical. What would you say is a success in applying
generative AI to the business at Section that you feel like, wow, this is a great example of the kind of impact you can have? And also, I know because of the course there are loads of examples; if you want to refer to anything outside of Section as well, I'd love to hear that too. For folks who are wondering what kind of impact this can have on their operations, are there one or two case studies you could share that highlight the practical, real economic value of integrating generative AI?
Yeah, I want to come at the answer a little differently. I think success in this moment, and let's talk about this moment, right, because we're talking about 20 bucks a month. Let's be clear: we're not talking in this conversation about spending a million dollars or more, millions of dollars, to build an AI app or AI product. We're not talking about that at this moment. We're talking about knowledge workers every day
using AI, and it costs 20 bucks a month. So pay for it yourself, or get your employer to pay for it; if your employer is clueless and won't pay for it, then pay for it yourself. And so I think the success metric is different. It's about what percentage of your team is AI-ready, or AI-comfortable, or AI-competent. I call it the AI class. I think the workforce is splitting
into two, right? The AI class and everybody else. The knowledge workforce is about to split. And we need to be in the AI class, and the more of us inside an organization who are in the AI class, the more the organization will be in the AI class, right?
That's clearly, to me, the challenge of the next 12 to 24 months. And if we make that happen, we'll get the successes, we'll get the business cases, we'll get the ROIs. But we're not looking for a lot of ROI, because it's only costing us 20 bucks a month. I think that's one part of the answer. The second part of the answer is that there are hundreds of use cases,
and they are specific, meaning the application of a use case to your workflow or your company's workflows is so specific that my examples probably don't matter.
And by the way, what I'm looking for is one use case that saves at least half an hour, right? If you can't come up with one use case that's going to give you back half an hour or an hour in a week... Here's how I think about it: 100 bucks an hour for a knowledge worker, and that's probably at the low end, at least in Silicon Valley, but use that number; it's a nice round number for what we pay a knowledge worker.
Our token costs this week for GPT-4 are around two cents for 500 words, right? Do the math; that's a lot of queries to the AI. Which means several things to me. My employees should be able to get enough value for $20 a month; that's number one. Number two, it will add a lot of value to someone I'm paying $100 an hour or more, right, because they're going to be able to do a lot of queries and get some value from the AI. The math works.
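To make that back-of-envelope math concrete, here is a minimal sketch in Python using the rough figures mentioned in the conversation (about two cents of GPT-4 token cost per ~500-word response, a $100-per-hour knowledge worker, a $20-per-month subscription); the break-even comparison below is our own illustrative arithmetic, not something Greg spells out.

```python
# Illustrative back-of-envelope math only; the dollar figures are the rough
# numbers cited in the conversation, not measured costs.
hourly_rate = 100.00        # assumed fully loaded cost of a knowledge worker, $/hour
cost_per_response = 0.02    # ~2 cents of GPT-4 token cost per ~500-word response
subscription = 20.00        # ChatGPT Plus-style subscription, $/month

# How many 500-word responses would the subscription price buy at API rates?
responses_per_month = subscription / cost_per_response          # 1000.0

# If a single use case saves 30 minutes per week (~4 weeks per month):
hours_saved_per_month = 0.5 * 4
value_of_time_saved = hours_saved_per_month * hourly_rate       # $200

print(f"{responses_per_month:.0f} responses ~= one ${subscription:.0f} subscription")
print(f"30 min/week saved ~= ${value_of_time_saved:.0f}/month of knowledge-worker time")
```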
You've clearly been down this road; I think compared to many organizations, you're on the advanced side. What about pitfalls to avoid? Have you had anything where you'd say, if I could give myself the advice not to chase that rabbit, I would? I think the pitfalls are the ones that people talk about.
The biggest pitfall is misinformation and unrealistic expectations. The misinformation is coming from media. It's stealing our data. It's going to take our jobs. We can't trust big tech. That might be true. I'm not naive. Listen, big tech's not trying to save the world. Big tech's not really trying to make education or healthcare more accessible. They just want to make more money.
They want us addicted to AI because they want to charge us 20 bucks a month for it. That's the reality of the situation. It's our job to find use cases and frankly avoid some of these pitfalls. But some of this is misinformation. Some of this is just the missed expectations, right? It's being oversold to us. And so I think the first thing I say to any leader is don't oversell it. Be optimistic but pragmatic. Acknowledge that it's not for the anxious.
If you're culturally anxious, if your team or you as a leader are anxious, AI is not going to help, right? It's going to hurt in this moment. And so that means you might want to wait. My advice to some is: wait. If you're anxious, if everything has to pencil out, if everything has to work, if you're a no-mistakes culture, if this is not going to land well, then don't do it. You can sit on the sidelines, at least for a while, depending on where you are in the crosshairs of AI.
So that's the first thing every leader should do: have an honest conversation with themselves, or with a thought partner they trust, or more than one, to really assess how soon they end up in the crosshairs. And on that, I'll say this as a statement, but I'll ask it as a question, to your point about the future and using AI to make big, transformative changes in your organization. It does also seem that things are moving so fast right now
that if you were to even try to spec out the future you could build with AI and start the million-dollar project tomorrow, you're very likely to build something that's going to be wrong. Is that a fair statement? I think that's a fair statement. I wouldn't do that. I would obviously have a product vision or a product strategy, or some idea of what I think I want or where I think I might be going with this. But we have to think about this as a thousand experiments
inside of a single roadmap, because otherwise I think the risks are too great, particularly if you're an incumbent, right? And even for startups. We're seeing that every week now in Silicon Valley with AI startups. Talk about pivots: they're having to pivot every week based on what OpenAI is doing in terms of changing the model, changing the capabilities, changing the economics of the model, and so on. Back to your pitfall question: it's all about leadership at this moment,
and just having realistic conversations with people: this is what I think is possible, here's what these tools can do, here are the first three or five or ten steps we're going to take, and we're just going to experiment our way into this, in terms of both spend of capital and time. And I get it: if you're the CEO of a 20,000-person organization, 20 bucks a month adds up. It's going to be $4 million, right, of incremental tech spend that's not in the budget currently.
And so your CIO is saying, okay, I've got to pay that for everybody; what do you want to take out of the roadmap? Because it's $4 million. Or you've got to grab that money from some other source. So it's real money when you scale it up, I get that, even if you get the discounts and things like that. Start in a meaningful but small way, is my opinion,
and grow into it, or experiment into it. But to your point, it's changing so fast that you can end up in the crosshairs quickly. Yeah, that's right. One thing I really like, Greg, about your class is that you encourage students to reevaluate every three months. I think that's great. It's not just that you run a framework one time and then you take your marching orders. It's:
this now needs to be a part of your regular rhythm of review. Where are we in the crosshairs? Where are we in our organizational development? Where are we with the latest advances, and how do they affect our business model? If you're not regularly reviewing, you're going to be operating off an outdated model very quickly. Absolutely, yeah. And I think that's right, in terms of
its limitations, its capabilities and limitations. Oh, it's too biased? It's less biased now, and it's getting less biased, right? It hallucinates? It hallucinates less now, and it's going to hallucinate even less in Q1. So you check back in on capabilities, limitations, business model, cost, the cost of queries, for example, with API costs coming down dramatically on November 6th, right?
So yeah, I think that's right. I think this idea of a one-year AI strategy makes no sense. I think you should have a head of AI, by the way, and I think it should be a business person, not a tech person, and that's the person responsible for doing this every three months. It would seem to be a golden opportunity for a lot of IT teams to suddenly get back to the glory days.
It'll be interesting how many of those IT teams are actually going to grab that, instead of seeing it as a necessary evil, comparable to "let's not give everybody in the office internet," as some of us remember from the 90s, or banning Facebook. Remember that, right? Yeah, listen, this will happen faster. It reminds me of how IT reacted to SaaS, which was,
you know, wait a minute, it's not invented here, I didn't choose it, that kind of thing, right? And then eventually they lost that battle, so to speak, but it took probably a decade, really. I think it's similar, right? Shadow AI is already everywhere. Just like SaaS generated shadow IT, shadow AI is everywhere already. And it's the high school kids first, then the college kids, and now it's the younger employees.
And they know that their work is mind-numbingly repetitive. They know it is. By the way, sorry, this is actually non-trivial. Henrik and I have a mutual friend, Bracken Darrell, who I understand made major inroads when he was at Logitech in part because his kids' gaming habits informed some of his big strategy. Greg, you have sons, or you have kids, I think. How have your kids affected your understanding of this and your mindset toward it?
They haven't helped, to be honest, and I think they started to ignore my texts by June or July, because two of them, my older kids, are working in tech. And the youngest just graduated from college and is now working in sales in New York, and he and I are having a more active dialogue as he's using AI more in his day-to-day flows,
as he should. But my other two, I think, just got sick of me. Bye! They unsubscribed from the family text. Yeah, they unsubscribed from the family text because it was dad posting: seriously, dad, stop talking about Sam Altman. And they just started shitposting back about AI. I have one question that I'm curious about, because obviously you're such an expert in education.
It does seem very obvious that both the texting format, the going back and forth, and the ability to completely personalize education for a specific individual have the potential to change education quite a bit. Maybe, as we're finishing off and you're looking into the future, could you give us a few cents on what you think this will mean for education? Yeah, for sure. First of all, as someone who's been trying to disrupt the business schools:
it's harder than it looks, right? And I'm always reminded of the conversation I had with an associate dean at a top-10 U.S. business school, who said: Greg, your product's great, because we've had people on my team take your courses, and your price is too cheap, so I hope you go out of business or run out of capital. These guys have strong brands.
And just culturally, we instill so much value in that brand, whether it be undergrad or business school. So it's harder to disrupt education than you realize, I think. And most edtech entrepreneurs, I think, would agree that it's a tougher sell, or it can be.
I would say, and back to what we talked about with what Section is doing with AI, we're moving faster now. November 6th, Dev Day at OpenAI, the lowering of the token cost, the release of GPTs, and the robustness and improvements of the model,
to me really were a signal that we need to accelerate. Meaning, I do think that if you're sitting on what I call dumb video libraries of training, and we're not that, because we're a live learning platform, but we're next, right? If you're an asynchronous learning platform today, I just have to believe
that someone's going to create a much better experience, one that's personalized and relevant, because that's the number one question we get from our students: what about my industry, what about my country, what about my job? I'm learning this, but
how do you make it contextually relevant? And clearly AI can do that. So I'm optimistic in that respect, and we're going to have to move quickly to build it; I think we will. I'm also thinking about it a little bit differently, which is: a lot of people don't want to learn, let's be honest. We burn people out on education, I think, probably by saddling them with $40,000 of debt, at least in the U.S.
And, you know, most of us are not lifelong learners. If you do any kind of segmentation analysis of U.S. consumers, you'll get maybe 5 to 10 percent in the "lifelong learner" segment of the market. Everybody else has day jobs, and then they want to go home, watch Netflix, and look after the kids. So I'm thinking about this more as: can I reinvent what we are
building, basically, not as a course but as a copilot, essentially. As an example, I think I heard you say this the other day, Greg, just to make it very pragmatic for listeners: some people want to learn how to make a product strategy; most people just want a product strategy. Right, exactly. And you can run a course to teach the people, as you said, maybe the 10 to 15% who actually want to learn how to create a product strategy. For the other 85% who just want a product strategy,
that's a fascinating evolution, to me, of the brand and of the business: to say, why don't we just deliver a better product strategy for them? Yeah, that's right. I think that's how I'm starting to think about it. And there are hundreds of tasks like that.
Some as small as how do I do a good one-on-one or a performance review, and some as infrequent but more strategic as how do I build a product roadmap or product strategy, how do I do business strategy, how do I do competitive analysis. And I think we can teach people, or we can actually sit alongside them and get to the output. Now, the question will still be: if all of us are getting good V1s out of AI, then what? That's the question we have to answer next.
It definitely seems very interesting when you look at the graph of where AI can take you. There are going to be people under that line who either become redundant or can be lifted, and people who are above it. But at some point the graph is going to change quite a bit. Absolutely, absolutely. Yeah, and so,
you know, we might all be looking at V1s of product strategies in a couple of years, where no one's really standing out. We need some of those to be V2, to be better and differentiated, to actually invest in them.
This has been so inspiring, not only because you're inspiring, but because of what you guys have done already. I'm sure a lot of people listening are going to be able to take a lot from it. We very much appreciate you taking the time to talk to us today. Likewise. Thank you for inviting me. I've enjoyed it; it's been a great conversation. Tell folks how they can find you, follow you, engage with you. Yeah, sure. So follow me on LinkedIn,
where I'm starting to post more frequently. I've been unimpressed with the growth in my followers but impressed with the engagement I'm getting on LinkedIn, so I'm enjoying it. You can find Section at sectionschool.com; we'll make it easy to check us out and experience live learning. Of course, it's nothing like LinkedIn Learning. It's nothing like LinkedIn Learning. Greg, thank you as always.
We look forward to continuing the conversation. I appreciate it. Thank you. Best of luck with everything. Thanks, guys. Okay, Jeremy, give me your first impression now that we've had this conversation. What was the thing that stuck with you? So, the first thing that comes to mind when I reflect on the conversation with Greg is the rhythm of acknowledging explicitly with the team when they used AI. I thought that's great, doing AI shoutouts.
Just like you'd shout out a human being, shout out when someone uses AI. I think it's a great mechanism to normalize it. And then the other thing, which he described as an early light bulb moment: finding an infrequent but high-value use of AI, like reviewing the board deck. I think that's a magical way not only of demonstrating value high up the food chain, but also for the CEO to be
open about the way that AI can affect anybody's job and the way it can amplify anybody's job. A couple of things there. What about you, Henrik? What did you take away? Two things, obviously. It's hard not to fall in love with "AI can be testosterone for your brain," which is just a funny thing for a middle-aged man. But I do think this idea of seeing it not as something that's replacing you,
or something that you're unnecessarily tasking, but as something that becomes an Iron Man suit, something you can use to make yourself better, I thought was interesting. The second thing, which I think we've heard before but is just,
I think important is that most of the people that seem to be knowing how to use this and are using it a lot they keep talking about it as a thought partner not necessarily as like a somebody you send tasks to and so yeah i remember you said the other day like when you come out for a meeting you have a great idea you basically ramble into the chat gpt app and then it basically is structured
the thinking back for you. I have exactly the same thing. I like to use a lot of words; I tend to be a little bit philosophical in the way that I compute and say things. And so being able to just get the bullet points back from something you vomited into your phone has been very useful. And he seemed very much to be of that school of thought too: it's something you're having a conversation with when the page is empty.
Yeah, and then the last thing, probably, is the regularity of review. Don't assume that an AI strategy is set for long. One of the things I've done because of Greg's influence is I've put calendar alerts three months out, six months out, nine months out to revisit my own AI strategy. And I think that's a really good practical thing: recognize that the world is changing so fast that what you think about it should be deliberately and regularly updated.
That's all for this episode of Beyond the Prompt. But hey, before you go, would you do us a quick favor? Would you hit subscribe? We've got a bunch of amazing advice coming your way and we don't want you to miss any of it. We'd be grateful if you'd like and share this episode with someone you know who's also curious about how to add AI to their life and their organization. Until next time, take care.