¶ Intro / Opening
What does it look like to build a company on top of AI? We serve 92% of the Fortune 500. Nate Gonzalez, head of business products at OpenAI. And, like, what kind of traits do you think the OpenAI PMs that you hire have, or what qualities do you look for?
Entrepreneurialism. Do they have just very high grit and determination, and are they willing to work on really hard problems? And there's this quote that I love: never underestimate the amount that you can get done in 10 minutes' worth of time. And it's really just stuck in my brain of:
what can I do with the model to make myself go faster, get these things done, and just keep progressing things forward? You're, like, at the top 1% of using AI to improve your job, right? It's basically your job. So maybe you can name, like, three of your favorite AI workflows that you personally use a lot.
Okay, welcome, everyone. I'm really excited to have here with me Nate Gonzalez, head of business products
at OpenAI, and over 92% of Fortune 500 companies already use ChatGPT Enterprise. So super excited to talk to Nate about how OpenAI builds products, what traits he looks for in PMs, and how he uses ChatGPT personally to save time. Welcome, Nate. Hey, thank you, Peter. I really appreciate you having me. So excited to dig into this. One other thing, just framing-wise: yes, 92% of Fortune 500s leverage our enterprise product. We also have
millions of smaller companies, enterprises, and mid-market companies that leverage our team product as well. So there's an interesting motion that we have that's a mix of self-serve and our fully sales-managed deployment process. With larger, more complex customers, our sales team is working directly with them to go get value out of the product. So we cover both ends of that spectrum. That's awesome.
And I'm sure there are also millions of employees just using ChatGPT at work on their personal accounts. There is plenty of that as well. Yes. Okay. So you just had some big launches this week, right? You launched connectors and record mode. Maybe you can briefly talk about what these features are. Yeah, exactly.
¶ OpenAI's latest features for ChatGPT at work
So first, let me talk about what it is, and we can get into the genesis of it if that's interesting. Just to start out: what we launched is the ability for companies to connect ChatGPT to their internal knowledge sources. Tangibly, what that means is, let's take something like Google Drive or SharePoint. Either at the company level, via kind of a service account, or at an individual level, say via OAuth, the user is able to connect
directly to that underlying data source. We respect the permissions that live inside of the organization, and then the model is able to pull and read from that data source. So if I ask a question that would really benefit from ChatGPT having knowledge of something that's going on inside of my company, I'm able to get that information back.
So in addition to all the pre-trained data that ChatGPT leverages, and the ability to search the public web, where you get both the large fact base of human history and recency from the web, now you're adding in the private knowledge side. So that's the large part around connectors. Two pieces of that: we've launched four different connectors to the biggest internal knowledge sources, sorry, knowledge stores. So your
Google Drive, your SharePoint, Box, Dropbox, available in ChatGPT directly. And then, I think it was 12-ish connectors in deep research, so that you can combine the ability for Deep Research to go search the web with the number one requested feature enhancement from our Deep Research users, which is: this is awesome, I now want to be able to search over my internal data as well.
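(To make the permission-respecting behavior Nate describes concrete, here's a minimal sketch in Python of what a connector-backed retrieval step might look like. The `DriveConnector` class, its method names, and the ACL fields are all hypothetical illustrations, not OpenAI's actual implementation.)

```python
from dataclasses import dataclass

@dataclass
class Document:
    doc_id: str
    title: str
    text: str
    allowed_users: set[str]  # ACL mirrored from the source system (hypothetical field)

class DriveConnector:
    """Hypothetical connector to an internal knowledge store (e.g. Google Drive)."""

    def __init__(self, documents: list[Document]):
        self.documents = documents

    def search(self, query: str, user_id: str) -> list[Document]:
        # 1. Enforce source-system permissions first: the model only ever sees
        #    documents this user could already open in the source tool itself.
        visible = [d for d in self.documents if user_id in d.allowed_users]
        # 2. A naive keyword filter stands in for real semantic retrieval here.
        terms = query.lower().split()
        return [d for d in visible if any(t in d.text.lower() for t in terms)]

# The model's answer is then grounded in pre-training + web search + these
# permission-filtered internal documents.
connector = DriveConnector([
    Document("1", "Q3 roadmap", "connector launch plan for Q3", {"peter"}),
    Document("2", "Exec comp", "confidential salary bands", {"cfo"}),
])
print([d.title for d in connector.search("connector launch", user_id="peter")])
# -> ['Q3 roadmap']  (the confidential doc is never surfaced to this user)
```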
So I'll pause there, and we can see if there are any follow-ups, and then we can get into record mode. Yeah, maybe you can talk about, I mean, launching 13 connectors is a lot, right? Maybe talk about how this went from ideation to launch, and how you built it. Yeah, sure thing. So the ideation on this started a while ago.
It's not an unreasonable thing to think out of the gate that, okay, the models all get much smarter with more context, right? And so to be contextually relevant for your business, being able to access data sources and have knowledge of what's within your business is really important. Even more so, it's a foundational building block for us to get agents to a place where they have very high accuracy and fidelity in the actions that they can go take. So be those
read actions, where you're asking an agent to go do a whole bunch of things for you, and deep research in ChatGPT is a good example of that agentic capability, to go pull internal information and bring it back; or the ability to eventually start to take write actions with agents, where now they can go execute on your behalf. Operator is a good example of this type of capability. Both of those benefit from this context, where, you know, now
I'm asking a question, I'm directing the model to go do something, and the model has the internal context to operate. So that was the general idea. Now, when this came about, this was prior to the launch of reasoning models. I think it was November-ish, November of last year. And so what we really had was the 4o paradigm, which is very much based on
you know, very quick call-and-response, where a lot of pre-trained and post-trained knowledge exists that the model is calling on. And so that was how we were initially thinking through the connectors. It was like, great, okay, we really need to think about how you're syncing and indexing repositories. That becomes a really important aspect so that you can drive down that latency
and make sure you have a really low-latency, really high-quality experience. The interesting thing in the shift to the reasoning models is that the latency constraint is relaxed to a degree, because the model has multiple different turns to go get you the right answer before it brings back that information. You know, reasoning generally works as: you ask a question, the model goes out, it forms a hypothesis, it
looks at several different variants of that hypothesis in tandem and pulls them all back together. And so this is how we evolved to be able to scale more quickly on the connectors, in addition to adopting MCP. I'll pause there, and see if that gives you broad context on how we got to where we are.
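(Here's a toy sketch of the pattern Nate describes: the model proposes several hypothesis queries, explores them in tandem, and synthesizes the results. The function names and hard-coded query variants are purely illustrative; a real reasoning model generates these itself.)

```python
import concurrent.futures

def generate_hypotheses(question: str) -> list[str]:
    # A reasoning model would propose these variants itself; hard-coded
    # rewrites stand in for that step here.
    return [question, f"background on {question}", f"recent updates: {question}"]

def search(query: str) -> list[str]:
    # Placeholder for a tool call: web search, a connector, or an MCP server.
    return [f"result for '{query}'"]

def answer(question: str) -> str:
    hypotheses = generate_hypotheses(question)
    # Explore the hypothesis variants in tandem. Because the model gets
    # multiple turns, per-call latency matters less than it did in the
    # single-shot 4o paradigm, which is why live search can replace
    # pre-synced indexes for many connectors.
    with concurrent.futures.ThreadPoolExecutor() as pool:
        results = [r for batch in pool.map(search, hypotheses) for r in batch]
    # A final reasoning turn would synthesize these into one answer.
    return " | ".join(results)

print(answer("Q3 connector launch status"))
```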
Yeah, that's super helpful. I mean, I think every company has a wide range of internal knowledge databases, but the quality of some of this data can be pretty questionable. Some of this stuff is out of date. So there's probably a lot of thought that goes into: if I ask a question, which files does it pull, and which version of those files? Yeah, yeah, exactly. So we did a lot of post-training on that specifically, leveraging our own data, leveraging synthetic data, where we look at that notion of recency
and also, kind of, say, seniority of authorship, so the notion of a social graph in there as well, to be able to surface the most relevant content. Just think about somebody who's new and who's onboarding onto the product directly,
or sorry, onboarding into, say, OpenAI. They don't have any deeper context on what's going on. There might be 100 documents that have been written about a specific subject. So we spent a ton of time working to make sure that the most relevant documents are being pulled forward by the model, and you're not just getting a generalized return of: here are the 30 different documents written on a topic, go figure it out. Got it.
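(A hedged sketch of what blending recency and seniority-of-authorship into ranking could look like. The weights and fields are invented for illustration; Nate says the real system learns this via post-training rather than hand-tuned scoring.)

```python
from datetime import datetime, timezone

def score(doc: dict, author_seniority: dict[str, float]) -> float:
    """Blend semantic relevance with recency and seniority of authorship.

    Toy weights; the production system is post-trained, not hand-tuned.
    """
    age_days = (datetime.now(timezone.utc) - doc["modified"]).days
    recency = 1.0 / (1.0 + age_days / 30.0)          # decays over months
    seniority = author_seniority.get(doc["author"], 0.1)
    return 0.6 * doc["relevance"] + 0.25 * recency + 0.15 * seniority

docs = [
    {"title": "Launch plan v7 (current)", "relevance": 0.8, "author": "lead_pm",
     "modified": datetime(2025, 6, 1, tzinfo=timezone.utc)},
    {"title": "Launch plan v1 (stale)", "relevance": 0.8, "author": "intern",
     "modified": datetime(2024, 1, 1, tzinfo=timezone.utc)},
]
seniority = {"lead_pm": 0.9, "intern": 0.2}
docs.sort(key=lambda d: score(d, seniority), reverse=True)
print([d["title"] for d in docs])  # the current, senior-authored doc ranks first
```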
And yeah, I mean, I didn't play with it myself yet, but I saw the video, and I think it's amazing. It unlocks so much. You should. I mean, you should tell us what you think. We'd love the feedback. For us, we've been playing with it a lot internally, for months, as we've been in beta and dogfooding, just really trying to drive the experience.
We use it every day, not just to dogfood the experience, but to drive our own workflows. And that's, I think, the really important thing for us internally. How do you, you know... I think there's this book called Thinking, Fast and Slow from
Daniel Kahneman, and I think maybe it corresponds to the reasoning models versus 4o. How do you decide, with this product, when you want to think fast versus slow? That decision boundary does exist, right? And to some degree, this is post-training on understanding the intent of the user question, so that we can figure out: okay, great, what's the depth of searching that we need to go do to be able to get the right answer to this question?
And so the generalized paradigm is right: 4o is thinking fast. It's got a whole bunch of knowledge. If you ask me a question on a topic that I know, I can give you a really quick summary answer. If you ask me a deeper question, like, hey, do a really deep-dive analysis on this industry, I probably want to think for a minute, structure that, and bring it back to you. And so that's a natural decision boundary that exists right now, and we push that into the Connectors product and the Connectors experiences.
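(A minimal sketch of that decision boundary. The keyword heuristic below is a stand-in; as Nate says, the real boundary comes from post-training on user intent, not string matching.)

```python
def route(question: str) -> str:
    """Toy fast/slow router: quick factual asks go to a fast model,
    open-ended analysis goes to a reasoning model with deeper search."""
    slow_markers = ("analyze", "deep dive", "compare", "research", "why")
    wants_depth = any(m in question.lower() for m in slow_markers)
    return ("reasoning model + multi-step search" if wants_depth
            else "fast model + single lookup")

print(route("What's our SharePoint connector called?"))   # fast path
print(route("Do a deep dive analysis on this industry"))  # slow path
```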
Got it. And, you know, some of the most relevant recent information in your company is just the meetings that happen, right? And the notes that you take during meetings. So maybe we can talk about record mode a little bit.
So record mode, you know, you could think about them as two distinct products, but part of the reason that we launched these and talk about these together is that it actually rounds out that knowledge picture. So go back to what I talked about earlier: you have the pre-trained knowledge that the model has,
which you then post-train against for specific use cases to make sure that you're really good at answering those questions. That's the fast-thinking paradigm. You have the reasoning paradigm that allows you to then go deeper, which benefits from public search and also benefits now from connectors to your internal knowledge stores. And that's all the information that's written down. So the big other source of information, to round out the picture of what institutional knowledge looks like,
is what is actually spoken in meetings, which doesn't necessarily get recorded, doesn't necessarily even get transcribed accurately. So you have shorthand notes that you're pushing out to colleagues while you're trying to define action items and the like. And so what we've done is, yes, there are many companies that have built an AI version of recording meetings.
There's a ton of fun stuff to go into there that we're going to keep driving into, to make that much more feature-rich and make meetings much more first-class in how we think about this. But we also wanted to have just a generalized capability for anybody to record, because the purpose here is actually to take that information and model it just like internal knowledge that you might have in, say, Google Drive.
And so then in the future, I can recall from that. I can actually say, great, what did Peter and I talk about two weeks ago? And then I can pull back that summary. We timestamp that summary, so for the individual action items that are listed, I can actually go to the underlying transcript from there. And so I'm able to get the richer context, to the degree that I need it, and traverse between those layers.
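(Here's a small sketch of a data model that supports that layering: a timestamped summary whose action items link back into transcript spans. The class and field names are invented to illustrate the idea, not OpenAI's actual schema.)

```python
from dataclasses import dataclass

@dataclass
class TranscriptSegment:
    start_sec: float
    end_sec: float
    speaker: str
    text: str

@dataclass
class ActionItem:
    description: str
    timestamp_sec: float  # links the summary line back to the transcript

@dataclass
class MeetingRecord:
    title: str
    segments: list[TranscriptSegment]
    action_items: list[ActionItem]

    def context_for(self, item: ActionItem, window_sec: float = 60.0) -> list[TranscriptSegment]:
        """Traverse from a summarized action item down to the underlying
        transcript around its timestamp, i.e. the 'richer context' layer."""
        return [s for s in self.segments
                if abs(s.start_sec - item.timestamp_sec) <= window_sec]

meeting = MeetingRecord(
    title="Peter <> Nate sync",
    segments=[TranscriptSegment(120.0, 135.0, "Nate", "Let's ship the beta Friday.")],
    action_items=[ActionItem("Ship beta by Friday", timestamp_sec=120.0)],
)
print(meeting.context_for(meeting.action_items[0])[0].text)
```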
Yeah, I mean, I think there's an advantage here, because meeting notes are just another knowledge source that a company has. So if you have just a pure-play AI meeting product, it's not as comprehensive as what this is, right? It's another part of the knowledge you can get. Yeah. That's right. And how did you guys, let me go a little deeper here: evaluations are really important for these AI products. So how do you guys decide
whether this thing is good enough to launch? What kind of stuff do you look at to determine how good the meeting notes are? Yeah. So the evaluations are really important, and we need to set that initial bar. It's like any metric that you've established: the first step is to align on what the right north-star metric is. You then have to be able to measure it, get a baseline, and start to be able to hill-climb, quality-wise, against that baseline.
And so that is the process that's in play there. In terms of then assessing what that bar looks like, a lot of that is working internally in our beta process, where we are testing this product every single day. We use this internally all the time. So there's very much this notion of: great, what does the eval look like? Does this actually work in practice as we go play with it? And so there's the combination of those two elements, and then also external
alpha and beta testing that we've done with customers as well, where we take that feedback to understand whether customers are actually getting to value. Because there is a version of the eval world where you want to get to perfect before you launch, and I think that's not the general ethos of what we go for. We want to get to: here's a really high quality bar that we set, but we want to get to user signal as quickly as possible, because that's really where
your evals matter, in a very deep sense: when we get into product land, it's making sure that users like yourself and others are getting value out of the experience. And is it kind of like getting the user to tell you how accurate the transcription is, how useful it is, that kind of stuff? Yeah, we try to track that.
Accuracy of the transcription is something we look for; there are different ways and signals that we use for that. So we will get the qualitative feedback directly from users, and we're able to collect whether or not it was a good or bad summary and the like, which starts to let us drive that quantitatively.
There are other signals that we look for in terms of how people ask follow-up questions and the like, whether or not we're actually being clear or returning the right information. Got it, got it. Yeah, I think my theory is that evals just require a lot of manual involvement from a lot of people, to get the feedback loop going. I think initially, yes. And I think it's a process that, generally, the industry is working to get
much better at, as people realize how important it is to actually building good products in this space, given that we need AI to be not just useful but reliable, particularly in a work context, where the focus is: great, if I need AI to drive workflows inside of my company, I need to make sure that it is reliable and accurate.
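(To make the baseline-then-hill-climb loop concrete, here's a minimal sketch. The metric, baseline, and launch bar are invented numbers; the pattern of measure, compare, iterate is what Nate describes.)

```python
def summary_quality(feedback: list[bool]) -> float:
    """North-star proxy: share of summaries users marked good (thumbs up)."""
    return sum(feedback) / len(feedback)

BASELINE = 0.72    # measured once, before any changes ship (illustrative)
LAUNCH_BAR = 0.85  # the quality bar set ahead of launch (illustrative)

def ship_decision(feedback: list[bool]) -> str:
    current = summary_quality(feedback)
    if current < BASELINE:
        return f"regression ({current:.2f} < baseline {BASELINE}): hold"
    if current < LAUNCH_BAR:
        return f"hill-climbing ({current:.2f}): keep iterating, gather user signal"
    return f"bar met ({current:.2f}): ship and keep measuring in production"

# e.g. thumbs up/down collected from internal dogfooding plus customer betas
print(ship_decision([True] * 88 + [False] * 12))
```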
Yeah, you can't have it hallucinating a bunch of meeting notes. That's not good. Yeah. Okay. So let's switch gears a little bit. I want to talk about the product org at OpenAI a little bit more. I found out, I think from Kevin's interview with Lenny, that there are actually 30 PMs or fewer at OpenAI. So I'm just curious, why is the PM team so lean? I think there are like 5,000 people now, right? Yeah.
¶ Why OpenAI has less than 30 PMs for 5,000 employees
There are a couple of things at play here. I would say the most important one is that we very much want to be the model of what it looks like to build a company on top of AI. And that means: how do we extend every single one of our employees? So yes, our PM team is lean. Our engineering team, relatively speaking, if you think about the size and scope of the business, is also relatively lean.
And that's because we are leaning in, every single day, to what we can do better and faster working with the models directly. And so it allows myself and my teammates each to extend our own capabilities as PMs, to be able to cover more ground. Got it. So basically every PM has an AI copilot, ChatGPT, or maybe even agents
helping them do the work, right? That's kind of the idea. And what kind of traits do you think the OpenAI PMs that you hire have, or what kind of traits do you look for? Yeah, I mean, there are a couple that we look for
¶ What traits OpenAI looks for when hiring PMs
pretty consistently here. It's entrepreneurialism: do they have just very high grit and determination, and are they willing to work on really hard problems? I'd say most important is product sense: the ability to deeply understand user needs, creatively brainstorm, be able to justify solutions, and then balance between the user level, you know, fit-for-purpose of the product you're building against the problem statement that you crafted,
as well as business considerations: how are we thinking through what this actually means for our ability to serve consistently? And then there's always that rubric of: how do you tie it back to the impact? How do we tie it back to the mission? That is the thing that we want PMs to be principally focused on. We also screen very heavily for execution capabilities. And then, too, we want people who are very curious.
Yes, it's great if they've got prior experience or deep experience in ML. It's also great if they just have super high user empathy and are able to dive in and understand what's happening in the ecosystem, how it's evolving, and how they can use it to better solve user problems.
Yeah, maybe you want people who are... I mean, personally, I'm always looking to use AI to be more lazy, or to save time. So you want people who are trying to integrate AI into their workflows, right? Yeah, although I wouldn't frame it as lazy as much as: what can I go get done faster? I'll give you an example of how I like to use AI, and some interesting framings there. One element is
internal research. So I'm trying to get up to speed on what's happening on a specific project within our research organization, or the technical implementation of a specific system within our engineering teams. This is now something that I can get up to speed on much, much faster without an endless amount of meetings to sit down and understand, where you're pulling time from those other teams. So that's another element that lets us all run faster: you're able to onboard your own context, not
onboarding to a new company, but, at the pace that the industry is evolving, being able to consistently, almost, re-onboard yourself: oh great, here's the next topic, how do I understand that, how might I now apply this to actual product work? And the other part is, I mean, there's this quote that I loved from, this is probably corny, but the Acquired podcast, the IKEA episode, where
¶ The 10-minute AI hack that changed how Nate works
at one point in time the IKEA founder said something like: never underestimate the amount that you can get done in 10 minutes' worth of time. And it's really just stuck in my brain.
Anytime I'm like, okay, it's 10 minutes before a meeting, take a breath, you know, grab a snack or whatever, it's: what can you get done in this 10-minute period? I've got this laundry list of things that I need to do every single day. I've got that punch list.
What can I do with the model to make myself go faster, get these things done, and just keep progressing things forward? So I think it's less lazy, and much more focused: how do I become as productive as possible at leveraging models?
Yeah, I usually try to get as far as possible with AI, you know, working on a strategy document or doing research, and then only when I get stuck do I go consult my teammates, and then kind of start again from there. So you can get quite far. Yeah.
Speaking of researchers, you probably work very closely with the researchers. How do you plan out roadmaps when you don't really know what's coming, or maybe you only know a couple of months down the line, on the model side? Yeah, a couple of ways. I mean, one, you take kind of the traditional
value-based product operating model of, you know, three in a box, with PM, design, engineering, and obviously our data science colleagues. We all scrum together in a pod, as you might expect. And I think the additional part here is the research element.
So we do work extremely closely with the research teams to understand what underlying research is happening and the relative maturation of that, and we're working upstream with them to develop these products that we're putting out. There's kind of a staged process: as you think about research that is more nascent, it's having knowledge of that, so we might think about what kind of product experiences we could craft there.
Again, those ideas can come from multiple different places. They will come from PMs, they will come from engineers, they will come from individual researchers: for whatever type of product capabilities are emergent, what could you now go build to further drive the impact of our mission forward? So there's a lot of bottom-up
ideas that come out of that, and then what we want to do is drive those into our product road-mapping process. So that's, I'd say, the main way that we end up working there. Yeah, that all makes sense. And how far out do you guys plan the roadmap? Maybe on the enterprise side you plan a little bit further out, but
this stuff is changing every three or six months, right? How far do you go on the roadmap side? I mean, we run a quarterly planning process. The reality of that is, you know, as soon as you wrap the plan, it's out of date, and you're really using it as a trade-off framework: here are the things that we believe are most important to drive forward,
how we are getting and delivering value to users, obviously on the consumer side and also on the business side. Now, based on that, what's highest impact? And so as new things roll out, we're constantly reassessing that list. Got it. And how do you balance between, you know... I think one of my pet peeves is a product org that spends a lot of time on internal planning and internal reviews, and they don't
talk to customers. What kind of advice do you give your team on finding the right balance? We try to minimize the former as much as possible, the actual process around the road-mapping, because, again, it's such a fluid, constant process of iteration and development. It's not, oh, we have no idea what we might go do next quarter. There are so many things that we could go do
that we have pretty high conviction in; it's how do we actually focus on the right ones that have the highest impact. And that necessitates the latter part of what you just said: have we been talking and engaging with customers? Do we deeply understand the analytics around our products, and where they are good or where they are deficient? We need to really lean in to make sure that we're getting the right level of impact out of this.
Yeah, I think especially in enterprise, if you just talk to five or ten customers, that's kind of how you build the process, right? They probably all complain about the same things. We do talk to enterprise customers. I would say, you know, I probably end up having
four or five conversations directly a week. And some of it is letting folks know what we're doing and what we're planning, and getting their feedback very directly. And yes, very specific themes start to emerge.
And so you do end up getting that direct line to customers, which our product team has, but we're also in very close partnership with our go-to-market team, who are sitting with customers every single day.
We have a bunch of feedback loops to get that information back into the product. So then you have multiple different sources that we're considering when we're going into planning and when we're building: direct user feedback, qualitative and quantitative research, and then the feedback that's coming in from go-to-market. So we have a bunch of different data and signals that we use.
And you mentioned how ideas sometimes come from the bottom up. Can you give an example of that, from your team or anyone else? Yeah, a good example here is the Canvas project. I believe the researcher pitched the Canvas idea in her first month at the company.
Yeah, I think it was somewhere around the July 4th break, something like that. Her manager agreed, immediately staffed five to six engineers, and that team just formed. They kind of team-formed really quickly. It's like,
hey, there's a really interesting idea, we think it's super high leverage, these are the reasons why, who's up for working on it, who wants to go join and drive this forward? It's not like conscription, where we need to go pull these people in. It's a lot of people saying, I want to go work on that problem, and being able to gravitate to: here's a really important problem, let's go dive in and solve it now. And so then Canvas became kind of our first major
UI update to what had been the initial ChatGPT release. And so you go from a place where you have just a very basic chat interface to a much richer experience that has a whole bunch of different applications, both now and in the future, solely from a single individual. This was not part of her product roadmap or a specific remit necessarily. It was, you know,
the best idea wins, and that's what we drive for. It kind of doesn't really matter where it comes from within the company. Got it, got it. So it's all about being high agency, taking the initiative, making things happen yourself, right? Yeah. And, like,
yeah, I guess in the interview process, you're probably not looking for some 10-year FAANG PM. You want people who have seen the failures and have the grit, you know? Yeah, exactly. Got it. Okay. I mean, the company moves so fast, man. Just from the outside, I see you guys shipping like every other week. It's pretty wild. But, you know,
how do you balance that speed with other things? Like, what's the biggest misconception about working at OpenAI? Sure. I mean, I think one big misconception
¶ The most surprising thing about working at OpenAI
that I hear is that moving quickly involves cutting corners, particularly around, say, safety. I think the thing to be 100% grounded on is that our team is principally, deeply mission-oriented. So go back to several questions you've had about how we think about what a successful PM looks like at OpenAI.
The successful PM at OpenAI is 100% focused on: what impact am I coming in to drive? And the reason I'm here is because I want to go drive that impact. So that question is front of mind as people are thinking through what we are delivering and what we are shipping. It simultaneously pushes the urgency to ship quickly, but also to ship responsibly. So there are a whole bunch of things that we pause on
that are not ready to go out, because we feel like, okay, there's actually a bunch of safety eval work where we need to make sure that we are hitting the bar before this goes. So we will hold on those. I think the interesting thing to think about is: there's a whole bunch more that we could just ship. But we are actually holding that bar very deliberately, to make sure that this is of the quality that is necessary. But again, I think there's a culture around both
urgency, how we use our tools, and how we can go drive that impact, which drives that underlying speed and execution velocity. And so those things, you could think about them as opposing constructs, but in reality they are not. It's good to have that tension, right? You want to have the urgency, but at the same time, you want to uphold the quality bar. Yeah. Absolutely. And, you know, just building AI products myself, a lot of times I find, you know,
this thing would be much better if only we had a better model. It only works like 60% of the time. So let's just wait around a few months until OpenAI ships a better model. I don't know if that's what you guys do.
Yeah, does that happen internally too? Like, if only we had something better, let's just pause until the model improves? No, no. I think it's: we have line of sight, or belief, or conviction on what the better model that's coming would be. So let's understand that, and understand what's the right first step to take on that pathway with what we have right now. Okay. Or drive even more focus and urgency on how we accelerate our efforts to be able to get signal
into that model, so we can develop it to a better place where it interplays really well with our products. Got it, got it. And let me just ask a quick question about impact. When you say impact, is it mostly about business impact, metric impact, or are there multiple ways to measure impact here? I mean, the main way that we think about measuring impact is: how many customers are we serving, right? Both on the consumer side
and the business side. Within organizations, how many customers have gone wall-to-wall with us? Are we kind of the blanket default, useful tool inside of businesses? Utility is the really important thing for us to understand inside of businesses, because, going back to the mission, the reason that businesses are critical for us is because that's where most of the very valuable economic work globally happens.
Right. Yes. That is the scale vector for how you drive global impact, global change. And being able to make sure that we are working directly with businesses to help them transform their own industries, you know, get to drug discovery faster, elements like that, those have
just global impact that is super important for us. And so that's kind of the high-level way we think about impact. That's true. Yeah, if ChatGPT Enterprise succeeds, it'll make the whole economy more productive, hopefully. Yeah. And so let's talk about that a little bit more. What are some barriers to getting employees to use AI more in companies, and what tactics have been useful in overcoming some of them? Yeah.
¶ The biggest barriers to AI adoption and how to overcome them
If you rewind to, I'd say, even last year, I think the general vibe was that there was a ton of experimentation within companies. A whole bunch of AI products had just launched. It wasn't immediately clear to folks who weren't playing with these every single day what to apply, and how, within the context of their business.
And really, what we've seen going into 2025 is the shift into actual full deployment of these tools. A lot of it is: where are the value-driving use cases coming from? How do we find those? How do we lean into those? How do we focus our organization around those? That, I think, has been the big shift that we've started to see. And one trend that's driving that shift is the internal rise of AI champions. It's not,
you know, okay, let's get AI to everybody. It's: this is how I can help transform my business leveraging AI, and OpenAI, we want you to partner with us to go make that happen. So then this becomes the combination of the product work that my team drives, combined with the go-to-market work, where we are sitting jointly with customers to understand that,
define that, and make sure that they're getting the right level of deployment to drive there. Specific examples are companies like Fanatics, Moderna, Morgan Stanley, where, on the Morgan Stanley side, they're leveraging a lot of our work that's now embedded into their wealth management services. And so they were like, cool, here's the problem that we think is really, really interesting for us
to go solve, that's very high leverage. We will do a lot of things with you, but we're going to focus most of our attention on this high-leverage use case. Go crack that, and we'll then move on to the next one.
Got it. And these internal champions aren't necessarily the CEO or the execs. It can just be some IC or some employee, right? Yeah, exactly. There'll be heads of AI, sometimes heads of different product divisions, often CIOs of companies who want to drive transformational change. It's a mix of different champions, and I think that's the interesting thing too: it's not uniform. Similar to the way that
we talked about with OpenAI, it can be a little bit more bottoms-up: there is an exec who feels passionately about what we can go drive and change, they're banging the table inside of their company, and we are working with them directly to be able to map to that.
Got it. Okay. So if I was some employee, or some executive at a company, and I'm like, hey, I really want my employees to adopt AI as soon as possible, do you have some steps that I should go through? Yeah. I mean, a lot of it is:
okay, let's actually define broad adoption. Yes, great, but to what end, right? We can go drive broad adoption; there's a whole bunch that we can do about getting people to understand, getting to initial value. There's a lot of product work we can do, too, to drive people to value: if you're a data scientist versus an engineer, how do we think about the in-product work that we do to drive you toward value quicker?
It's understanding your use case: what jobs are you trying to get done in there? How do we then curate the experience for you? That's a big area that we dive into on the product side, which answers the general question. But to the specific question that you asked, it's: great, what use cases inside of your business, or what business processes, are highest leverage to focus on? And then wrapping back to:
with the tools that we have and that we are building out, at the speed that we're advancing, what are the one or two bets that we're going to take internally, and dig in on how we then graft AI into those workflows? Because
again, it's these one or two vectors of initial change that drive outsized value for those organizations. So I'd say the general advice, if I recap, is: broad deployment. Get this in the hands of every single employee so they get the familiarity, because you want them to be fluent in these tools, because that's actually how you drive some of this bottom-up culture that we have at OpenAI into any business.
In addition, you also want to find the couple of use cases that you think are just going to drive outsized value for workflows that you have inside of your company today, and then focus on those. And that's where a ton of our go-to-market attention is specifically focused. And I guess some of these use cases are kind of what you prioritize, right? Like, you know, looking up internal sources, streamlining meetings, that kind of stuff, right?
Yeah, I would say no, not necessarily. I think those are all tools to an end. It's more, let's go back to the Morgan Stanley example, right? It's: we have a wealth management product, we have internal metrics for what success looks like there, and we have internal aspirations for how big we could grow that market to be. Now, based on that,
how do we structure a product experience that we think is going to be better? How do we evaluate that and drive progress against that specific goal? And so something like connectors, something like record mode, are then building blocks, elements that help you drive that outcome. These are tools to drive an outcome; the purpose is not just tools for tools' sake. It's making sure that you can identify the highest-leverage
product opportunity you have within your own business, and then be able to push it forward. And have you seen employees of these companies build their own custom GPTs or workflows that get broad adoption, kind of, you know, from the bottoms-up? Yeah, exactly. Moderna is a really good example of that. I'm trying to remember, it's
thousands of GPTs, if I'm not mistaken, that have been deployed internally. And again, it really speaks to that bottoms-up adoption culture: the ability to create a GPT and share those GPTs with colleagues, so they get the collective benefit of the knowledge work that they are doing, exposed and extended through GPTs.
So I'd say that's a really common pattern that we've found. And that's another one where it was driven by an internal champion within Moderna, and then pushed down: you kind of get this, hey, here's an opportunity for you all to go build.
And then it's like many different flowers bloom within the company. Yeah, I mean, I think that's one thing that OpenAI really has: being able to share these GPTs. Like, I have all these great projects that I want to share with my colleagues, but some of the other providers don't actually make sharing easy, so it's such a pain in the ass. GPTs make it so easy. Yeah, they certainly do. Okay, so I want to
talk about, I mean, you're at the top 1% of using AI to improve your job, right? It's basically your job. So maybe you can name three of your favorite AI workflows that you personally use to save time at work. Yeah. There's the one I mentioned of doing the internal research to get myself smarter, more fluent, up to date. Adjacent to that is external research: I'm going to be chatting with a customer. We serve
92% of the Fortune 500, and if you think about the industries that spans, it's: great, how do I understand their company and their context, in a way that, while I understand our products and our product context, we can start to find that mapping so we can get to value much more quickly? And then I think there's the other, day-to-day side of this, the productivity side, which is not just...
Yes, there's drafting emails and memos and Slack messages, and trying to make sure that I'm just cranking through those processes much faster. But there's also a lot of internal data analysis that I'm able to do with the tool directly, and the ability to understand code
that I'm seeing from engineers, and, you know, extend my own skill set. So those are the areas that I really focus on. You're, like, looking at PRs and stuff? Yeah, at least I can understand, great, yep, let's understand what went through there. Yeah. I'll tell you, personally, I use this stuff too. Like, you know,
I don't know if you do this, but I go to a conference room and I start talking to ChatGPT. I'm like, hey, I got this feedback on my document, what do you think? Here's some context. And then I kind of work with it back and forth. Yeah, I would say one thing that I do with voice mode specifically, similar, but it is a lot of role play. So, great,
¶ Using ChatGPT roleplay to prep for important meetings
you know, there's going to be a specific candidate that I'm really keen on, that I want to be able to talk to; let's think through the question and answer. Or there's a customer interaction coming up that's really critical; let's work through that.
There's a podcast with a guy named Peter coming up; let's go play that role and work through how it goes, together. So there's a whole bunch of work that you can do to have the model assume another personality, so you can start to hone your own craft and skills and get really crisp on your messaging.
Okay, so you basically set up a project or something, and then you upload a bunch of context so that it can be like Peter, or whoever you're talking to? You can do that, yes. You're uploading the context. Again, this is where, if you go back to the top of the conversation, being able to pull in things like connectors, or record mode summaries of past meetings that we've had, into a project,
means that, great, now voice mode has that background context to help me, so it's able to play this role or be a thought partner. I will often have it critique something that I've either written or that I'm meant to speak about: here's my initial draft, what am I missing?
Or where could this be stronger? What are the weakest parts of this argument? So you go beyond drafting to the actual quality of the output. Yeah, that's exactly what I do with it. I have, like, a coach project, and I do this specifically in ChatGPT because ChatGPT has memory. So I tell it about my life, and then, you know, every quarter I check in with it and get some advice. It's more patient than my wife is, you know? So it's pretty nice. Yeah.
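(In ChatGPT itself this is just a project with uploaded context and memory. As a rough programmatic equivalent, here's a hedged sketch against the OpenAI Python SDK; the file name, model choice, and system prompt are invented for illustration.)

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

draft = open("strategy_draft.md").read()  # hypothetical document to critique

# Have the model assume another personality and critique the draft,
# mirroring the role-play workflow described above.
resp = client.chat.completions.create(
    model="gpt-4o",  # model choice is illustrative
    messages=[
        {"role": "system",
         "content": "You are a skeptical podcast host named Peter. "
                    "Critique the draft: what is missing, where is it weakest, "
                    "and which assumptions or logical fallacies stand out?"},
        {"role": "user", "content": draft},
    ],
)
print(resp.choices[0].message.content)
```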
It's good. Okay, so let's wrap up by talking about the future of product management. We're clearly seeing this trend where AI is going from, like, copilot, thought-partner stuff to, you know, at some point you can
delegate tasks to AI, and then you can go watch Netflix for an hour, come back, and it'll have finished the task, right? So how is this going to change the PM role? Do you think every PM will have multiple AI agents that they manage, or how is this going to go?
The goal for us is extending everybody's productivity. And so that's a lot of what we talked about: what do PMs do inside of OpenAI today? And so it's really thinking about ChatGPT as this extension
¶ ChatGPT's future: From assistant to trusted coworker
of yourself, of the team, as like a virtual coworker. So you can imagine waking up in the morning, you sit down, and ChatGPT, more of a personalized interface, has a list of interesting tasks that have come in.
And then you're able to sit down and say: great, these are the ones that I want to delegate out to you; bring them back to me when they're done. I'm going to take one through three on this list, and have the model be working against those others. So now, behind the scenes,
that could mean orchestration with a lot of different agents. But I think focusing the interaction with ChatGPT, kind of employee by employee, becomes really beneficial, to reduce cognitive load and really have the model do a lot of that: okay, now I know I need to go do deep research, use Deep Research as a tool; now I need to go use Operator as a tool. And so there are a lot of interesting ideas in there. Got it. Okay, so it's almost like
a hub where you have some interns working for you, and they kind of synthesize information for you and do some of the groundwork. Interns who can do PhD-level math and deeply understand any code that's been written. So I'd say more than an intern, right? It probably felt much more like an intern in 2022, 2023, parts of 2024. I think as we've gotten into reasoning models
and just better UI and UX paradigms with those reasoning models, I'd say it's much more of, not just an intern, but actually a coworker that you're working with, that you trust to go get work done. Got it, got it. Okay, that's very exciting. Yeah, I can see the UI evolving beyond just a simple what-do-you-want-to-chat-about-today interface. So, very excited for that.
So, last question. You know, I think there's some angst about, are we all going to have jobs, white-collar jobs, in a year or so? For people who want to level up their AI skills, or maybe join OpenAI,
¶ The specific skill that will keep your job safe in the AI era
what's your advice? A lot of advice is just, go try the tools, right? But maybe you have something more specific. Yeah. It's not just go try the tools. It's: how do you make these tools an extension of the way that you do work?
That's the important thing. It's not just, okay, great, I kind of know the things these can do. It's: how am I using these tools every single day to extend my own capabilities? That's the really important thing. And again, it's not just how do I write emails faster.
It's both: how do I write emails faster, and how do I write better emails? How do I actually think about the content of the work that I am doing and improve the quality there? And that gets back to the fluency of understanding. If I ask it to write me a draft email, okay. But if I ask it to critique this, and point out weak points in the argument, point out key assumptions or logical fallacies that it might find, it actually allows me to improve.
And those are the loops that you want to find: not just productivity loops, but actual quality-improvement loops in your own thinking, your own process, and your own output. Got it. Okay, so it's like PM 101: start with the customer. What are your own problems? What do you spend your week on? And what can AI help with? That kind of stuff.
And if I'm inspired by this conversation and I want to use ChatGPT officially at work, where do I go? Yep. To get started really quickly, go to ChatGPT Team. And again, we made that easier for a lot of businesses: we enabled SSO as part of the announcement on Wednesday. So that's kind of the front door. Come in, and you'll be able to self-serve and get started really quickly.
And as you want and need much more advanced compliance features, then work with the go-to-market teams directly to find and map use cases to value; ChatGPT Enterprise is the really strong product there. For both of them, we have the same privacy guarantees: we never train on your data. So the data in the workspace, for either Team or Enterprise,
never, never makes it to training pipelines. It's your data; it stays inside of the workspace. You then have a lot of control, particularly on the enterprise side, with a lot of advanced security and compliance controls, to make sure that you're deploying AI both safely and responsibly, relative to your information security, but also the compliance regime that you might be working under across various industries.
I really love how your team approaches this. It's not just like, hey, here's access, go use ChatGPT. It's: what is the specific problem your company has? Let's solve it together. I really love that kind of attitude. That's exactly right. Cool. All right, Nate, well, thanks so much, man. I learned a lot from this conversation. Good deal. I really appreciate the time. Take care, Peter.