You interviewed at 46 companies. What did you learn about what the market is like, what interviewing is like, what the whole scene is? In terms of the space, there are the product companies, infrastructure companies, and the model companies. I found it helpful to put companies in each category and figure out which segment you're most excited about to help narrow down the options, given that there are so many AI companies right now.
Product companies are the companies building on top of the model. Here I think of Cursor, Codeium, Hebbia. Infrastructure companies are the companies building the tools to help AI product companies effectively use LLMs. There's a whole suite of these: the inference providers like Modal, Fireworks, Together; vector database companies like Pinecone, ChromaDB, Weaviate; eval and observability tools like Braintrust, Arize, Galileo; and a whole other suite of products. And then there are the model companies, which are the base of the ecosystem, building the intelligence. You have the big tech companies like Google and Meta building models, and then you also have startups, or not startups, other smaller companies like OpenAI and Anthropic building models as well.
So that's how I kind of think about it. So for me, in trying to be more focused in my search, I decided to focus on model and infrastructure companies because I wanted to keep getting breadth in what I was doing. And I felt like the product companies were too similar to my experience at Coda, which was phenomenal, but I wanted to keep growing. The trade-off was definitely that it's a bit more of an uphill battle, because the work that I had done was not as relevant to model or infrastructure companies. What is AI engineering and what does it take to get hired as an AI engineer? Janvi Kalra is a software engineer turned AI engineer with four years of experience, but already a very impressive background.
During college, she joined a university incubator where she shipped four mobile apps to production for paying customers. She then interned at Google and Microsoft, joined Coda as a software engineer, became one of the first AI engineers at the company, and then interviewed with 46 different AI startups, and now works at OpenAI. In today's conversation, we cover Janvi's decision-making when she decided to join a startup after graduating, when she already had return offers from big tech companies like Google and Microsoft. How Janvi became one of the first AI engineers at Coda despite being told no when she first volunteered to join Coda's in-house AI team. What Janvi works on at OpenAI and why she thinks OpenAI moves so fast. And many more topics. If you're interested in AI engineering or how to make the transition from software engineer to AI engineer, this conversation has lots of relevant tips and observations coming from Janvi. If you enjoy the show, please subscribe to the podcast on any podcast platform and on YouTube. So, Janvi, welcome to the podcast. Thanks for having me, Gergely. You went to a good college, Dartmouth, but you then got an internship at Google. Now, not everyone studying at Dartmouth can get into...
a place that was very competitive. How did you get that internship and then the Microsoft internship? What was the interview process like? And what do you think helped you get your foot in the door, basically? Back then, I didn't know anyone at Google or Microsoft, so I applied through their portal, and I remember for university students, they asked you to write essays on why you want to work there. So I remember in those essays...
talking about the things that I had built outside of classes, as well as why I wanted to work there in particular. I was lucky to get picked up from the stack, to be honest, and then LeetCoded to prepare for their interviews. So tell me about preparing for LeetCode. I mean, these days it is somewhat commonly known, but there are two types of people: some engineers or college students roll their eyes, saying this is pointless, it's not the job, et cetera. And then some people, like you, it sounds like, just went through it, studied it, prepared for it. How did that go? So for Google, that was in my sophomore year, and I remember being surprised that I even got an interview in the first place. So I wasn't studying actively before. When I got the interview, I asked a couple friends, what do I study?
I think they sent us a pamphlet of things to look at. And in that pamphlet, there was that green book. Because back then, NeetCode wasn't a thing. There wasn't Blind 75. And so that green book, I'm forgetting what it's called, but I remember buying that book and locking myself in my room for three weeks. Cracking the Coding Interview, probably. Yes, Cracking the Coding Interview, and just reading as many questions as I could back then. Yeah, you have it. Yeah, Cracking the Coding Interview. I even have a version of it, the white book, which was like 10 years ago. But the author of this was actually a Google interviewer, I think 15 or so years ago, Gayle Laakmann McDowell. And now she actually, I'm not sure if she still does it, but she used to run training programs at companies. At Uber, she came and she ran our training program on how to do coding interviews: what kind of signals to get, how to change it. So it's actually really nice, because she teaches the companies how to do it, so then she can update the book and actually have an up-to-date view of how it works at different companies.
Wow. She definitely changed the game in that I'm not sure how much things were written down before that. So I think she definitely paved the way for NeetCode and other people to build on top of this. Yeah. If you want to build a great product, you have to ship quickly. But how do you know what works? More importantly, how do you avoid shipping things that don't work? The answer, Statsig.
Statsig is a unified platform for flags, analytics, experiments, and more, combining five-plus products into a single platform with a unified set of data. Here's how it works. First, Statsig helps you ship a feature via a feature flag or config. Then, it measures how it's working, from alerts and errors, to replays of people using that feature, to measurement of top-line impact. Then you get your analytics, user account metrics, and dashboards to track your progress over time, all linked to the stuff you ship. Even better, Statsig is incredibly affordable, with a super generous free tier, a starter program with $50,000 of free credits, and custom plans to help you consolidate your existing spend on flags, analytics, or A/B testing tools. To get started, go to statsig.com slash pragmatic. That is S-T-A-T-S-I-G dot com slash pragmatic. Happy building. This episode is brought to you by Sinch, the customer communications cloud trusted by thousands of engineering teams around the world. If you've ever added messaging, voice, or email into a product, you know the pain: flaky delivery and platforms stacked with middlemen. Sinch is different.
They run their own network with direct carrier connections in over 60 countries. That means faster delivery, higher reliability, and scale that just works. Developers love Sinch for its single API that covers 12 channels, including SMS, WhatsApp, and RCS. Now is the time to pay attention to RCS, Rich Communication Services. It's like SMS, but smarter. Your brand name, logo, and verified checkmark, all inside the native messaging app. Built by Google, now rolling out with Apple and major carriers, RCS is becoming the messaging standard. Sinch is helping teams go live globally. Learn more at sinch.com slash pragmatic. That is S-I-N-C-H dot com slash pragmatic. And so how did your internships go at both Google and Microsoft? Must have been really exciting to get in. Google was the first one, right? It was a phenomenal experience.
Exciting for a couple reasons. First, it was just a privilege to get access to these code bases of places that I admired. I remember when I was at Google, I was on the search team. And I would use MoMA, their internal tool, to find documents. And so I remember so many weekends where I was just...
trying to find company documentation on how the search algorithm really works or comb through the code beyond the code that I was touching to get a sense of, well, what makes Google tick? So from an intellectual perspective, it was just very exciting back then. Second, you also learn a lot of technical things that you don't get exposure to in college, like how to effectively operate in a large code base, the importance of writing unit tests.
Now, when I look back, it's trivial. But for a college student back then, those were very important learnings that I really valued getting. And to me, my favorite part was having access to people that were 5 or 10 years ahead of me in their career. I remember over coffee chats asking many of them, you know, what career advice do you have? What are things that you loved in college that I should do more of? And some of the advice that I got really shaped decisions that I made. So that was my favorite part of the internships. I would say in hindsight that, given big tech and startups are such different experiences and you learn so much at each, it would be more educational to do one startup internship and one big tech internship to get a very robust overview of what both experiences are like very early. So looking back, now that you've done both Google and Microsoft, they were somewhat similar-ish? Is it safe to say? I mean, at the high level, right?
You know, every company and every team was different. Yes, at a high level. What was different is I wanted my junior year to work on operating systems, because at that point I'd just taken a computer architecture class and I loved it, and so I wanted to go deeper in the stack. So from a technical perspective they were very different, but from an experience of what do companies look like and how do they work, which is a huge part of an internship, that part was similar. So what did you work on at Microsoft? Was that OS? Yeah, I was working on OS. Specifically, I was working on the Azure OS team. It was a product that lets you interact with Azure blobs locally from your file system. So it hydrates and dehydrates those blobs. You can think of it like Dropbox for Azure blobs. Yeah, nice. That is so...
Cool. I mean, both that you decided that you want to do something a lot less conventional, you know, like not the usual SaaS apps or web apps or whatnot, and that you were able to make it happen. Did you express this preference when you got the internship? Yes, I remember.
talking about my computer architecture class, where we built up a computer from transistors, and conveying how mind-blown I was from that experience and how I really wanted to work on operating systems. And then I was lucky that they put me on that team. That's awesome, but I think there's a learning here of, if you don't ask, you don't get. I just remember when I set up our first internship at Uber in Amsterdam, for that site: once we made an offer to the interns, you go through the interview process, but I also asked people if they have a preference, and most people just do not have a preference. So there is this interesting thing that if you do express your preference, again, worst case, you'll get whatever it would have been. But from the other side, a lot of people often don't speak up, and, you know, the people who are at these companies really want to make this a win-win. Especially for internships, the goal of an internship is to have a great experience, and companies would like you to return. It goes both ways, right? They evaluate you, but you also evaluate them. So they will actually do it. It's just a really nice learning. Like, yes, express what you're hoping for, and it might just happen. Yeah.
These companies have so much IP and so much that we take for granted today, but are really hard technical problems that they have solved. So it's just a treat to then go work on something that you admire and get to actually see how that code works. Absolutely.
Once you're in there, these companies are so amazing with how big they are. Especially as an intern, a lot of doors are open. You can also just ask, and they'll be super happy to help. So then you made a very interesting decision, because now you had interned at Google, you had interned at Microsoft. A lot of students or new grads would be super happy with just having one. As I understand, you could have returned to either, and then you made the decision to not do that. Why? You know, Google, Microsoft, you loved the teams. Tell me about how you thought about the next step of what you would like to do after you graduate.
So I told you how I was having coffee chats at Microsoft, my junior internship, with a bunch of mentors, who mentioned that startups are a great experience as well. So I started to evaluate the big tech versus startup option. And I don't think it's black and white. I think there are really good reasons to go to both. The way I saw it, the upside of going to big tech was, first, you learn how to build reliable software for scale. It's very different to build something that works versus build something that works when it's swarmed with millions of requests from around the world and Redis happens to be down at the same time. Very different skills. So that was one upside. A different upside for big tech in general was that you do get to work on more moonshot projects that aren't making money today. They don't have the same existential crisis that startups do. And so they can work on things like, you know, the great AR and VR research that's happening. Back in the day, I think Google was one of the best places if you wanted to do AI research. There are also practical good reasons to go to big tech. I'd get my green card faster. I'd get paid more on average. And the unfortunate reality, I think, is that the role does hold more weight. People are more excited about hiring an L5 Google engineer versus an L5 from a startup, especially if that startup doesn't become very successful.
With all that said, though, I think there are great reasons to go to a startup. And back then, this was hearsay based on what I heard from mentors. But now, having worked at a startup for three years, I can confirm it's indeed true. First, you just ship so much code, right? There are more problems than people. And so you get access to these zero-to-one greenfield problems that you wouldn't necessarily get at big tech, where maybe there are more people than problems. Second is the breadth of skills. And this is not just in the software engineering space. From a software engineering perspective, maybe one quarter you're working on a growth-hacking front-end feature, and the next quarter you're writing Terraform. But even in terms of the non-technical skills, you get an insight into how the business works, and you're expected to PM your own work. So there's so much breadth over there.
And you just get more agency in what you work on. You get the opportunity to propose ideas that you think would be impactful for the business and go execute on it. So that breadth and learning opportunity to me was a huge upside that got me very excited about startups.
It's just so nice to hear you summarize this, because the reality is, what a lot of people do is they go to one company or the other, either big tech or a startup, and then they're there for a long time, and one day they might switch. But there's a lot of sunk cost fallacy, you know, you're used to this. So some people, actually, after a few years, go back to the same type of company. And so I think there are relatively few people who see this with such a short and focused time difference, to see the different upsides like you have. And as you said, so it sounds like the upsides did happen. So you went to Coda, right? Yes, I did go to Coda. And then how did things go? So you mentioned some of the upsides, I assume, like that all happened there. But what other things? Sounds like things sped up there, actually, from a professional learning and also career experience.
Definitely. I went there for growth and breadth, and I definitely got that in terms of the opportunities that I got to work on. So it was a phenomenal experience. And I'm happy to dive into the specific work I did, but overall, just a phenomenal experience.
But before we do, before the podcast we talked a little bit about how you thought about selecting a startup. Because you did go to Coda, but as I understand it, this was not just, "oh, this looks like a good startup." You actually thought about how to select a potentially great startup, one that would have that kind of growth potential. What was your mental model, how did you evaluate, and how did you kind of rank the startups? What was your application process? So back then, I didn't have startup experience. And I also went to a school on the East Coast where not many peers around me were going to startups. So I very much looked for places where I love the people,
in terms of them being smart, people I can learn from, as well as being very passionate about the product because I think you do your best work when you are passionate about what you're building. So it was definitely something where I looked for from those two lenses. Today, after having been in Silicon Valley, though, for four years, I have a more robust rubric on what I look for. So that's definitely evolved since then. Because one thing that's become super clear after living here is that...
Your career growth at a startup is very contingent on the startup growing. So then how do you choose which startup is going to grow? And that's a hard question. You know, venture capitalists spend all their time thinking about this. And today, what is your mental model? Or for someone...
who has a few years of experience, a bit like yourself, what would you advise them on how to think about different categories of startups, the kinds of risks, the upsides, and so on? There are startups of all different sizes, and the smaller you go, the more risk there is. I think that's part of the game, and that's what makes it exciting, because you also have more upside when there's more risk. That being said, I feel very strongly that all engineers that take a pay cut to go to a startup should have an informed thesis on why they think that company is going to grow during their tenure. And how to actually assess growth is a hard question with no right answer. But my current rubric is looking for four things. First, high revenue and a steep revenue growth rate. Second, a large market where there's room to expand.
Third, loyal, obsessed customers. And then fourth, competition: why this company will win in that space. And I'm happy to go deeper into any of those, but that's at least how I think about assessing different startups today. And it's all relative, because a startup that is pre-PMF will have less revenue than a startup that is Series D with 400 people. And then, when you're thinking about these four different things, so, we'll later get to your actual job search as well, but do you try to find these things? For example, you mentioned customer obsession, right, like how much customers love it. Let's say there's a startup that you're kind of interested in. How do you evaluate that? Do you look it up yourself, do you put in the work, do you try to somehow outsource it? What worked for you?
Because there's no right answer here, I think it's really important to do the due diligence yourself, because you're going to be the one that's responsible for your decision here, good or bad. How I think about things like customer obsession is I look on Reddit, on YouTube, to try to find real users. For more SaaS companies, where you may not have customers writing about the product online, I'd actually find companies that use that product and then go try to talk to them and understand, from the ground, what do people think about this product? Especially if this is a product that I can't use myself, because it's not for consumers, but for businesses instead. I love it. And again, I don't think enough people do this kind of due diligence, and they should. One, I guess, now-famous example is Fast, the one-click checkout startup, where they recruited, actually, there were some ex-Uber folks there who I knew to some extent, but a lot of people were recruited with a shiny diagram that showed headcount growth, and most people did not ask about revenue. Or when they did, they were okay not hearing about it. And even the people who worked there for a while, they ignored it. And there were some people who actually asked about it, and they realized that something was off. But just following your framework, some people who were a bit more diligent could have avoided it. Same thing with customers, for example, there were not many. And one learning that I had back then, talking with engineers who worked there and got burnt: they all told me, I wish I would have done a bit more due diligence and not taken the CEO's word for it, but also asked for proof. Same thing with revenue, runway, those kinds of things.
Yeah, I feel like, you know, at startups, a large chunk of what we're paid is equity. And so you're an investor. So you have the right to all this information. And to me, if a startup's not giving you that information, that is a red flag in and of itself. Yeah, I feel maybe people should think about it like this: joining a startup is a bit like putting in a bunch of your money, a significant amount of your savings. When I did angel investing, if I didn't get information, I mean, you can still put the money in and you can hope for the best, but I think that's called gambling, to be fair. It is. And so that's okay. But then just be honest with yourself: if I'm not getting this information, I am gambling my most valuable time and very valuable years of my life. And that's okay, right? It could work, but it's maybe not the smart way.
Exactly. And as engineers, when we're recruiting, we're LeetCoding, we're doing system design. It's hard to carve out that time to do diligence. And so it's something I think we don't talk about enough. I will say that as a hiring manager, even as a manager: when you join a company and you've previously done your due diligence, you will have a better start. People will remember you, saying, oh, this is the person who actually cares about the business, cares about where it's going, cares about how they can contribute. So on day one, you're already not just seen as, oh, you know, new starter XYZ, but as, oh, this person has drive. I think that comes across. And honestly, if a company is not happy with you just trying to understand the business and see how you can fit in, it's probably a red flag in itself. Let's be real. Yeah, that's true. That's fair. So at Coda, you joined as a software engineer. And you then transitioned, I looked at your LinkedIn, to AI engineer. How did that happen? And how did you make that happen? Because it sounds like you actually had a lot to do with it.
So if we rewind to end of 2022, that was when ChatGPT came out. November, oh yeah. Yeah, big milestone. And, you know, Coda saw the amount of love that this product was getting. And Coda was starting an AI team with two engineers to build an AI assistant to help you build your Coda documents.
At that time, I asked, hey, I'd love to be a part of it, and got a very polite no. So I thought, no problem. I'm just going to start building in this space anyway in my nights and weekends, because this technology is very cool. The first thing, while I was learning, was trying to answer for myself, how does ChatGPT even work?
And through that went down a rabbit hole of self-studying the foundations of deep learning. So starting off with the very basics of what's a token, what's a weight, what's an embedding, to then understanding that, okay, LLMs are just next token prediction.
Going through the history of different architectures, of how we went from RNNs to LSTMs, and then building my way up to the transformer and understanding that, okay, it's positional encoding and attention that has allowed us to scale up in such a good way.
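To make the "LLMs are just next-token prediction" point concrete, here's a minimal sketch (my illustration, not something from the conversation) of greedy next-token generation using the Hugging Face transformers library, with GPT-2 standing in for a modern LLM:

```python
# Minimal sketch of next-token prediction: the model repeatedly predicts the
# single most likely next token and appends it. GPT-2 is an illustrative stand-in.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "The transformer architecture scaled up because"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

for _ in range(20):  # generate 20 tokens, one at a time
    with torch.no_grad():
        logits = model(input_ids).logits              # (batch, seq_len, vocab_size)
    next_token = logits[0, -1].argmax()               # greedy: most likely next token
    input_ids = torch.cat([input_ids, next_token.view(1, 1)], dim=1)

print(tokenizer.decode(input_ids[0]))
```

Everything an LLM does, chat included, ultimately reduces to repeating that last step at scale.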
What this did for me was to just give me some intuition of how this technology works, which gave me a bit more confidence. After having built that foundation, I wrote about it in my blog, and so my team was aware that I was working on this as well. I started to build on top of these models. So I went to a lot of hackathons. My favorite one was a way to learn languages while watching TV because that's the way that I learned Hindi and I wanted a way to practice my Mandarin in that way.
When I was building and doing hackathons, I got a sense of how to actually use these tools. So after five months had passed, when I asked again to join the AI team, I got very much a "heck yes, come join us. We see that you truly care about this because you've been working on it in your free time." And that's when the real learning started, because...
Hacking on top of models in your free time is very different from trying to build it for production, especially because as engineers, our role is to build reliable systems, but you're dealing with stochastic models. So they're very much at odds.
with each other. And when you say hackathons, were these like weekend hackathons, you know, the ones that anyone can attend, you register, and, especially, they were popping up because of the AI hype basically starting? Yes, weekend hackathons. And I also did the project I was telling you about.
That was a language learning tool that was with an online hackathon for six weeks with this company called Buildspace. Anyone can go join. And the way you win in this hackathon is not by what you build, but how many users you get or how much revenue you're generating.
So it's such a fun way as an engineer to not just build something, but actually go and try to get people to use it. So it was a very fun learning experience. And because it was all online, they really encouraged us to build in public. And that in and of itself was a great learning. I love it, because I think a lot of times when a new technology comes out, a lot of engineers, especially the people who have a day job, the biggest thing is like, hey, I can build something on the side, but what should I even build? It feels kind of pointless. You can do tutorials, but especially in this case, there are not many, and the tutorials are kind of not there. So I love how you found a way to have a goal, to enjoy it, to scratch your own itch as well, and combine it. So maybe these online hackathons, or hackathons happening around you, could be a great way to do it. And it sounds like it actually helped you professionally, it even helped your company and your job, because knowing how to use these tools was very much in demand. It still is, but there were not many people who were as enthusiastic and as self-taught. One thing that I learned from that experience was, don't wait for people to give you the opportunity to do something. Just start working on it. I love this. This is such a good mindset. So when you joined this team,
technically, did you become an AI engineer? What do you even think an AI engineer is? I feel it's this kind of overloaded term, so I'd just love to hear how you think about it. An AI product engineer is building products on top of models, and the work entails, first, a lot of experimentation: this new tool came out, so experimenting with what you can build to solve real customer problems, prototyping it.
And then from there, going and actually building it for production. So at its core, it's very similar to software engineering. There are some domain-specific things, like learning how to fine-tune, learning how to write good prompts, learning how to host open-source models. But in and of itself, the foundation is very much software engineering. Yeah, and I guess evaluation is also a big one. Yes, that's a great one. Writing good evals.
And then one thing that was very surprising for me to learn, when I talked with a friend who works at a startup, is how their test suite costs money to run every time, the eval suite. They're like, I don't know, 50 or something like that. And it's like, oh, when I run my unit tests, it costs time and effort, but it's free, it's just time. And now, especially if you're using an API, you actually have this cost, which is, I think, refreshing and just a good way to think about it. And it just forces you to adapt. Yeah, for sure. It's very interesting, because there's no good way to measure the accuracy of a non-deterministic model without using LLMs. And so at Coda, we used to use Braintrust. And it was so interesting how the model is being used to check whether or not it's working correctly.
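As a rough illustration of why an eval suite costs money every run: each case makes one or more paid API calls, often including a second "LLM as judge" call to grade the output. This is a minimal generic sketch, not the Braintrust setup mentioned here; the model name, test case, and grading rubric are placeholders.

```python
# Minimal sketch of an eval loop where one LLM call answers and a second
# LLM call grades the answer. Every case burns API tokens, unlike a free
# unit test run. Model name and cases are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

cases = [
    {"question": "What is 12 * 9?", "expectation": "The answer states 108."},
]

def ask(question: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": question}],
    )
    return resp.choices[0].message.content

def judge(question: str, answer: str, expectation: str) -> bool:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{
            "role": "user",
            "content": f"Question: {question}\nAnswer: {answer}\n"
                       f"Does the answer satisfy: '{expectation}'? Reply PASS or FAIL.",
        }],
    )
    return "PASS" in resp.choices[0].message.content.upper()

passed = sum(judge(c["question"], ask(c["question"]), c["expectation"]) for c in cases)
print(f"{passed}/{len(cases)} eval cases passed")
```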
As you're just going deeper and deeper into the AI field, what were resources that helped you? Was it just pure self-learning? Was it going to the source, to where the papers are? This is a real ongoing question, because the industry is not slowing down and there aren't many books or static resources out there. Yeah, very fair. Because things are changing quickly and there aren't static resources.
At that time, and still true today, I found it most helpful to learn by just doing. So even when I was on this team, I'd go to a lot of hackathons, internal to Coda and external. I remember there was an internal hackathon at Coda that happened to line up with the day OpenAI released function calling for the first time. And so our team played around with the power of function calling, which is a very important tool, by turning natural language prompts into appropriately identifying what third-party Coda integration you should use. So, for example, a user types in, "How many unread emails do I have?", and it should appropriately pull out the Gmail pack, or Gmail third-party integration, that Coda had.
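For readers who haven't used function calling, the rough shape of that routing idea, sending "How many unread emails do I have?" to a Gmail integration, looks something like the sketch below. The tool name and schema are my own placeholders (not Coda's actual packs), and this uses today's `tools` parameter rather than the original `functions` API.

```python
# Minimal sketch of OpenAI function calling used to route a natural-language
# request to the right third-party integration. Tool names are placeholders.
import json
from openai import OpenAI

client = OpenAI()

tools = [{
    "type": "function",
    "function": {
        "name": "gmail_unread_count",
        "description": "Return the number of unread emails in the user's Gmail inbox.",
        "parameters": {"type": "object", "properties": {}, "required": []},
    },
}]

resp = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "How many unread emails do I have?"}],
    tools=tools,
)

message = resp.choices[0].message
if message.tool_calls:  # the model chose an integration; you execute it yourself
    call = message.tool_calls[0]
    print(call.function.name, json.loads(call.function.arguments))
```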
At that hackathon, I also played around with embeddings from Pinecone to see if I could more accurately pull out the right third-party integration. So that was one way, through internal hackathons, but there were also external hackathons. I remember in SF, when Llama 3 first came out, they were hosting a fine-tuning hackathon. So I went.
At the beginning, they tell you what fine-tuning is and how to use it, which is great. Then there are a bunch of startups there that are building fine-tuning platforms, so they give you free credits to go fine-tune. And so then I remember building on top of Replicate and fine-tuning Llama to turn natural language into Coda formulas, which are our equivalent of Excel formulas.
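To give a feel for the data side of "fine-tuning Llama to generate Coda formulas," here's a minimal sketch of preparing prompt/completion pairs as JSONL, the general shape most fine-tuning platforms accept in some variant. The example formulas are made up for illustration and aren't exact Coda syntax.

```python
# Minimal sketch: write a JSONL dataset of prompt/completion pairs for
# instruction fine-tuning. Formula syntax is illustrative only.
import json

examples = [
    {"prompt": "Sum the values in the Amount column",
     "completion": "Sum(thisTable.Amount)"},
    {"prompt": "Count rows where Status is Done",
     "completion": 'Filter(thisTable, Status = "Done").Count()'},
]

with open("formula_finetune.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")

print(f"Wrote {len(examples)} training examples to formula_finetune.jsonl")
```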
So learning by doing, to me, was the most effective way when things are changing so quickly. And even though hackathons are the most effective, you know, reading blogs, some papers, Twitter to see what other people are doing did help. There are a lot of open-source companies. I remember back in the day, LangChain had lovely documentation on how to do RAG when it was first getting popularized. And so reading what other people are doing, even though it's informally written, it's not a textbook, it's not a course, has been very informative as well. Nice. Well, yeah, I guess this is so new, you just need to figure out what works for you, try a bunch of stuff, and see what sticks. And also it changes, right? So whatever works now might not be as efficient later. Totally. Yeah. And there are books coming up. I remember you interviewed Chip, and she has a lovely book on how to build as an AI engineer.
Yeah, yeah. She actually captured a lot of the things that are not really changing anymore. So that's also changing. And I think, you know, we'll now see courses come out. Andrej Karpathy is doing some really, really in-depth courses if you have the time, which honestly doesn't sound like a bad time investment.
Yeah, exactly. With Zero to Hero. So at Coda, what was your favorite project that you built using AI tools or your favorite AI product? A project that's very close to my heart from Coda is Workspace Q&A. So maybe to set some context, at Coda, a very common customer complaint was that I have so many documents with my internal know-how of company documentation, but it's hard to find that document when I need it.
And around November 2023, RAG, retrieval-augmented generation, was getting popularized. And it struck our team that we actually had all the tools in place to build a chatbot that would solve this problem. First, we had a team that had just redone our search index, and they put a lot of hard work into redoing that search index. Second, we had the infrastructure in place to call LLMs reliably. And third, we had a chatbot that allowed you to, in your Coda doc, chat with an LLM. With those three things, I was able to just glue them together in a couple days and build a version one of a chatbot that lets users ask questions about the content of their workspace.
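For context on what "gluing those three things together" can look like in its simplest form, here's a rough RAG sketch: retrieve the most relevant documents from an existing search index, then have the model answer from them. The `search_index.query` call is a placeholder for whatever in-house search API a product already has, not Coda's actual one.

```python
# Minimal RAG sketch: pull relevant docs from an existing search index,
# then ask the model to answer using only that context.
# `search_index` is a hypothetical stand-in for an in-house search API.
from openai import OpenAI

client = OpenAI()

def answer_from_workspace(question: str, search_index) -> str:
    docs = search_index.query(question, top_k=5)       # hypothetical internal API
    context = "\n\n".join(d.text for d in docs)
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "Answer using only the provided workspace documents. "
                        "If the answer is not there, say you don't know."},
            {"role": "user", "content": f"Documents:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return resp.choices[0].message.content
```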
Oh, nice. So I put that, you know, on Slack with a Loom. And to my surprise, our CEO Shishir started taking interest in this and responding to that thread. He saw a grander vision where Coda could create an enterprise search tool. So it's not just searching documents, but all third-party integrations, which Coda had a lot of. So ideally, you know, a sales team should be able to come in and say, what's my projected ARR for an account? And it pulls in from your Salesforce integration and answers that question for you. So that was exciting. And he basically tasked a couple of us to experimentally prove out that Coda could build this in four weeks. Oh, nice. And a good challenge. Yeah, it was a good...
daunting challenge. It was me, my manager, the CTO, a designer, and a PM. And it was short deadlines and high stakes because this was going to be demoed to important people. It was very much all hands on deck. On one day's notice, we flew to New York to hack together. And it was nights, weekends, whatever it took to make this work. It was a very exciting time. And I think a lot of...
blood, sweat, and tears behind it. But the TL;DR is that it did go very well, and it became the birth of a second product for Coda called Coda Brain. From January to June of 2024, we had a much larger initiative where 20 people were now working on it. And it was taking that version the small team had built and making it a more robust thing, which is a very hard challenge in and of itself.
And the cherry on top was that Coda Brain was presented at Snowflake Dev Day in the keynote. So it was just a very exciting time, to be a part of it from day one and then have the world get to see it at a large scale. Yeah, so I'm just taking notes on how amazing it is that, you know, you joined Coda as a new grad with no experience in AI engineering, and, frankly, you had less experience than a lot of the experienced software engineers, I mean just in years of experience. But from the first day, you kept track of the industry, you saw this exciting thing coming out, ChatGPT, you tried it out, you were convinced this was going to be interesting and fun. You asked your manager, when Coda started a team, to join. They said no, and you just went and learned, and in a matter of a few months you probably leapfrogged a lot of the people who were just kind of waiting, or not necessarily being as active as you were. You got onto this team as an early engineer, and, you know, a year later, when 20 people were working on this at Coda, you were still one of the earlier ones. So it just shows me how, like you were saying, not waiting for permission really pays off, and you can just do things, you can learn things. And especially for an innovative technology like AI, and whatever we see next, it's actually valuable. Like, a company will value this kind of thing because it helps them, and they desperately need people like you were in this case, or other folks who are doing similar things. What is really cool is that it's so new, so it definitely...
levels the playing field between all sorts of seniorities, because nobody knows what the right way is. And so we're all just figuring it out together. And that's what makes it definitely more exciting. Yeah, I feel there are two things here. If you're someone who already has some experience, may that be one year or 10 years or 20 years, that experience will eventually be applicable. Once you understand how this works, you can take that past experience and see how it applies. And if you don't have experience, it's actually not a bad thing, because you're coming in with a fresh mind.
You will probably not have some of those biases. For example, a lot of software engineers who have 10-plus years of experience, who built production systems, will know that unit testing and automated testing is super efficient and a very good way to do stuff. Now, with AI systems, that's not necessarily the case, because they're non-deterministic, and for large-scale systems, things like monitoring or checking evals might be a better way. I'm not sure which one it is, but not having that bias could actually speed you up. So either way, there doesn't seem to be any downside in just figuring it out and mastering this tool, because it is a tool at the end of the day. Yeah. It's a new tool in our... It's honestly a magical superpower, because now it just unlocks so many things that you can do on top of it. Yeah, but I feel it's a bit like, you know, the Harry Potter one. Like when you watch the movie, it's like...
You know, at first it sounds magical when you read the book, like you can do all these spells. But if you're a hardcore Harry Potter fan, you will know that there are only certain spells that you can do, and, you know, there's a certain thing that you need to say. And so there's a whole mechanic around it. And like every fantasy book, when there's a magical world, there are rules, and there are people who can master those rules. And it feels a bit the same here, right? At first it's magic, but actually it has rules, and once you learn them, you can be this, you know, sorcerer who can... Yeah, exactly. This episode is brought to you by Cortex.io. Still tracking your services and production readiness in a spreadsheet? Real microservices named after TV show characters? You aren't alone.
Being woken up at 3am for an incident and trying to figure out who owns what service, that's no fun. Cortex is the internal developer portal that solves service ownership and accelerates the path to engineering excellence. Within minutes, determine who owns each service with Cortex's AI service ownership model, even across thousands of repositories. Clear ownership means faster migrations, quicker resolutions to critical issues like Log4j, and fewer ad-hoc pings during incidents. Cortex is trusted by leading engineering organizations like Affirm, TripAdvisor, Grammarly, and SoFi. Solve service ownership and unlock your team's full potential with Cortex. Visit cortex.io slash pragmatic to learn more. That is C-O-R-T-E-X dot I-O slash pragmatic. So then you had a really great run at Coda, and then you did something interesting.
You decided to look around the market, and you blogged about this. You interviewed at 46 companies. Did I get that right? Yes, but there's context behind that. I'd love to understand how you went about interviewing, specifically for an AI position. What did you learn about what the market is like, what interviewing is like, what the whole scene is? And if you can give a little context on where you did this, location-wise, types of companies, just to help us all understand this. Sure. Maybe just to give a little bit of context, it was over a six-month period, and in the first half I wasn't closing them, I was getting no's as I was getting revved up on my LeetCode and system design prep. After that, the interview process did take longer than I expected, though.
Because the AI space is especially noisy right now. And when I was trying to do my due diligence, like we were talking about earlier, there were often open questions that made me feel uneasy about the growth potential. And the advice I got from a mentor was that if it's not a "heck yes," and if you have savings, don't join. It's not fair to the company or you. So that was how I thought about this. In terms of the space, it was clear that there are the product companies, infrastructure companies, and the model companies. I found it helpful to put companies in each category and figure out which segment you're most excited about, to help narrow down the options, given that there are so many AI companies right now. Could you give just an example of each, especially with the infra and model ones? I think it might be a bit...
I'm interested in how you're thinking about that. Yeah. Product companies are the companies building on top of the model. Here I think of Cursor, Codeium, Hebbia. Infrastructure companies are the companies building the tools to help AI product companies effectively use LLMs. There's a whole suite of these. There are the inference providers like Modal, Fireworks, Together; vector database companies like Pinecone, ChromaDB, Weaviate; eval and observability tools like Braintrust, Arize, Galileo; and a whole other suite of products. And then there are the model companies, which are the base of the ecosystem, building the intelligence. You have the big tech companies like Google and Meta building models, and then you also have startups, or, not startups, other smaller companies like OpenAI and Anthropic building models as well. I think it's a really good way to think about it. And again, I don't think many of us have verbalized it like this. This also goes back to...
not many people have necessarily gone through it. I will say this is not something that I came up with myself. Yash Kumar, a mentor, pointed out that you should look at the space like this. And that's how I think about it now. Wonderful. And what did you learn about each of these companies, in terms of the interview process, what the vibe was like, generally, and also how you personally felt about it? Because, as I understand it, where you were, Coda, we can put them in the product category. Sorry, the product company category. So for me, in trying to be more focused in my search,
I decided to focus on model and infrastructure companies because I wanted to keep getting breadth in what I was doing. And I felt like the product companies were too similar to my experience at Coda, which was phenomenal, but I wanted to keep growing. And the trade-off was definitely that it's a bit more of an uphill battle, because the work that I had done was not as relevant to model or infrastructure companies. In terms of the vibe, I think all of them are shipping really fast, have really lean teams, and are out to win. So it's a very exciting time to be looking at them. Questions I would ask myself when I was trying to figure out, is this company viable in the long run, on the infrastructure side, were: are their margins high enough, given that so many of these inference providers are also paying for expensive GPUs? So what are the margins here, especially when a good software business should have about 70% gross margins? And how easy is it to build this infrastructure in-house? You know, we know this, but engineers are a hard group of people to sell to, because if it's not complex enough, if it's too expensive, or if it doesn't work exactly how they want, engineers will just build it in-house. Google is a phenomenal example that's built so much in-house. So that's how I was thinking about the infrastructure companies.
In terms of the model companies, I was just trying to get a sense of, if they're training frontier models, can they afford to keep training them, given how expensive it is? Are they staying ahead of the open-source competition? Because if an open weight exists for a model, no one's going to want to pay a premium to get the model from a closed-source provider. It's a sad reality. It is. And I think it's interesting, because today product companies are still willing to pay a premium for the best model, even though an open weight exists, as long as the closed-source provider is ahead. Yes. And anyone who's nodding along, when they find themselves evaluating an offer or a company and trying to understand the margins, that's a hard one to do, especially as an engineer. Yeah, exactly. Where did you get data, or did companies answer some of your questions on the unit economics? These are things that companies like to keep under wraps. Even as someone who sometimes covers these companies, or is just interested in this space, even publications, like financial publications, will just kind of wave their hands, because it is hard. This is the big question. And these companies want to hide these things from the casual observer, for sure. Exactly.
I think it's totally fair for a company not to share this information until you get an offer, because it is sensitive information. I do think once you have an offer, it would be irresponsible for them not to tell you, given that you are an investor as well. And you sign an NDA, so you keep it to yourself. So I do think they should tell you.
For companies in the inference space, I would just ask, you know, how much money do you spend on the GPUs? And then, how much revenue do you have? To make rough back-of-the-envelope math of what those margins are like, and just get some sense of the business. And then I also found it helpful to read some news providers, like The Information, that do very good diligence on the economics behind different startups in the AI space.
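As a concrete, entirely made-up example of that back-of-the-envelope math (the numbers below are hypothetical, not from any company discussed here):

```python
# Back-of-the-envelope gross margin check with hypothetical numbers.
annual_revenue = 100_000_000      # $100M revenue (made up)
annual_gpu_spend = 60_000_000     # $60M spent on GPUs / inference (made up)

gross_margin = (annual_revenue - annual_gpu_spend) / annual_revenue
print(f"Gross margin: {gross_margin:.0%}")  # 40%, well below the ~70% SaaS benchmark
```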
And if I could, I would try to also ask investors who have invested in these companies, or passed on investing in these companies, because they see the deck come to them. So they have a lot more insight, too, into what the business really looks like. You're talking like an investor, or like how a senior executive would do it, which I love. I think more people should be doing this, by the way, and not enough people are doing it.
It's just very refreshing to hear. And by the way, the investor angle is interesting, because in my experience, investors, when you are applying to a company that they're an investor in, actually want to help close great people. Yes, exactly. They'll happily connect, and then you also have a connection where, a few years down the road, that investor might reach out and say, oh, I remember you, you're a business-minded engineer. You know, in the future, it's hard to tell, I think we were talking about this before, what will be in the future, but there will only be more demand for software engineers who not only know how to code, but are curious about the business, can communicate with users, et cetera. So you'll now have a stronger network. So there's only upside in doing your due diligence, and it can actually help your career. That's true. And I...
100% agree with investors being very generous with their time in wanting to chat with you and explain to you how the business works. So that's been something that's been really fun to do for sure.
And then, just going back to, this is all great when you get an offer, but how did you get to getting an offer? What did you need to brush up on in terms of interviews? Was it the pretty typical tech interviews, even though these were for AI engineering roles, the LeetCode and system design, or were there some AI-specific things? What helped you go from initially stumbling and not getting much, to, okay, you actually were getting offers now? In terms of the interview process, I definitely thought it was all over the place, as the market is trying to move away from LeetCode but still asks LeetCode. So then you end up having to study LeetCode as well, unless you know exactly where you're applying to. So there were coding interviews, system design, and then projects. Coding was a mix of data structures and algorithms, where the best way to prepare is LeetCode. Luckily, NeetCode, with an N, now exists, and he has phenomenal videos. So that was great. I believe in doing spaced repetition, so doing those questions a lot of times. Then there were front-end questions, because I'm a full-stack engineer as well. And I found that there was this resource, GreatFrontEnd, that had lovely interview questions for the obscure JavaScript questions they sometimes ask.
On the back end, I relied more on things that I had done at work for those interviews. That's the coding part. For the system design part, I thought Alex Xu's two system design books were phenomenal. Just reading those, really understanding them, doing them again and again until you understand why things work a certain way. Honestly, I love system design interviews. They're really fun, because you learn things outside of the domain that you're in as well. And then there's the third type of interview, which is project interviews: go build something in a day. And those are probably my favorite out of all of them, because you get to show how passionate you are about that specific product, and you can actually show what you can do. I do hope that as an industry, we move away from LeetCode and instead move to just project interviews, reading code, which has become way more important today, as well as debugging code. But I think we're kind of in the interim, where as an industry, we haven't fully formed an opinion here. And then most of these interviews were at the end of last year, so end of 2024 or so. Were they remote, or were some more in person already? I was in between around June of last year, and a large chunk were remote, but there were definitely interviews in person as well, which I enjoyed, because I was very much optimizing for companies that are in person. Yeah, we'll see, but I think we're sensing a trend, or I'm sensing a trend, that in-person interviews might be starting to come back, at least for your final rounds, which, by the way, might not be a bad thing. I mean, it's interesting, because before COVID, when I spent most of my career there, it was just in-person.
There are so many upsides, right? You do meet the people, you do see the location, oftentimes you meet your future teammates. For example, for me, once in London I had two offers between two banks, and in one case I met my future team, the whole team, and in the other case I didn't meet my future team, they just said, you will be assigned a team. And I actually chose, it was a lower salary, but I chose the lower salary because I really liked the people, and, you know, we just hit it off, it felt like a good connection. And back then I went through a recruiter, so the recruiter negotiated the same salary for me, which was kind of a win, I guess. But I know we will always hear people mourning the end of, or fewer, remote interviews, but there are all these upsides which, when you're committing to a place for hopefully many years, you want to have all that information. 100%, definitely. I think it's energizing on both ends, for sure. It's a great point. And so in the end, you joined OpenAI, right? Yes, I did. Congratulations. Thank you. And then, can you share what kind of general work you do at OpenAI? Sure. So I work on safety as an engineer
at OpenAI. And OpenAI's goal and mission is to build AGI that benefits all of humanity. On safety, we focus on the suffix of that statement, so benefiting all of humanity. Some things I work on are, A, small, low-latency classifiers that detect when the model or users are doing things that are harmful, so that you can block it live. So that means the training, the data flywheel, hosting these models at scale. The second thing that I get to work on is measuring
when the models are being harmful in the wild. And there are a lot of dual-use cases over here, but really trying to get a sense, as these models become more capable and people are figuring out different ways to jailbreak them and exploit them, of what those unknown harms are that we don't know of with more powerful models, and then distilling that into small classifiers. There are also, on my team, a lot of safety mitigation services that we own, and so part of our work is to integrate them with all the different product launches. And as you know, there are a lot of different product launches, so that definitely keeps our team busy.
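To give a rough flavor of where a low-latency safety classifier sits in a request path, here is a generic sketch. It is emphatically not OpenAI's internal system; the public moderation endpoint is used only as a stand-in for a purpose-built classifier.

```python
# Generic sketch of a safety check in the request path: classify the input,
# block if it's flagged, otherwise continue. The public moderation endpoint
# stands in here for a small, purpose-built low-latency classifier.
from openai import OpenAI

client = OpenAI()

def handle_request(user_input: str) -> str:
    result = client.moderations.create(input=user_input)
    if result.results[0].flagged:
        return "This request can't be processed."
    # ...otherwise pass the input on to the model / product feature...
    return "ok"
```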
And that's just the tip of the iceberg. There are a lot more things that we work on in safety. I mean, this sounds very interesting, because when I worked on payments back at Uber, we had a team called Fraud. And oh boy, they had so many stories. Just talking with them, you would think payments is pretty simple, like, oh, you just need to pay, but then the edge cases are always interesting in every area. And the same thing, I guess, with LLMs. I mean, LLMs are not as simple, but once you realize how they work, next-token prediction, it sounds pretty simple. But then the edge cases and all the things that could go wrong, et cetera. It sounds like you're kind of in the middle of that, with a very good vantage point, and actually in the details. You've now worked at Coda, you've interned at Google and Microsoft, and you've talked with mentors about what other places are like. What are things that you feel are just very distinctly different about OpenAI compared to other companies? I think what makes OpenAI unique is the mix of speed and building for scale. You know, at startups, you get that speed of iteration, and it's so fun. And then at bigger places, you get to build for scale.
But OpenAI is in a very unique spot where you have both at the moment. Things move really fast and you have huge amounts of users. The service that I work on, you know, gets 60K requests per second. And you just think, normally you get one or the other, and it's really fun to get both. The second thing that I think is quite unique for a company of this size is the open culture. People are very open to answering questions on...
why and how things work a certain way. So it's a great place to learn, which is something I didn't realize from the outside. And then third, people are just very passionate about the mission, work really hard. And I don't think this is unique to OpenAI in and of itself. All companies, I think, where great work is happening, people are like this. But it's just never a boring day in the office because people care so much and are constantly shipping. Yeah. And then talk about shipping.
I'm assuming you've shipped some things to production already, but how can we imagine a thing, a project, an idea making it into production? Right, like there are very bureaucratic companies, you know, I don't want to say old Microsoft, maybe not today, but where there's a very strict planning process, then Jira tickets are created by the PM, the engineers have to pick them up, then someone else might actually deploy it. So this is the super old-school and slow way, and the reason why some engineers don't like it. What is it like? You mentioned it's fast, but what was your experience in getting things from idea to production? And is it multiple teams? Can one person actually do it? Is it even allowed? I don't know. I think it's very much allowed and very much encouraged. There's been...
publications about how Deep Research came to be, where it was an engineer hacking on something, presenting it to the larger C-suite, and it now becoming a full, very impactful product. So it's definitely encouraged, which I love. I too have had a similar experience, and it's very encouraged to come with ideas
and actually drive them forward. Just strictly from your perspective, what do you think is one thing that stands out about how OpenAI can actually still ship so fast? Because it feels like it defies a little bit the laws of a growing organization, which eventually slows down at one point. I'm sure it will, but there are no signs of this happening so far. My observation is that the systems are built to enable you to ship fast, and they give engineers a lot of trust, even though it comes with the downside that sometimes that can lead to outages. To put this very concretely, when I joined, you could make Statsig changes without an approval. So you have trust to go in and flip a flag to turn something on. That's no longer the case, you need one reviewer. But the service that I get to work on has 60,000 requests per second, and you get to deploy with one review, immediately. So my observation is that there is truly trust put in engineers to work quickly and not have a lot of red tape around shipping fast.
Yeah, I think this just goes with a kind of unspoken expectation that expectations will be very high of the people who come in here, because you cannot hire an engineer who is used to
only doing a small part, not used to thinking about the product and the business impact and all those things. So I have a sense that what you're doing might be kind of a given for you, but in the industry it might be more common to expect that engineers are just, you know, we used to call it wearing more hats, but
it's just how it is: you're kind of merging a little bit of PM, a data scientist, and an engineer all in one, and these are the type of people who can actually make something like OpenAI or similar companies work so well with this many people. Yeah. And I just think with intelligence today, the roles between data science,
engineering, back-end, front-end, and PM blur so much that each individual, whether at OpenAI or not, is expected to do more of that, because you can get help from a very capable model. And I think that makes it very exciting for us, because it means that we can truly be full-stack engineers and go from an idea to launch very quickly. Absolutely. So what are some things that you've learned about
AI engineering, the realities of it? Because it's a very new field. And what are some surprising things that you didn't quite expect? One thing that I've learned, that I didn't realize coming in, was how much of AI engineering is about building solutions to known limitations of the model. And then, as the model gets better, you scrap that work and build new guardrails. Let me give you an example from Coda.
In the pre-function-calling days, we wanted a way to get our model to take action based on what the user said. Function calling didn't exist, so we prompted the model to return JSON, parsed that, and deterministically called an action
based on that JSON block. Then OpenAI released function calling. Okay, scrap that and instead integrate with function calling. But, you know, back in those days, function calling was not very reliable. And now, today, we've moved from function calling to the MCP paradigm.
So things are changing very quickly and the models are getting better, but they're still not perfect. The moment you get more capability, there are more engineering guardrails you need to build to make sure they work reliably at scale.
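As a rough illustration of that pre-function-calling pattern, here is a minimal sketch; the prompt, action names, and model name are hypothetical stand-ins rather than Coda's actual code, and it assumes the current openai Python SDK is installed and configured.

```python
import json
from openai import OpenAI

client = OpenAI()

# Hypothetical actions the assistant is allowed to trigger.
ACTIONS = {
    "add_row": lambda args: print(f"Adding row: {args}"),
    "send_email": lambda args: print(f"Sending email: {args}"),
}

SYSTEM_PROMPT = (
    "You are an assistant that only replies with JSON of the form "
    '{"action": "<add_row|send_email>", "args": {...}}. No prose.'
)

def run(user_message: str) -> None:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # stand-in model name
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": user_message},
        ],
    )
    raw = response.choices[0].message.content
    try:
        parsed = json.loads(raw)  # the fragile part: the model may not return valid JSON
        ACTIONS[parsed["action"]](parsed["args"])  # deterministic dispatch on the parsed action
    except (TypeError, json.JSONDecodeError, KeyError):
        print(f"Model did not return a usable action: {raw!r}")

run("Add a row for the Q3 planning meeting")
```

With native function calling (and later MCP), the schema and the dispatch plumbing move into the API itself, which is exactly why this kind of hand-rolled guardrail gets scrapped as the models and APIs improve.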
Yeah, and I guess you need to become comfortable with throwing away your work when the model gets there. You just need to not be as attached to it, because I think there's a little bit of this, especially when you're used to things not changing as much in software engineering. So, you know, it's not a waste, it's a learning. Yeah, and it's just become easier now,
or cheaper, to produce code. And so you see this expansion-and-collapse phase happen a lot, where you build a lot of features, see what works, and then collapse to what works and restart. There's a lot of scrapping your work as the creation of code becomes cheaper.
It's easier not to be attached when an LLM also helped generate that code. Yeah, I think this will be a big change, a good change, once we get used to it. Yes, exactly. Now, when it comes to AI and junior engineers, you're such an interesting example in the sense that you started your career a little bit before AI took off, but you also transitioned without decades of experience just yet. What is your take on how
gen AI will impact new grads and people who are still in college? Because there are two takes, and they're both very extreme. One is that engineers with ten-plus years of experience often just feel like, oh, I feel so sorry for these people: they're not going to get jobs, and even if they get jobs, they're going to depend on AI, they're not going to read the books, they won't know what it was like back in our day. So there's that
take. And also, some people are generally worried that, well, you can now outsource so many things to AI; they're thinking, okay, maybe these new grads can pick up things really quickly, but maybe they're never going to get to that depth.
I think they're both extreme. I'd love to hear how you see it, because you're kind of seeing this firsthand. Definitely. And you're right, from my experience level I get insight into what both of those engineering lives are like. And currently, I'm not convinced that AI is going to be disproportionately worse for junior engineers. In fact, I think it allows everyone to move higher up the stack and
be more creative in what you're building. You empower younger engineers to just do more, propose ideas, and actually ship them. I do subscribe to the take that there will be people that use AI to learn and people that use AI to avoid learning. I would say that there's actually room for both things to exist, and you should be doing both.
I personally think that when you're working on a greenfield project, trying to prove a vision that something should exist, why not skip the learning and vibe code it, to actually get a real product that you can validate, and then go on to build it for real as a new product line. But I don't think you should skip the learning when you're trying to build a robust system that you are the owner of. Because when shit hits the fan and you're in a sev,
AI doesn't help that much, because it doesn't work very well at going between the high-level systems view and reading the logs. So when you own the code, I think you should use AI to learn, to understand all the edge cases of why things work a certain way. So I think there's room for both. It's going to be an important skill for us to learn when we should outsource the doing versus when
we should use AI to make ourselves better and stronger engineers. Yeah. And I guess there's probably not too much harm: if you don't understand it, spend some time to understand it, and AI will typically help you do this faster. I'm not sure if this is to do with personality or curiosity, but
we've seen this before, by the way. Let's say ten years ago, when I was maybe a mid-level engineer, I saw new grads join the workforce, and we were by then using higher-level languages like JavaScript or Python or TypeScript. Or take the example from a few years ago: new grad engineers start with React. And when you start with React, JavaScript, and TypeScript, a lot of people who
haven't studied computer science and didn't do assembly or C or C++ or those kinds of things can just learn React, stay there, and figure out how to use it. But the better developers have always asked: why does it work like this? What happens underneath? What is a virtual DOM? How can I manipulate it?
And you look at the source code. I feel there have always been people who do this, and they're just better engineers eventually: they can debug faster, they ask why, and
they still stand out. So I think in this new world we will just have this, and I don't think this trait will die out. In fact, to me you're a great proof of it: you go deep, you understand how things work, and then you decide, okay, I'm going to use it to my advantage right now,
I just want to go fast because I know what I'm doing already. Yes, I do think that's spot on and that we've had this in the past. It will become even easier to outsource that intelligence.
And in some sense, be lazy. So I think we'll just have to be more intentional as engineers to make sure that we are actually going deep in the cases where it really matters. And from what you're saying so far: you've seen what it was like before gen AI tools, you've been a product engineer working with AI, and you're now working at a model company. How do you think
these tools will change the software engineering that you have been doing before? And how are they already changing your day-to-day work? In terms of what doesn't change: we still have code as the way innovation manifests. You go from idea to code to iterate; that's the same. As engineers, you still need to know how high-level systems work and design them very well. You have to debug code really well, and you have to be really good at reading code. So to me, that all stays the same.
What's changed, I think, is the division of responsibilities between PM, designer, and software engineer. I was talking to a friend at Decagon; I think they were telling me there are 100 people and they still don't have a designer, because product is just expected to do the design as well.
As a software engineer, this has always been true at startups, but now more than ever you're expected to do product work as well. We talked about this earlier. What also changes is that software engineers become more full stack. You don't outsource work to another adjacent role like a data engineer; you're expected to build those data pipelines yourself. I also think what's changed is that we need to be better at articulating our software engineering
architectures and thoughts, because you are expected to prompt models to do this. And the engineers that will be most efficient are the ones that can see the big picture, write a great prompt that also catches the edge cases, and then have the model implement it. It's like the best engineering managers, who are able to zoom in and zoom out really well: being able to zoom out and prompt what you need done, but then zoom in when actually reading that code and catch potential bugs, instead of just
relying on the LLM to be 100% right in all cases, because there will be edge cases unique to the system that you're building that the LLM is not aware of, and you need to be able to catch those when you're reading the code. Yeah, I feel like if you have a mental model, and I see this so much when I'm using these tools, when I'm either vibe coding or prompting, when I know what I want to do, when it's in my head, either
because I know my code base, or I know what I want to do, or I just sat down, thought it through, and drew it out, I'm so fast, I'm great. And I can switch between modes: I might do an agentic mode until I generate it, maybe I like it, maybe I don't, then I just do it by hand. It doesn't matter, I get there, I know where I'm going. But when I don't,
I did this where, oh, I tried to vibe code a game and I failed, because I just didn't know what I wanted to build. And, you know, when your prompt doesn't say, oh, do this, then I don't give it guidance. Yeah, no, definitely. It wasn't the fault of the tool. It was just, you know, I didn't know what I expected. Like, how would this thing know? It's non-deterministic, but you need to give it some direction.
Exactly, for sure. And on that point, I also think that it's great when you're zero-shotting and doing greenfield work, but today, and I think this will change, it's not the best at working in large codebases. And as engineers, you are always working in large codebases when you're building things for prod. And so that part of our job hasn't changed:
being able to find the right place the code should go, use the right modules that exist, and piece it together in a larger codebase when you're adding a feature.
Yeah, and also just the simple stuff which we take for granted: setting up the tools to run the tests, knowing how to deploy, knowing how to control the feature flags, how to safely put something out so it doesn't go to prod if you want to A/B test. Once you're onboarded, this is kind of a given, but if you work at a place that has microservices or whatever, it's all... and I feel like there are so many other things. But I love
how you summarized what will not change, because I think that is really important. And I love how you brought up software architecture. I've been thinking about this recently; in fact, I've started to read some
really old software architecture books, because there are some ideas that I'm not sure will change. I want to assess this theory, but it might not change as much. What software architecture books are you reading at the moment? I'm going through The Mythical Man-Month; I've almost finished this, this is the real one. And then I have this one from the 90s: it's called Software Architecture, and it's by Mary Shaw
and David Garlan. And Grady Booch, who's a legend in software engineering, and whom I interviewed, said he thinks this is the single best book in software literature. Now, it's very thin, and
it's, I think, from 1995 or so. 1996. It was just 30 years ago. I've just started to read it, and I'm interested in what the things are that might not have changed. Clearly some things will be dated, right? Like they're talking about CORBA, which is this old distributed-object standard that we don't use anymore. But for some of the other things, there's a lot of reflection comparing it with civil engineering, and this book was written when there was no real
software architecture as a field. So they tried to define it, and I'm thinking there might be some interesting ideas there. So I'm mostly interested in what has not changed. Yes. No, that's
a very nice approach: actually looking at history to see what hasn't changed from then to now, and then extending that line to what won't change in the future. I'd be very curious to hear what you learn from that. Well, and also reinventing, right? Because I feel we will have to reinvent some parts of the stack, and I think it's important to understand that.
And also, I feel like for the past ten years or so we've not talked too much about software architecture, so maybe there is a little bit to learn from other people's ideas. So, to wrap up: how about we wrap up with some rapid questions? I just ask a question and you answer. Okay, let's go for it. So first of all, what is your AI stack? For coding, Cursor. For hackathons, Deep Research, to get a sense of what libraries already exist.
ChatGPT is my default search and also my tutor, plus some internal tools to quickly do RAG over company documentation when I'm trying to find something. So what is a book that you would recommend, and why? The book I'd recommend is The Almanack of Naval Ravikant. I've read it a couple of times. I'm a big fan of the way he talks about building your life, both in a very pragmatic way about how you should approach your career, but also in terms of just how to be happy.
And what is a piece of advice that made a big difference in your professional career? Don't wait for someone to give you the opportunity to go work on something. Go work on it. Love it. So, Janvi, this was so nice to have you on the show. Thank you so much for even having me in the first place.
Thanks very much to Janvi for this conversation. To me, talking with her was a great reminder of how, in a new field like gen AI, years of experience might be less relevant than teaching yourself how to use these new technologies, like Janvi has done. It's also a good reminder of how it's never too late to get started. Janvi thought that she was late in 2022 because she was five years behind every AI researcher who had been using transformers since the architecture was released in 2017.
And yet, Janvi is now working at OpenAI, the company that has arguably done the most in utilizing transformers and LLMs. For more in-depth deep dives on how OpenAI works, coming from the OpenAI team, and for practical guides on AI engineering, check out the Pragmatic Engineer deep dives, which are linked in the show notes below.
If you enjoyed this podcast, please do subscribe on your favorite podcast platform and on YouTube. A special thank you if you leave a review, which greatly helps the podcast. Thanks and see you in the next one.