My name is Koen. I'm at Oracle CloudWorld and NetSuite SuiteWorld in Las Vegas this week, and we're now talking to Brian Chess. He's SVP Technology and AI at Oracle NetSuite. Welcome, Brian.
Thanks. Good to be here.
I would like to talk to you about the development of AI within SaaS solutions and the challenges and opportunities that brings to software.
Maybe we can start a bit with what your role is at NetSuite, because your title says a lot, but maybe it's better if you explain in simple words what you do all day.
Well, so I started at NetSuite a long time ago.
I actually started way back at the beginning, in 1999, when it was NetLedger and I was a software developer. Through a whole lot of changes and a little bit of coming and going, once upon a time I was in charge of cloud operations, which is everything about how we deliver the NetSuite service. And I think it was, I don't know if it was a natural outgrowth or a less than natural outgrowth, from that where I started doing AI work, because it was all about the data, and in cloud operations we were taking care of the data.
And then, since then, I've actually started working on the NetSuite foundation, which includes the NetSuite platform too, so everything related to how people build their own applications, extend NetSuite, and connect NetSuite to other applications.
Yeah, with the ISV partners and such, probably.
True, yes. The platform is how we connect to those ISVs.
Okay, and these days you're primarily working on AI, I'm guessing.
Quite a bit of energy and attention go into AI. I mean, there is the core of AI: there is the algorithm, there is getting the data there. But then there's a lot that you need to do in order to attach the AI to the product and make it actually useful for people.
Yeah, so how should we look at it?
Are you working on AI features, or are you even down to the core technology, developing, I don't know, industry- or domain-specific LLMs even?
So, as part of Oracle, we get to work very, very closely with the Oracle Cloud Infrastructure teams, and so we get to look at it from the application layer.
We're not developing our own foundation models, for example, but we are giving feedback to those foundation models about what we want and what we need. Multilingual is a fantastic example. When we got started a year ago with Gen AI, it was good at English; it was not great at a lot of other languages.
We've come a long way in a year, and so now multilingual support is something we're pretty good at, and part of that is because of the feedback that we provided to the OCI GenAI team.
So you're more focused on the application, so you're talking to all your product managers about how you can develop AI features to make the product even better.
That's true. We do develop models of our own based on customer data.
So, for example, the model that we use to figure out how you might upsell a customer, what else they might want to buy, that is a model we develop based on a customer's data and on their customers' behavior, and it's unique to every customer that we have, so we're not combining data between customers.
No, okay. You name an interesting point, because you're generating summaries and generating tables and graphs, et cetera.
Based on the new Gen AI stuff. When I talk to the big vendors that are developing those foundation models, they always say they're really good at generating text and understanding text and creating things, but they're really bad at math.
So I wondered, because NetSuite has a lot of financial products, and all those product managers are probably coming to you like, hey, can you add some AI flavor to this? But if they're bad at math, how do you work with them? Or is that a challenge?
So the large language models used to be pretty terrible at math. Now they're not so terrible. They're still not good enough for us to turn the math over to them, though. So, yes, we don't want to depend on a large language model to do math.
One thing they have gotten a lot better at, though, is the use of tools. So in Suite Analytics Assistant, for example, we're using a large language model so the user can tell us what they want, and then, instead of having the AI add up the numbers, the AI knows how to talk to the NetSuite application and say, give me a picture that kind of looks like this, and then all the math gets done correctly there.
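The pattern Brian describes, where the model delegates arithmetic to deterministic application code, can be sketched roughly like this. Everything here is hypothetical: the function names, the data, and the tool-call structure are invented for illustration and are not the actual Suite Analytics Assistant API.

```python
# Sketch of LLM tool use: the model only emits a structured request,
# and the application does the arithmetic deterministically.

def fake_llm_plan(user_request: str) -> dict:
    """Stand-in for an LLM that translates a request into a tool call.
    A real model would produce this structure from natural language."""
    return {"tool": "run_report", "args": {"metric": "revenue", "group_by": "quarter"}}

SALES = [
    {"quarter": "Q1", "revenue": 120.0},
    {"quarter": "Q1", "revenue": 80.0},
    {"quarter": "Q2", "revenue": 150.0},
]

def run_report(metric: str, group_by: str) -> dict:
    """Deterministic application code: the math happens here, not in the LLM."""
    totals: dict = {}
    for row in SALES:
        key = row[group_by]
        totals[key] = totals.get(key, 0.0) + row[metric]
    return totals

TOOLS = {"run_report": run_report}

def answer(user_request: str) -> dict:
    plan = fake_llm_plan(user_request)
    return TOOLS[plan["tool"]](**plan["args"])

print(answer("Show me revenue by quarter"))  # {'Q1': 200.0, 'Q2': 150.0}
```

The design point is that the model's output is only ever a plan; the numbers in the final report come from ordinary, testable code.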
Yeah, because a financial report with bad math is not good for business. You have developed many AI features in the last couple of years, I don't know how long you've been working on it, but probably a little bit longer. Is there one that was a real challenge, and you're happy that it finally came true?
So we have been working on AI for a number of years, and the main thing that I see is that our ability to deliver AI features gets faster and faster and faster.
Sometimes it's because we're using a foundation model, and so we can plug that model in and it's relatively easy to adapt it to the scenario. But also we're getting better and better at developing our own models and figuring out what it takes to deliver AI quickly, to the point where, when we sit down and start to plan out SuiteWorld.
One of the questions we're asking now is: are we overwhelming people? There are so many different directions that you can go.
We're starting to need to guide people a little bit in terms of how they adopt, and so we just added this Advanced Customer Support AI playbook so that we can guide customers through how they can take advantage of the AI features, because the menu has gotten long enough that people want some help.
Yeah, there are many AI features. Adoption is a thing, because you're changing business processes by adding AI, so people have to work a bit differently.
Sometimes yes, sometimes no. If you are going to build a customer churn prediction model, and so you want to look at which customers are at risk, that might introduce a new process.
If you're using something like Text Enhance, where all you're doing is writing what you want and then pressing a button to clean it up, that's not really much of a process change. So I think you're correct that adoption is an issue, but it's not in every case of AI; it's only in some of them.
Do you measure adoption or user experience, how people are using the AI features you've built over the last year?
Oh, absolutely.
I mean, one of the reasons why I got into doing AI at NetSuite is because I'm a believer.
I'm a believer that there is true business value here, extreme business value here, and we're going to unlock it a piece at a time, so it's going to be an adoption curve. But we certainly have to keep looking at what we've built in order to make sure people are actually benefiting from it and using it.
Oh yeah, because if they don't use it...
Yeah, so maybe there are two problems there. First of all, we've got to get people excited enough about it that they try it, and then we've got to make sure that it does what we promised it's going to do and that value actually arrives.
Do you look at when people delete the outcome and make their own outcome, or something? How do you measure that?
Because it needs to be successful as well.
So Text Enhance is my favorite example here. We built the feature and we thought we were done, because we'd done the hard part: we tied a language model into the application. We were like, cool. And then we tried it out on some people and realized we had to add an undo button. And the reason we had to add an undo button is because people were taking a risk when they used it. We wanted to say: this is safe, it's okay, you can try this, because if you don't like what happens, you can just undo it.
Well, it turned out that undo button also gives us great insight into whether people like what they get. If we see a lot of undos, we know maybe the right thing is not happening.
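Using undo events as an implicit quality signal, as Brian describes, could be instrumented along these lines. This is a made-up sketch: the event format, feature name, and alert threshold are all invented, not NetSuite's actual telemetry.

```python
# Treat "undo" as implicit negative feedback on an AI feature:
# a high undo rate suggests the generated output isn't landing.

from collections import Counter

def undo_rate(events: list[tuple[str, str]], feature: str) -> float:
    """events: (feature_name, action) pairs, action is 'generate' or 'undo'."""
    counts = Counter(action for name, action in events if name == feature)
    generated = counts["generate"]
    return counts["undo"] / generated if generated else 0.0

# Invented sample telemetry: 4 generations, 2 of them undone.
events = [
    ("text_enhance", "generate"), ("text_enhance", "undo"),
    ("text_enhance", "generate"), ("text_enhance", "generate"),
    ("text_enhance", "generate"), ("text_enhance", "undo"),
]

rate = undo_rate(events, "text_enhance")
print(f"undo rate: {rate:.0%}")  # undo rate: 50%
if rate > 0.3:  # hypothetical alert threshold
    print("high undo rate: review feature output quality")
```

The threshold would be tuned per feature in practice; the point is that a safety affordance doubles as a measurement channel.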
Okay, does that also influence your whole thought process when you think of new AI features? Because I think that can be really hard. You know, "let's add AI" is a very simple statement, but finding the right feature with the right AI model and the right functions is a whole process.
Getting everything to come together. We need the underlying AI technology, we need a place to put it in the application, and then we need a way for the users to actually connect with the stuff that we're doing. So all three of those elements are critically important.
Yeah. I think you're in a leading role at one of the bigger SaaS vendors, and with Oracle especially, it's one of the largest software companies in the world.
Which really seems strange to me, given where I started, when it was just a couple of guys sitting around in an apartment.
Yeah, I can imagine. But you probably have way more opportunities than your peers who work for a lot of smaller SaaS vendors. Are there learnings that you can share with them? Maybe that they should focus on certain things and stay away from other things? Like, developing your own LLM is kind of a takeaway.
Don't do that if you're a small company. I hope that's a pretty obvious one, because the amount of money that it takes to train one of those foundation models is pretty extreme.
Yeah, okay. But what would you...?
The advice that I would give to technologists who are trying to build AI features for business software: AI features can be a little bit tricky, because sometimes the first thing you try, like creating a model out of this and that because it sounds good, doesn't pan out. When you're doing traditional software development, you have a very good guarantee of an outcome. You know that thing is buildable. With AI, sometimes the first thing you try doesn't work, and I guess what I would say is: don't give up. It can be done, it is being done, so stay at it.
Is that the prompt engineering part, you're saying, that can help a lot? So prompt engineering is a place that you would go? Or am I talking about RAG in this case?
Well, RAG is a perfect example of something that does not always do what you think it's going to do the first time, but it really can be made to work.
We've got great demos out on the show floor showing RAG at work, but it wasn't so hot the very first time we turned it on. It took some playing around, and we're in a new frontier, so there's not a book you can go buy that will tell you exactly what to do, but the payoff is still worth it.
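For readers unfamiliar with the RAG pattern being discussed, here is a deliberately minimal sketch: retrieve the most relevant document for a question, then hand it to the model as context. The scoring is naive keyword overlap and the documents are invented; a production system would use embeddings and a vector store.

```python
# Minimal retrieval-augmented generation (RAG) skeleton:
# retrieve context, then build a grounded prompt for the model.

DOCS = [
    "Invoices can be approved from the transactions page.",
    "Customer churn risk is shown on the analytics dashboard.",
    "Purchase orders require manager approval above 10000 dollars.",
]

def retrieve(question: str, docs: list[str]) -> str:
    """Return the document sharing the most words with the question
    (a crude stand-in for embedding similarity search)."""
    q_words = set(question.lower().split())
    return max(docs, key=lambda d: len(q_words & set(d.lower().split())))

def build_prompt(question: str) -> str:
    context = retrieve(question, DOCS)
    return (f"Context: {context}\n"
            f"Question: {question}\n"
            "Answer using only the context.")

print(build_prompt("Where can I see customer churn risk?"))
```

As Brian notes, the first version of something like this rarely retrieves well; most of the tuning effort goes into the retrieval step, not the prompt.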
Okay, so keep on developing and keep on testing. And how do you make sure it's ready for production? Because you can get, I don't know, 100 outcomes, and if 95 are correct and five are a bit off, do you put it in production? Or do you think testing AI-driven software is more complicated than testing traditional software?
Yeah, because then you have an if-then-else model, and you kind of have all the scenarios, but AI is a bit more unpredictable.
With if-then, you say: okay, I'll test the if and I'll test the then, and now we've tested this thing. With AI, it can be difficult to know: well, if I did things a little differently, would I get a very different output?
So we've invested significantly in internal testing frameworks in order to make sure that we've characterized things. Let's say we upgrade from one LLM to another LLM: are we going to get what we want out of that upgrade? So I think that's a really good example of what's on the frontier, that testing and quality problem.
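The LLM-upgrade check Brian mentions can be approximated by replaying a fixed evaluation set through both models and comparing pass rates. The "models" below are trivial stand-ins and the eval set is invented; a real harness would call actual model endpoints and use richer scoring than exact match.

```python
# Regression-style evaluation before swapping one LLM for another:
# replay a golden set through both models and compare pass rates.

EVAL_SET = [
    ("2 + 2", "4"),
    ("capital of France", "paris"),
    ("opposite of hot", "cold"),
]

def old_model(prompt: str) -> str:          # stand-in for the current LLM
    return {"2 + 2": "4", "capital of France": "Paris"}.get(prompt, "unknown")

def new_model(prompt: str) -> str:          # stand-in for the candidate LLM
    return {"2 + 2": "4", "capital of France": "Paris",
            "opposite of hot": "Cold"}.get(prompt, "unknown")

def pass_rate(model) -> float:
    """Fraction of eval cases the model answers correctly (case-insensitive)."""
    hits = sum(model(p).lower() == want for p, want in EVAL_SET)
    return hits / len(EVAL_SET)

old_score, new_score = pass_rate(old_model), pass_rate(new_model)
print(f"old: {old_score:.2f}, new: {new_score:.2f}")
# Only promote the candidate if it does not regress on the eval set.
assert new_score >= old_score, "candidate LLM regresses; do not upgrade"
```

This is the AI analogue of a test suite gating a dependency upgrade: it doesn't prove the new model is good, but it catches the "very different output" problem before customers do.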
Yeah. NetSuite is currently focusing on generating text and images and charts, and you're combining it with more traditional AI to find anomalies in reports or financial transactions.
I've been getting some reports from vendors that are now beginning to talk about agentic AI, which means, for our listeners, that an AI can take action and make decisions based on its learnings. Is that something you are researching as well, or do you think that's years away from being in production?
Oh, you can see that we're actually already doing things that qualify as agents, and Suite Analytics Assistant is a perfect example of it, where the AI is looking at NetSuite as the tool that it's using.
So that idea that the AI is going to talk to other tools, maybe even talk to other AIs, in order to come up with its final result: I think there's a tremendous amount of promise there.
RAG is another example. I think we started off in the industry, maybe a year ago, very focused on RAG, and now we're saying: well, really, that database is just another tool that the AI can leverage. So the horizons have broadened in that sense, and I think they will continue to do so.
So, yes, I think there's a good future in agents, and we're already getting started.
Okay, so more to come on that front.
Oh, definitely.
Are you happy where NetSuite is at this moment on the AI front, or did you hope to be even further?
Well, we can always hope for more, and I think the job of the technologist is to make sure we are always on the frontier, so there's no such thing as too fast.
But you see many vendors in the industry building a lot of AI solutions, and you're probably looking around as well. Can you decide for yourself: okay, we're on top of it, or...?
You've seen some vendors who have really decided to charge ahead and have had some pretty public missteps, and so I hope we're not going so far, so fast, that we're going to experience any of those. We want it to work, and work well, because that's what our customers expect from NetSuite.
And how far along are your ISVs in adopting the AI that you offer them? Because they can use Prompt Studio as well to build functions within their software.
So we are just announcing Prompt Studio at SuiteWorld this year. We're just announcing our very first SuiteScript AI.
You don't have any partners that already worked with it in a private beta or something?
Private beta, yes. They've already built some stuff, and in fact, out on the show floor, I think we've got half a dozen partners who have built their first applications. But that's all relatively new. I think we're going to see a tremendous amount of uptake there over the coming year, because we're making it public now.
So you're hoping to learn some new things from them as well.
Oh, absolutely. I really want them to teach me some things, and I think they're going to. I think they're going to do some creative things that we never thought about, and the reason I think that is because that's been the history of the NetSuite platform.
I think it's the history of development: you give developers tools, they give you back applications or features, and you're like, oh, never thought of that, but let's take it, because it's good. I'm excited to see where they take us.
Okay, interesting. What are you most excited about among what you have built and delivered?
I'm afraid we were just talking about it. What I am most excited about is the platform capabilities, the things that are going to allow customers to build things that we haven't thought about. Maybe I'm asking for a free ride there.
I don't know .
But I think they're going to do some really cool stuff, and it's one of the advantages that NetSuite has: with 40,000 customers, we actually get creativity coming from lots of different angles.
Okay, thank you, Brian, for this nice talk.
Koen, great talking to you.
Okay.