Well, welcome to cloud talk, the CTO round table edition, whether you're a seasoned CTO, a curious developer, or an innocent technical bystander, this podcast brings insights on the shape of the future of our digital landscape by bringing together some of the best minds in technology today, who are experts with the products and services that your company likely relies on.
So let's get into today's episode where we'll delve into the topic of creating responsible AI solutions.
So you can't open LinkedIn, a news article, even the evening news, and not of course, hear about AI. Now, one of the things. We don't hear a lot about in the news is how to do AI well. And I don't even necessarily mean doing it well technically, but doing it well in a way that helps your company thrive and grow.
Stay safe. Uh, what I'm defining here very loosely, of course, is what we call at Rackspace responsible AI and here to have a conversation about that today, or a couple of names you will recognize. He's normal. Ranganathan is back and Rambis is back. The Swanathon is back, but a new face is here. And that is, is Joanne Flack.
Joanne, welcome to CloudTalk.
Thank you so much. I'm very pleased to be here. Um, thank you for the opportunity to speak with you today. I'm looking forward to, um, to our conversation. Thank you.
Now, now, folks, you've, you've been sort of baited into listening to this, thinking we've brought a bunch of CTOs only to the show.
We've brought a lawyer in today, not just to keep us out of trouble, but to give a perspective on what responsible AI really looks like holistically inside of an organization. Joanne, you were a big part of that with us here at Rackspace as we started that journey a year and many months ago, almost two years, year and a half ago, I guess I'd call it.
It's a little while ago now, um, but we, um, at Rackspace, um, thought it very important to make sure that we had a cross functional committee of experts around our AI initiatives so that we could take a holistic view across the business and really work together on, you know, A strong and robust strategy around responsible AI.
And I was very privileged to be the legal advisor to FAIR. Um, our internal spin up around AI and also around our own internal compliance efforts around AI. And it was a wonderful journey working with some of our best and brightest on such a great initiative.
So I have to ask when, when Srini Kaushik, our president of technology and AI and sustainability came knocking on, virtually knocking on your door and said, Hey, we're gonna, we're gonna do some AI here at Rackspace.
What was your first reaction?
I'm very excited. Um, you know, Rackspace, um, has some wonderful talent, um, internally and some great expertise. Um, And finding those white spaces where Rackspace is really positioned to add value for customers is something that's always really excited me. Um, it's always a journey when you're in a technology industry as a lawyer.
Um, and it's really fun to build and to grow. Um, and so, yeah, very excited to be involved in something where we have a lot of strength and depth. And so, uh, where we can really help customers on their journey to, um, make sure that we're best in class for what we're doing internally.
Excellent. Well, as we, as we started to think about, you know, what those first steps we were going to make, you know, normal, you were, you were right there with with Srini in on the technical side and very quickly, if not even before a lot of the technical stuff dug in, it was, let's, let's pause for a moment and really define.
What it is that, that we think from a guidance point of view here at Rackspace, how we should as Rackers utilize this technology? What are the boundaries we're going to put in place? Um, so, so a big, what came out of that, of course, was our definition of responsible AI. Nirmal, why don't you take us through that definition really quick?
Absolutely. And before we get into the definition, right, I just want to Briefly talk about what responsible AI is, and I think it means different things for different people, and there's different perspectives and different, um, elements that come into what a responsible AI policy means, or just responsible AI guidelines means.
And, um, end of the day, if you break it down, right, the first part as an individual, it's what's your personal values, right? Regarding just any other human and how do you, how do I take and look at AI from that perspective? If I look at an organization, it is, what is the organization's value? And how do I, how do I tie that back into some new technology that we're going to be introducing that in a lot of scenarios is going to act on behalf of employees.
And then the last part, which is where Joanne comes in is, what is the role of law say, right? Is there anything specific for our industry regulatory perspective? Uh, if not, still from a compliance perspective, are there certain things that we should abide and comply by? So really looking at from those different lenses and views is, um, what we came up with.
So our view of responsible AI is, uh, three elements. The first one is symbiotic. What that means is that we're looking at AI as a co worker. As an assistant. So really looking at how does AI coexist with us really more focused on augmenting and making us better at our jobs as opposed to AI replacing humans.
So that's the first element from a responsible AI principle perspective. The second is sustainable. So how do we ensure that we're using AI to make good decisions, but at the same time that those technologies are also accessible to everybody and also be leveraging that in a more Um, kind of a green manner, right?
So that way, whatever models are building or deploying has energy concentrations into place. So it's those two pieces, right? Sustainability from an equity perspective, sustainability from an environment perspective. Third element is secure and that goes without saying. Whatever we're going to do with AI involves data, security is key and utmost of importance for everybody.
And, um, again, what that means is it gets into, uh, managing privacy, confidentiality, preventing misuse of these AI models, all of those roll into AI. Secure so that that's our definition of responsibility as backspace. That's what we came up with. You look at others. They've got these different pillars and all of those align in some form or manner.
Some have six principles. Others have some principles, but end of the day. It's these core elements that makes up responsible AI.
You know, we hear a lot about how AI is being used in ways that, um, you know, are representing things, uh, content, uh, that, that's not somebody's own. Uh, here in technology, we think about that through the lens of, uh, of coding.
You know, we've got a bunch of professional services folks who write a lot of code. And, and where are the lines? When we think about responsible AI and how we're ethically using the tool to represent what we're doing. Rom, I'm curious to know, you know, where you, where you see that falling and then, you know, really a fast follow Joanna over to you as we think about that from a legal point of view.
So ethically, Rom, how, how, how should companies think about using AI in ways that make them a strong coworker, um, in, in a way that is, uh, that is ethical.
Nirmal covered the broad ground around how AI solutions need to be symbiotic. That is exactly what you're talking about. It's as a co worker and not as something that's going to replace you and secure and sustainable.
But if you drill down a little bit more, he covered one, touched on one important thing about the culture of an organization. It's important that we lay down. Broad guardrails are on how can AI be used for your job and how can we use AI for our clients. And one of the key things that you talked about is around coding.
Um, now there are people talk about, you can go to chat GPT and you can get your assignments done. You can even perhaps, um, done some significant amount of coding done. There are certain things that are available as part of the tool that will help you do your job better. but there are certain things where there is that line that you shouldn't cross in terms of copying others, um, proprietary work.
And this is where I feel, um, the personal ethics, the organizational ethics, plus the governance and the guardrails that we are going to put. becomes key into how we as professional services organization, we as consultants use AI effectively, but at the same time, not cross that line that would go into infringement of others proprietary assets.
And, uh, the guidance goes into three areas, right? We think of, um, of course, the organization's culture that we talked about, the people, The process and the technology at each level. We need to understand what kind of governance are we providing for the use of A. I. And for implementation of your A. I. With our clients.
And what are some of the guardrails? What are some of the checks and balances we have within our own organization and in our consulting practice that we take to our clients? All those becomes It's very, very important and, um, uh, needless to say, there are obviously, um, laws of the land that, um, we need to be, uh, looking into as well.
So, Joanne, how do we think about this, you know, from, from an ethical use case perspective? How do we, how do we define rules that create right boundaries, but maybe not necessarily hard walls? You know, what areas that we can, we can work inside of and have conversations when we, when we get close to the edge?
Yeah, I think, you know, as with adoption of any technology, it's really important to be very clear at the outset what the common pitfalls are from a legal perspective, from an ethical perspective, so that you can take a decision as to how you're going to implement administrative and technical controls to make sure that you stay safe.
Um, it's also important. Frankly, to make sure that you can receive the right return on investment on AI. And one way to do that actually is to make sure that the edges are clear, the guidance is clear so that you can dispel fear around technology. You can provide clarity. You can, um, green light. The right use cases, uh, you can remove obstacles and you can make sure that there isn't a fear of adoption of technology that is born in ignorance and, and by providing very strong, um, guidance, very clear guidance, making sure that there is a knowledgeable team available to provide additional assistance and clarity around, um, the, the right use and the wrong use of AI.
It's a much easier job to navigate that together and make sure that everybody has a good sense of what is right and wrong, and also that you're providing the right controls to make sure that nobody missteps. Um, and so part of the role of legal working with the business is to make sure. The pitfalls are clear that there are strong plans around it to make sure that people are advised and trained and they're the right technical controls around technology.
Um, and that way actually you can increase the return on investment on technology and make sure that people are not too fearful. Um, you know, there's a lot of the common Pitfalls in AI are pitfalls in a lot of business. It's just on steroids because you have involvement of, um, something that if you don't have the technical expertise might be a black box.
Um, and it's making sure that you can get the right technical expertise around a solution that understand what's in, what would otherwise be the black box and that you have the right, the right assistance wrapped around that to make sure that That you're receiving all the benefits from the technology, but avoiding the majority of the risk.
Um, and that's why a strong partnership between the legal and technical teams, um, and also the right governance in place is really quite key.
I love what you just said there. And that is it's, it's technology, just like any other technology, but it's impacting organizations on steroids, uh, in a very, very fast way.
A couple of points I would add to what we just heard. Um, the whole governance guardrails should not be an afterthought. It should be part of building the AI solutions. In fact, one of the things that we talk about is, um, don't build the solution and then think of the guardrails. Think of the guardrails and the AI.
Then, uh, you start building this. That's one. And then the whole, um, governance and, um, what are some of the transnational, trans border implications, uh, around AI, its implications are still evolving in terms of governmental regulations and bodies. So, it's important that we set our own house right. And at the same time, ensure what's happening around us in the ecosystem as well, and ensure, um, we are in sync with that as well as important.
Yeah. Another thing I want to add, we're just giving a perspective of why we are even talking about responsible use of anything. Right, and tech, that's oftentimes the last thing that comes to people's mind. Uh, if you look again, a few years back, a few decades back, right, client server and all of this came in, nobody was talking about responsible use of, well, there was responsible use of the internet, but in general, other technologies, right?
Nobody was talking about that. And why People are putting and everybody should put a lot more focus on the responsibilities of AI is AI is the only technology so far that can come close to acting and behaving and responding like humans. And that is why it is important. So, Often times people prefer to kind of understand truly.
Why do we need to do this? And why are we putting these controls on ourselves? It's. Purely because of that, right? So we have laws to govern humans, uh, for the most part, uh, we have bad actors. Always, uh, the same is the case for AI, right? So we're going to have some laws, but the laws are still forthcoming, so it is incumbent for everybody to sort of realize that and act ahead as opposed to trying to deal with the after effects, right?
We don't want to skynet on our hands. So, yeah.
Yeah, such a
good way
to call that out. And of course, you call out bad actors. And that makes me think, yes, we think about defining responsible AI in those three buckets. One of those buckets was security. How can companies ensure Nirmal that their AI systems are secure as, as they can be?
What are the, maybe we think about it from layers. Maybe you think about it from interaction. How, how do you help coach companies in this space?
Absolutely. There's so, so many elements, uh, when we think about security. So I think the first layer is how we apply security for anything and everything within the organization applies for AI as well.
So that is perimeter security, making sure proper security training is provided for employees and everything from preventing phishing acts to scams to social engineering attacks as well. Uh, what AI exponentially increase those. Standard attacks. So we've got to be more prepared to handle those.
Everything is on steroids.
Everything is on
steroids. Exactly. And, um, there's also new threats that come about, right? So AI specifically, the threats could come in various forms. All standard ones exist. None of that changes, right? They're now on steroids. And AI perspective, depending on what systems you're using and interacting with, Uh, you shouldn't just trust every AI system and model out there because you don't know, for one, what data it was trained on, uh, what might look like a good response for something that you're looking at may not actually be.
So putting that blind faith is something that needs to be considered, but from an organization standpoint, a lot of that's going to come in choosing the right models that you use. For integrating applications, ensuring that again, based on the needs, if it's internal organization based, then all your responses are grounded in data about the company.
You're not introducing factually incorrect information. And the models will do that all day, right? You've provided something and it'll just generate stuff out of it. It doesn't know what's right and what's wrong. So we know much more what's right and what's wrong. So those are, again, at a very basic level, security challenges that every organization is going to face, right?
It doesn't matter. You don't have to build a single AI application. You're going to face that. Because Every tool out there is going to have AI capabilities.
It's really interesting when you're, you're, you're kind of changing the way we look at responsible AI. I mean, it's not just how do we develop responsible AI, but how do we use AI responsibly?
Because that's
going to be the majority. That's going to be the majority of the people.
That's a great point. So don't, don't trust, don't, don't blunt, no blind faith here. Blind faith. When you go to your, your bank account, you know, your bank. com and go check your balance. Pretty blind faith there may not be happy about the answer, but blind faith.
But with AI that you, you said it, you said a couple of things really well in there, but you always say stuff very well, but really caught my ear. One was of course, how we, we were returning the tables on responsible, but the other piece is. That, um, it has one job to generate content, right? Or wrong. It's going to do its job.
Yeah. And that's the basic level, right? If I get into more technical details, then we're talking about different types of attacks, right? Injecting injection attacks. We talked about SQL injection for decades.
Yeah.
There's a AI aspects of AI injection, uh, now, right. And similarly have man in the middle attacks and, uh, Model poisoning attacks, and those happen when your, uh, data that you're using to train the model is being tampered with.
And, uh, that could cause, again, a model that's developed that is inaccurate or poisoned in a particular manner. There is attacks on, uh, being able to extract information from models, especially if it's something that you have, uh, externally facing available to customers or anybody can use in a free form.
Then that, that becomes a challenge. Safety is another big concern on the responses that are going to come out, who's going to use that, how they're going to use that information. So having the safety and guardrails. Uh, become key. So it is touching every aspect of, uh, security and beyond as well.
Security on steroids.
Um, this starts to play into Joe, as we think about if there are security issues, then that can inadvertently drive us into some compliance challenges as well.
You certainly can. Yeah. I mean, I always say, like, there's no, there's no privacy without security. Um, and so security becomes such a fundamental baseline for any part of compliance.
And so it's incredibly important that, you know, You understand the model on the vulnerability of the systems that you're using and that you do engage your security team, um, properly assess both the intrinsic technology and how it's being used. Um, and you'll see that comes through a lot. In the new laws that are being passed and a lot of the existing guidance that's been around for decades around AI, although some of it, you know, not not at the forefront and not being used as much as it is now.
It's all risk based approach. And so, you know, a lot of this comes back to understanding what you're doing in the system and what the vulnerabilities are and how you're mitigating those, how you're wrapping around those, because, um, you know, even the legislation now is, is very risk driven and the expectation is, um, you know, to the points that have been made, there's no kind of willful blindness here, or, you know, just ignorance around the system, you're expected to go in and understand what a system is doing and how it's doing it, um, and to be able to, to mitigate the, um, the inherent vulnerabilities there and to use systems appropriately, um, what might be an appropriate use of one system will not be an appropriate use for another.
But security absolutely is a, is a huge baseline there. And so having secure systems, especially if you're using them with sensitive information is so important. It's important legally, but it's also just important from a commercial perspective. Um, and with that, you know, we can't ensure the privacy of anything in an AI system unless we have that baseline security in place.
And so it becomes such an important, uh, cornerstone of privacy, compliance, and the legal compliance around AI legislation that we're seeing coming through, which is very risk based. approach to make sure that the proper assist, you know, assessments are being done on systems and that we really understand what the risks are there and that they're being used appropriately and the risk is being managed.
So yeah, it's a very, very good point around security being a, uh, a fundamental, um, point.
I have a comment and a question for Joanne. So my go to and you're absolutely right on the risk based model, right? My go to is the EU risk model. Simple, straightforward, easy for everybody to understand, right? First layer is minimal risk systems. Again, if you're using an internal system to go search your knowledge base, right?
Minimal risk. Limited risk is when there's going to be some element of transparency, accounting responsibilities, but still internal systems. High risk is The next layer up is high risk, where you're starting to implement AI models that impacts people outside the organization, impacts people in their day to day, part of even decision making.
And then you have the highest layer, which is unacceptable risk. And that's something that when a particular edition of my model could be a threat to someone, right? That life or threat to, um, yeah, any other factor of life, but, uh, that's a very simple model to sort of understand and break down. And if we can classify all our use of AI into those categories, this is a great starting point, right?
I'm not saying that's the true answer, but a great starting point. Joanne, I wonder if there's other similar models, or is the EU risk model sort of becoming a global standard?
It will become a global standard and this is very much what we see when we have like the strong legislating Position of the eu that tends to legislate quickly and at the forefront, but it's also based on the fact that the eu Have derived their guidance and their legislation based upon The existing baseline of guidance that was already there prior to uh prior to the european legislation And so for a long time now there have been There has been guidance out there, um, you know, specifying certain uses of AI that are particularly high risk.
Now the EU, you know, puts a European lens on that. And so we talk about, you know, risk to fundamental rights. Um, and that's, uh, um, you know, that's a very European lens, but it's not too different to an American constitutional lens, frankly. Um, And there's great consensus on what uses and outcomes from the use of AI are unacceptable.
Um, those have been around for decades. And so I can see the European approach being adopted very broadly, not only because it makes a lot of sense to piggyback off that prevailing legislation, but because it's based on principles that have been around for a long time in the AI community. around, um, you know, what's an acceptable outcome.
And if you're making decisions that affect people's fundamental rights, their privacy, their right to employment, their right to own property, um, you know, these are elements that people take very seriously and are more likely to legislate around. And in the U S for instance, we're seeing a more patchwork approach statewide currently.
But again, the areas that people are legislating heavily on are areas where, you know, I make a determination about how you're employed or your availability to have credit and own property. Um, these are types of areas where the decision making being with, um, an AI system or an algorithm is not deemed appropriate.
And those are the areas we'll always see the strongest legislation. So, so yeah, I think it, It's indicative of the approach that a lot of people are going to take, whether they adopt that in legislation or in guidance.
Add to that, um, Nirmal, EU for some reason, uh, seems to be doing these things, uh, ahead of, uh, others, uh, in the world.
Uh, GDPR, I think that's what she was alluding to earlier, um, which was when it first came from Europe, right, has now become a, de facto standard or every geography around the world is forming some form of GDPR more or less along the lines of GDPR. But one thing that I do, uh, did want to touch upon is, um, Jeff, you talked about guardrails and, and Nirmal briefly touched upon the data.
We all know, um, the AI systems or the models are as good as the data it's trained on. And we have seen some examples of that. In, um, some of the popular tools that are out in the market space, um, the data that is there with, uh, the systems or the organizations, public or private, has some amount of human bias in built in it.
So if you're going to train your systems with that kind of data, um, the AI response will also reflect that kind of a human bias. So that's where, while, uh, we need to be cautious of the data with which we train, on top of that, it is also important that we put a certain amount of guardrails that are relevant to the times and the laws of today, and not of, uh, the past.
Yeah, and normally, you and I had a chat about a month or so ago, and, um, And you were talking about if we were to ask a question of, of AI, any of the popular models that are out and about in the world, and ask a question about maybe how civilization evolved in the past 100 or 150 years, uh, in, uh, in, in Middle East or in, in any other part of the world that is a non, what we might consider a non Western world, the viewpoint is going to be very colonial based.
And maybe you can touch on that for a second, because I think it's a, it's a powerful statement as we think about, even think about ethics. In the, in the context of, of AI, especially as we start to think about how we're going to train the models that are going to run our businesses.
Yeah, so fortunately, that is to me, the most scariest view on AI, because that's not going to get regulated, right?
That doesn't fall under people's purview of, okay, that needs to be regulated, primarily because everybody has different opinions and views on it. And the biggest challenge is. We as a human species have, again, through various means and mechanisms enforced ideology and thinking in a particular manner.
Again, those were all based on the level of physical. And so thought influence we provided is going to just do that multiple full right like hundreds. It's going to do that on steroids,
which is that's our phrase for the
goes back on steroids and there is, uh, there's no true controls over that because we don't know how do we kind of manage that right?
A lot of it is absolutely going to come down to the quality of the models and what data you train it on. But then who's deciding what data to write down? We're deciding what data to train on end of the day. Right. I'll have a different view of the data. You're going to have a different view of the data.
So it's, it's a really hard problem. And to me, that's the most standard proposition. The other ones, regulations will cover, right? Skynet, I'm sure regulations will cover and we'll, we'll be okay. I can't
shoot people. AI can't, yeah, we can't, yeah. The biggies, the big, easy ones.
Yeah. Yeah.
Well, but I, I bring that up because it's important that, that, well, those are our gross and large challenges that the world will have to figure out how we're going to grapple with again, back inside of our own organizations.
When we. We have these, these arguments, these conflicts from one group to another, which data is authoritative. What is the truth? Because you need the truth in the data to train the model to ROM's point, because that's, what's going to drive the outcome. All kinds of things to think about. All right. Before we get too far down the, the, the, the crazy road, let's look forward for just a minute as we're starting to run a little bit short on time.
Um, AI is, has more opportunity. Then we can even imagine really at this point, which is why we take on these big challenges, which is why we grapple with ethics and AI, um, and, and compliance and regulations, because it is an important tool, maybe one of the most important tools, the, the human species has ever had to, uh, at their command.
So what do you look forward to each of you? Here's the, it's a round robin question. What are you looking forward to either? Could be in your own little world, could be in your business, could be in your, your sphere, whether that's technology, legal, whatever, what it, what benefits to the world are you looking forward to seeing AI being a part of the solution?
Uh, Nirmal, you're up first, Joe, your second.
That's a really hard one. I know,
which is why I'm asking the question and you have to answer it.
There's just so many uses and implications. Um, I think the, I know it's really hard to rank which one falls first, right? And maybe the first one is, um, Yeah, everybody gets a top three.
Everybody gets a top three. Okay, good. That's, that's, that works better, right? So I'll go with the personal one first. Personal one is organizing all my content, right? So how do I get on my fingertips everything that I need? Both my past history and sort of what I should be doing just day to day stuff. So Google, I think, just recently announced updates to Google Photos.
So being able to just go look for that. I know there's a privacy concern, huge privacy concern in there. And that's where companies need to act responsibly to make sure those privacy concerns are handled. A company like Google is obviously doing that. So as we build these solutions, making sure we apply those responsible policies, And then it is good and applicable for everybody to use it.
So that's the on the personal front. I think on the, uh, on the general front, where it can get, uh, get applied in healthcare and medicine. I think that is, that is key because that is going to help a lot of people across the globe. And, uh, that's everything from remote telemedicine to, uh, surgery, to being able to more in advance, like predict.
Uh, various illnesses and ailments and also helps in that drug discovery. So just across the board, that is going to be another key element. And then I think the last part is, and this is how humans have obviously grown as well, right, is how do we become more productive? And I think that's, that's what everybody's looking at right now is focused on the productivity view.
And which is, which is really neat, right? Because if I can just get rid of my mundane tasks, have AI do that for me, I can start thinking about the larger problems to solve.
Yeah. Joe, those are good ones, by the way.
I probably had, I might, might come up with three, but I definitely have to. Um, the area I'm most excited about, um, I, my undergraduate degree was in biomedical science.
And so that still is a huge passion of mine. And one of the reasons I got into technology is I saw it as a force, force multiplier for good. You know, when you apply, um, when you apply powerful technology to difficult problems, um, some of those being in the biomedical science, medical research space, um, then.
Then they have the ability to solve, um, solve the things that are at a magnitude that we could only dream of just using and applying kind of human, human efforts. Um, and so I really love the work that's going on, um, in AI now, um, around medical research. So, um, there are folks using AI to map, uh, the neuronal, Uh, connections of the brain, um, in a, in a, in a time period that would just not be possible.
Absent technology, um, drug discovery, pharmacokinetics, um, uh, those types of areas of making huge leaps and bounds beyond some of the traditional ways of. Developing medicine remedies, um, that would just not be possible without the strength of technology. And that's incredibly exciting for what we can analyze and discover using the strength of such powerful technology.
And my second area really comes back to the legal field and so, um, there's a huge democratization of data that happens with this type of technology, which really is, uh, is it has the ability to be a real equitable level playing field for people? Um, and it has the power to bring a lot of equity. Uh, to a huge amount of areas, including in the legal field.
And so we can get, you know, accurate information into the hands of our clients. We can spread knowledge. We have the ability to drive out bias analysis and democratize what has become a huge swathe of data for good. Um, that has implications in the legal field, but just more generally too. And so the democratization of data, I think, is incredibly exciting.
That's awesome.
Ram, you're up. Yep. Um, both, uh, Nirmal and Joanne have already covered the couple of things that I had in my mind, um, particularly around democratization of, uh, uh, every technology that has come about that I've been part of in the last, uh, three plus decades has continuously made me. Yep. Um, more information, more knowledge, more of everything available to, um, more of the people around the world.
Um, I'm looking forward to AI helping in poverty alleviation, in, uh, helping people, um, Get their basic needs, uh, what, um, uh, we call in Hindi as roti, kapra or makan. That is your bread, butter and a shelter and dress. Um, if it can make it available, that's my utopian dream around, um, around any new technology and particularly AI.
Having said that, I, I also know how humans have behaved. with the technologies. With the technologies, walls seem to be coming down, wall seems to be coming up rather than coming down. Take the nuclear technology that, uh, what it has done to the world, to, um, the global politics in the last, um, uh, six decades or so.
Um, but with AI, Um, being positive, I'm looking at, uh, AI technologies as democratizing more of many of the wealth that is around the world in a more equitable way. That's number one. And if it can also help in propagating peace or advocate peace, um, to the world, um, which is as is driven by various. Uh, factors today.
Um, those are my utopian thoughts around ai.
I love it from how do I find my pictures faster to world peace? I mean, we've, we've got it all covered. You know, uh, for me it's, it, uh, you know, I echo with, with normal in that is personal productivity. Um, I've actually started in my notes for certain workflows and I use a lot of work.
Uh, AI in the production of a lot of the media stuff that we do is I'm starting to write down the different tools and what order do I need to use them to feed them from one to the other, to the other. And that feels somewhat complex, but the amount of work that's getting done in those, those few tools is astounding, astounding.
So I'm, I'm anxious to see how all this starts to pull together into more cohesive things, really to the end point, normal, how are we creating better? And then, uh, Um, now that we'll have more time on our hands, what good are we going to do at that
time?
Transcript source: Provided by creator in RSS feed: download file