Hey, everyone. Welcome to NonTrivial. I'm your host, Sean McClure. AI is getting impressive, but being impressed is not an objective measure of progress. For a technology to truly bear the mark of innovation, it must enable us to create new things. I argue that few people are creating anything new with AI; most are just showcasing what it can do out of the box.
I'll look at ChatGPT as an example and put forward how I think AI should be used: as a starting point for building something AI cannot do on its own. Let's get started. So AI, artificial intelligence, is getting pretty impressive. I think most people would agree with that. You know, ever since around 2011, 2012, we had deep learning winning some of those ImageNet competitions, basically showing that things like facial recognition could be done by the machine.
It was kind of saying, look, we're past those AI winters of the past. We're proving now that artificial intelligence as a technology, as a capability, is able to do some pretty good stuff. And then just last year, with the release of ChatGPT, most of you are probably familiar with this, this tool where you can essentially get into conversations with the AI. And it can produce things.
It can write its own software, it can hold philosophical debates, it can summarize text, it can come up with poems and stories. And if you use some external libraries with it, you can create digital art and all this kind of stuff. Basically, you just prompt the machine and it will produce things that we thought very recently, only humans could do. Right? So there's no doubt that artificial intelligence is getting pretty impressive.
We can debate whether or not it's passing that so-called Turing test and things like that, but it is able to create, it is able to produce things that we thought only humans would be able to do, at least for the foreseeable future. Even the most technologically minded of us still thought maybe some of these outputs that AI is producing today were a bit far off. Artificial intelligence can create things arguably as well as a lot of humans can.
And that's not to say that there isn't still a difference, but it's definitely impressive. And it's a testament to this fundamentally different computing paradigm that people started on when they moved away from the more traditional approaches of computing and statistics, where you're trying to use these basic, almost simplistic approaches to produce intelligent results in a machine.
So I've used this example before, like with facial recognition. If you tried to create a piece of software using the traditional paradigm of computing, which is to explicitly instruct the machine with rules, you would never be able to achieve something like facial recognition. Whatever it is that's going on in the human mind that allows us to recognize faces is far too complex to be understood as a sum of rules, right? It seems to be more than the sum. It's very high dimensional.
And so when big data and data science and all this really came to the forefront in 2012, it said, like, look, let's just use a lot of data and let's train these deep models, and let's run it through millions and millions of times until the parameters of those models settle on something that is able to produce the output that we want. This is essentially how deep learning works. That's a very high-level description of what deep learning is doing, right?
In other words, it's not going in and explicitly programming the machine to produce the outputs. It's admitting that we don't know how those outputs get produced, but saying if we have enough data and we pass it through enough times and we have enough parameters that can be set, it can keep making guesses in a trial-and-error fashion until it converges on something.
And there is no researcher or engineer today who can really peel back the layers of deep learning and know exactly how it works. Which is true. We don't, because it's not a fully deterministic machine like that. Not in the training phase, anyways. It's taking many, many trials to converge on something. And how exactly that convergence happens, no one really knows.
We can design the architectures, we can put the scaffolding in place, but at the end of the day, it's millions of iterations and parameter settings until we get that output, and then you can deliver that as a model that does what we want. So anyways, we're kind of verging on this level of genuine complexity in the engineering that we're doing now.
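Just to make that concrete, here's a minimal sketch of that trial-and-error loop in plain Python. The toy data and the single-parameter model here are made up purely for illustration, not any particular framework, but it's the same spirit: guess, measure the error, nudge the parameters, repeat until it converges on something that works.

```python
import random

# Toy "training data": we want the model to discover that y is roughly 3 * x.
data = [(x, 3.0 * x) for x in range(1, 11)]

# A single parameter, initialized to a random guess.
w = random.uniform(-1.0, 1.0)
learning_rate = 0.001

for step in range(10_000):          # many, many iterations
    x, y_true = random.choice(data)
    y_pred = w * x                  # the model's current guess
    error = y_pred - y_true
    # Nudge the parameter in the direction that shrinks the squared error.
    w -= learning_rate * 2 * error * x

print(f"learned w = {w:.3f}")       # settles near 3.0 without ever being told the rule
```

Nobody programmed the rule in; the parameter just settled on something that produces the output we want, and that's the whole point.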
And so it's a testament to that new paradigm of computing: getting away from the old guard of rules-based programming and simple, linear-regression-style statistics and saying, no, we need to embrace something more complex, more opaque. There has to be a black-boxness to it, and that made a lot of people uncomfortable, because it's kind of like alchemy to some people; nobody really knows how these outputs are being produced.
And of course that scares some people, because we don't really know what's going on inside. Anyway, we know that whole story. But the point is, by embracing this new paradigm, we now have a technology that is competing with humans when it comes to everything from creating music, to producing art, to summarizing text, poetry, writing stories, producing software. It's a whole new type of automation, and it is undoubtedly impressive. But being impressed is not really an objective measure of progress, right?
I mean, in some sense we're impressed for a reason, right? But being impressed by something is not necessarily the same thing as saying we're doing what we need to be doing with AI, that we're creating stuff of value. If you think about it, in some sense it's easy for people to impress others.
Well, to use a human example, let's say someone uses a bunch of jargon or something in their talking, and because it's jargon, maybe you don't really understand all those words, but you're kind of impressed by it. Maybe because they seem erudite, right? Maybe they're well read. It seems like they know what they're talking about. I mean, they have all that jargon, and yet it could be complete BS, right?
It could be someone who just uses fancy words, uses that kind of rhetorical flourish in the way that they speak. And so it shows us that being impressed kind of has as much to do with us as it does with the person delivering the outputs. In other words, we make all these assumptions about what we're seeing and what we're hearing, and we might feel like we're looking at something kind of authoritative or something intelligent, but it might not be. Maybe in some sense we're kind of being fooled.
Maybe our ignorance about the situation means that what is being delivered is not truly progressive. Maybe it shouldn't really impress us, but we're kind of being impressed because of our ignorance, if that makes sense. So with just being impressed by things, I think about people going on social media and saying, like, look, I used this deep learning tool, maybe it was ChatGPT, maybe it was something else, to produce this new digital art, okay?
And then we look at that artwork and we say, well, isn't that great? That's really cool. I mean, it looks like art. It looks interesting. I like the picture that I'm seeing, and so I'm impressed by it. And so that's impressive. But why are we impressed by that? Is it impressive? And what is the objective measure of a good piece of artwork? Do I need someone who's a professional in art to weigh in and say whether or not that's good?
You kind of have to ask yourself, why are you impressed? Or take the summary of a document. Let's say, okay, we can go use ChatGPT and we can summarize this legal document within like 10 seconds, and then you go read the summary, and you're like, yeah, that sounds like something a human would write. That's a good summary. Although I'm saying it's a good summary, then again, I didn't exactly pore over the legal document myself, so I don't really know if that's a good summary.
Maybe it's a poem or maybe it's a story. ChatGPT came up with a story, and then you read the story and you're like, yeah, it sounds like a story, and it sounds like something a human could write. And so I guess that's impressive.
And it is impressive because it's kind of fooling us, I guess you could say, or it is arguably a genuine type of creativity, because if you can put words together that make sense and they seem to follow some kind of narrative structure, then that's what humans are doing anyway. And so that's impressive. And I think from a computer science standpoint, it is impressive.
The fact that you were able to get a machine to do this, to create these outputs that we thought only humans could do, of course that is impressive as an engineering feat, right? That's impressive. But in terms of the art or the thing that is being created, the construct that has been assembled, is that impressive? What makes a good story a good story? Maybe we need a few hundred years to pass to know whether a story is actually good or not.
Maybe someone comes out with a novel and it gets popular, and then three weeks later, nobody's talking about it anymore. Well, was that a good novel or not? Maybe only through standing the test of time can we really know, or can history know; people beyond our generation can look back and say, okay, yeah, that actually was a good story. That was good. What are we being impressed by? Is it really that hard to be impressed? Is it really that hard to impress people?
I can throw a bunch of jargon at you and speak fluently and blah, blah, blah, but are you impressed? Why? You kind of have to pick that apart a little bit. I'd argue that just being impressed is not really the objective measure of progress. Right. Something else kind of has to be there, whether that's survivability through the passage of time and only history can really judge whether what was created was worthwhile. Or maybe it's the thing's utility, right?
Yeah, you can put stories and poems together, but does it get used? You know what I mean? Does that improve the quality of someone's life? Does it get injected into our economy somehow to demonstrably showcase value? What the h*** does it mean to be impressed by something? Being impressed is arguably not that good a measure of progress. So I said at the beginning, AI is getting impressive in terms of what it can produce.
True. And it is definitely a genuine engineering feat that has been accomplished here. We've embraced this new paradigm of computing that uses large data sets and millions of iterations and these really deep, architected models that converge on parameters which end up, in ways we don't really understand, spitting out the outputs that matter. And that is an accomplishment.
And for many reasons that I've talked about before, I think that is the correct direction that building or engineering needs to go in. So that is impressive. But if we're stepping back and saying, are we doing what we're supposed to be doing with this AI? Are we creating things? Are we delivering value? Is this progress? It's technological progress in some sense. But it's not enough to just be impressed. It's not enough to look at some summarized text or a story or a poem that's done by a machine.
Because the question is, well, what do we do with this? You have to do something with it in order for it to really be progress, I would argue. Because at the end of the day, a human can still write a poem, a human can still write a story, a human can summarize legal text, a human can make art, a human can compose music. Now, they'll do it a lot slower, right? They'll be much slower than the machine.
And we might just kind of at face value, say, well, that's the progress, because now we're more efficient. But, you know, then we get into these problems of defining efficiency, right? I mean, efficiency, it's always been problematic to define that in terms of just kind of these strict checks and balances, like the time that you put in to achieve the same output. And if you reduce the time to achieve the same output, therefore the efficiency has increased. But is that really efficiency?
And are the outputs being compared really equal? You get into these conversations about, well, how many hours did you log at work, and that's all that matters. But of course that's not what matters. We talk about bringing back kind of the analog aspects of our workforce; I think with COVID we saw just how important it is to have these organic conversations, right? The Zoom meeting isn't really the same.
You can't just have a bunch of remote workers interacting on a transactional basis in their conversations and assume you'll get good output. We realized that maybe the long commute to work was important, maybe the water cooler conversations were important. The organic, the messy, analog kinds of interactions that humans do, that we evolved for, are critical to the quote-unquote efficiency of work. You get your best ideas in the shower, you get your best ideas when you go for walks, when you're not working. It's critical to have the not-working parts of your life in order to do the working parts of your life. So, anyway, AI is impressive. It's a technological feat. It produces outputs that you could look at at face value and say, yep, that's cool, that's good, something has definitely been achieved here. But being impressed is not enough to really state progress, because what are we doing with it?
When we look at these outputs that AI produces, it's still the same things that humans produce. So what's the progress? You see what I'm saying? I get that it's faster in kind of a dumb definition of efficiency. It's progress, I guess. But the reason why we would want to be more efficient is to create something new, right? Not just to be faster. If the law firm can now summarize legal documents more quickly, sure.
But now what? What do you do with your new speed? If you replace the slide rule with the digital calculator, it's not so you can say, I can do the same calculations faster. It's so you can say, now I can do new calculations, because you've freed up your time to do something new. There has to be something new there. And when we look at the outputs that AI is producing, I'm kind of arguing that we're not seeing a lot of new. We're seeing faster, we're seeing automated, but not necessarily new.
And so what I want to kind of argue in this episode is we're kind of getting bewitched by AI, right? There's a difference between showcasing what AI can just do out of the box and actually building something with it that is new. Whatever we do from a technological standpoint, if it's truly innovative, we should be able to use it as a tool to produce something that has yet to exist, that doesn't yet exist. It's not enough. I would argue that not enough people are actually creating with AI.
They're just kind of showcasing the outputs. Look what it can do. And at the beginning, that makes sense; I mean, you know, ChatGPT just came out like last year. I understand. But we get it. We get it. We get that it can make its own software. We get that it can compose music, and we can listen to it and, yes, it sounds good. We get that it can summarize text and it can produce stories and poems. Okay? But so can people. These are all things that people can do. So where's the innovation?
We have to create new things. If you are being gifted with a piece of technology that can automate something that before was not automated, the onus is on you to build something new with it. That's the point. Not to just do the same thing that humans were doing, but a little bit quicker. Right. That's not the point of a new tool. And I think that's how we should think of artificial intelligence as a tool, right?
Until the day comes when the robots get their own rights, AI is a tool, and we're supposed to use tools to create new things. Yes, AI is impressive as an engineering feat, but being impressed is not really a sign of progress. I would argue that not enough people are actually creating with AI. Just prompting it to produce outputs is not creating with it. And I understand, okay, you look at someone's digital art and you say, well, I've never seen that image before.
Sometimes they have this really deep detail or fractality, because only the computer could create an image like that, and that's new. But you could also argue that a human could do that. It would take them a long time, but a human could do that. So just because you produced an image that nobody has seen before is not really the same thing as saying it's new. Okay, now you might say, well, wait a second. Isn't that what artists do all the time anyways, though?
Like, if somebody comes up with a new painting and then they show you the painting, and I've never seen that before, am I going to say, well, a bunch of other people could do that too, so it's not new? But you have to understand that the artist before AI, or even any artist today not using AI, is not using a tool that is automating something.
They are taking it upon themselves with the tools, the paintbrush, the palette, the canvas to produce something that those tools don't produce as an output by themselves. The paintbrush does not create the painting. The canvas doesn't create the painting. The original synthesis arrived at by the human, by the creator, is done solely by the painter. And that's what makes it new.
But now a piece of technology comes along, and you choose whether to embrace it. If you're an artist and you don't want to embrace AI, don't; your paintings are new. But if you are embracing the AI tool and now you're saying, look, I can also produce, let's say, a quote-unquote painting, a piece of digital art that nobody has seen before, okay, that's true. But you didn't really do that, right? You prompted a tool to do that.
The tool automated the things that the manual painter was doing. And so it's different. In other words, what I'm saying is, if you embrace AI as a tool, you should be doing something more than just the painting, more than just the digital art with it. You should be taking the next step. Otherwise, I would argue it's not really creating. You're just showing what the tool can do. Now, I understand it's not plug and play.
It's not like you just press a button on ChatGPT or Stable Diffusion or one of these tools and it just produces the artwork. You have to get good at what they call prompt engineering, right? This is the new term, because you have to prompt the machine. And I don't doubt that there's a whole art to coming up with good prompts, right? There are people who have literally written books on prompt engineering.
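Just to illustrate what prompting actually looks like, here's a toy sketch. The generate function is purely hypothetical, a stand-in for whatever text or image model you'd actually be calling, but it shows that the so-called engineering is mostly in how you word the request:

```python
def generate(prompt: str) -> str:
    # Hypothetical stand-in for whatever model you're actually calling.
    return f"<output produced from prompt: {prompt!r}>"

# Same tool, same button; the only thing that changes is the wording.
naive = generate("write a poem about the ocean")
engineered = generate(
    "Write a 12-line free-verse poem about the ocean at night, "
    "in the voice of a retired lighthouse keeper, "
    "ending on an image of sound rather than sight."
)
print(engineered)
```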
Now, we could debate that a little bit, but at the end of the day, you're still just kind of lowering yourself to the level of the machine and coming up with the way it needs to hear, quote unquote, or understand something in order for it to create the output. Unless you want to say that your art is prompt engineering, I don't think somebody producing the digital art can really say, well, I'm an artist, I'm a digital artist, look what I created. Now, if you were only using tools, let's say traditional software that didn't automate the blending together of images like that, then I would say, okay, you did create that. But if you're using a tool that automates the output itself, then I think you should take the next step. Okay, if you are a law firm and you can now summarize legal documents, I don't think you should just do the same business as usual, but quicker. For one, it might not work out that well, because maybe spending the time summarizing the legal document was where a lot of the value came from. Right? Again, these dumb definitions of efficiency that people get into, where, well, it's just the time it took to summarize the text. If you didn't create the summary, then you're going to be missing something critical. A summary, it's not about just having the summary. It was the journey to get there.
It was the effort that you put in to compress the information that gave you something that, I would argue, cannot be captured with words, some ineffable quality, really critical information that was kind of imprinted because of the struggle to summarize the text, and that you would then go use as an effective lawyer. Okay, so if you want to use AI to summarize text, okay, but don't just summarize text. Don't just use the raw output. Now you've got to create something new.
Now you got to create a different type of law firm. You have to take the next step. You have to decide what your next level of compression is. Right? If you were compressing the big documents down to a summary and that struggle gave you what you needed, but now that's been automated, okay, what's the next step in compression? What is the next struggle? Because now that's been automated. So if you're a digital artist and you produce a painting with AI, you don't have the same struggle.
I get that there's some prompt engineering, but you don't have the same struggle anymore to create the human-level synthesis that makes something a new painting. So what's next? Okay, so to use an example: ChatGPT, again, is this tool that came out, and you go online and you see people showcasing these things, look what it can do, and here's the poetry and here's the story. Or tools like Stable Diffusion, which is another deep learning tool.
But it's leveraging these kinds of large language models on the natural language processing side of things, and these new kinds of architectural approaches, basically leveraging the transformer, which is today's leading architecture for deep learning. I won't get into the details, but it uses these specific attention mechanisms to really take AI into this kind of new revolution, one you could argue really just started last year. It's very realistic, the things it can do.
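For anyone curious what one of those attention mechanisms is actually computing, here's a rough sketch in plain NumPy. This is a bare-bones version of scaled dot-product attention, not any particular model's implementation:

```python
import numpy as np

def attention(Q, K, V):
    """Each output position is a weighted blend of the values V,
    where the weights come from how well the queries Q match the keys K."""
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                           # query-key similarity
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)   # softmax over the keys
    return weights @ V                                        # blend the values

# Toy example: 4 token positions, 8-dimensional vectors.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
out = attention(x, x, x)  # self-attention: queries, keys, values all come from the same tokens
print(out.shape)          # (4, 8)
```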
These transformer architectures have really brought us to the forefront: AI is able to produce outputs that we thought only humans, at least for the foreseeable future, were going to be able to do. But you look at all these things that people are showing off with ChatGPT, and it's like, okay, I get it. But those are just the outputs. It's not really you. You have to take the next step. If you don't use AI, you've already taken the next step.
A painter is already taking the next step if they just have a brush and a palette and some colors and a canvas, because those things alone don't create the synthesis the human does. But if you're using a tool that creates that painting as the synthesis, that's your starting point. That's your starting point. So don't just show me the digital art. What can you do with the digital art? Can you blend a bunch of digital art together into an animated scene? I don't know.
I'm making stuff up right now because I'm not a digital artist. But what would be the next step? What would be the thing that AI cannot do? And this is the point, because as we enter this new realm, this new day and age of AI, as we always have, we're going to have to decide what our skills should be for the future. What differentiates us? What is the thing that the human can bring to the table?
Well, it's not just pressing the button or doing a bit of prompt engineering to produce what AI does out of the box. I get it's not exactly out of the box, but it kind of is. AI will produce the art. It's not really you; AI produces the art. AI will summarize the text. AI can now write a story. So if you don't use AI and you want to write a story, you created something new. But if you want to use AI to write a story, it can't just be a story.
It's either got to be a different kind of story, or it's got to be something else. I don't know what that is, but that's how we have to differentiate ourselves. Maybe we're going to usher in an era where the things that we create aren't poems and stories and music. Maybe it's different categories of things, or maybe it's still those things, but they're very different. I mean, they have to be.
If humanity landed people on the moon using the slide rule, and now you hand them the digital calculator, well, you got to go farther than the moon or whatever, right? You got to do something different. You can't just do what the slide rule was doing, but faster. You have to have a new synthesis, and that's what humans are great at. And I think humans can still far surpass the abilities of AI, but not with poems and stories and art, right?
If you want to embrace AI as a tool, it's going to have to be some other version of that. Maybe it still is art. Maybe it's just far more abstract. I don't know what it is. Or maybe you blend different pieces of art together. And this is kind of getting into what I think is the solution here. How do we do something new? Right? Because new has always been the poems and the stories and the summaries.
That was the human thing: we created the synthesis, while the machine just kind of automated low-level, menial tasks. But now the machine is starting to take those things over, and over this last year, we've seen that that is the case. That's not hype, right? And I'm not saying the stories that ChatGPT comes up with are definitely as good as humans', but arguably, for the most part, they are. They are. And I think that level of human creativity is not as mysterious as we like to make it.
I think it's a bunch of trial and error. It's working with words, and you've got your own kind of probabilistic machine inside your head that comes up with the next most likely word. And of course, we iterate, and we polish, and then we come up with a story, and it's not really that mysterious. It's great, it's interesting, and some people are better at it than others.
But creativity is not that mysterious; it's largely just this willingness to persevere under trial and error until a solution converges. And that's what deep learning is doing. It will persevere longer and faster than we can. It will take trial and error to arguably a higher scale than we can, or operate it at a higher scale, and it will produce creative outputs. That is creativity. Okay? But that doesn't mean humans can't differentiate themselves.
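To show how unmysterious the "next most likely word" idea is at its core, here's a toy sketch: count which word tends to follow which in a scrap of text, then keep sampling. Real language models are enormously more sophisticated, but the spirit, guess the next word and iterate, is the same:

```python
import random
from collections import defaultdict

text = "the cat sat on the mat and the cat slept on the mat".split()

# Count which words tend to follow each word (a toy bigram model).
next_words = defaultdict(list)
for current, following in zip(text, text[1:]):
    next_words[current].append(following)

# Generate by repeatedly picking a plausible next word: trial-and-error "creativity".
word = "the"
story = [word]
for _ in range(8):
    word = random.choice(next_words.get(word, text))  # fall back to any word if stuck
    story.append(word)

print(" ".join(story))
```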
That doesn't mean there's no delta between the machine and the human. But the delta is not going to be found in a run-of-the-mill poem or a story. You have to do something different. It's got to be something new. So what do I think? I don't exactly know what those new things are. I can try to give some examples. Like, again, maybe the law firm is able to take the automated summaries and then do something with those summaries to kind of run a law firm slightly differently. I don't know.
Maybe the digital artist can blend things: take the digital art as an output, and that's the new starting point. That's not the creativity anymore. That's just the starting point, just the way the colors on your palette are the starting point to a painting. And now you're blending that into something new. What is that new thing? I can only give those high-level, hand-wavy ideas, because I don't think any of us know what the new thing is yet.
Although I'm sure some people out there are creating some genuinely new stuff with AI, right? But that's where we need to head. So as a solution, I think what we need to do is see AI as a piece, not the whole, right? The AI is the new starting point. It's the piece. It's the paint on your palette, and you have different colors on that palette, which are the different outputs that AI can produce. AI can write poems. That's a starting point. AI can write stories.
That's your starting point. AI can produce digital art. That's a starting point; it's not really you producing it, that's the AI. That's your starting point. We need to see it as a piece and aggregate that into a new level of aggregation, into a new level of synthesis. And then we're not just getting bewitched by AI and impressed by the outputs, right? We're finding ways to bring those together in new ways. And we don't have to know what that looks like. This is how creativity has always been.
We don't have to know what that synthesis is. We just have to start trying. We have to start creating new blends, new combinations. That's what creativity is: you just keep creating new combinations of things until it gets revealed to you, right? Until it precipitates out as some new level of creativity. Don't just go make the Flappy Bird software, as one example that was online: look, I can make Flappy Bird with AI. I get it. It's cool.
That's a real technological feat on the engineering side of things, on the deep learning engineering side of things. That's a good achievement. But that's an output. That's a starting point, because humans can make Flappy Bird, humans can make Pac-Man, not as fast, but where's the creativity? That's not creativity. That's an output. That's a starting point. If you can make these basic video games in an automated fashion, how do you aggregate those together?
How do you create a new synthesis, a new aggregate, out of those video games? I don't know what that is. You probably don't either. We don't need to know, but we need to start. You don't need to know, but you need to start on that journey of new aggregation, of new synthesis. What is it? How can you really use AI to achieve something new? See it as the starting point, see it as a piece, not just showcasing the outputs, which I think is largely what we're doing.
And in some sense, this has kind of been a problem with AI ever since 2011, 2012. Back in the day, I remember starting out in data science, people were making, like, dancing sumo wrestlers or something, or they'd just show these basic movements that you could do with animation. But AI was doing the movement, so isn't that cool? And it was always just kind of gimmicky and stupid, and it's like, yeah, okay, you can make sumo wrestlers dance, or whatever it is.
What are you going to do with that? Because nobody needs dancing sumo wrestlers, as far as I understand. Nobody needs that. And this has always kind of been a big problem, I think, with artificial intelligence: the feat is there, but it's not enough to just say, hey, look, we can do something. What do you do with it? What do you do with it? And that's where a lot of the hype in AI does come from. Businesses have not been super dramatically transformed by AI. I'm sorry, they haven't.
I'm in this field, I do this, I bring AI to companies. And that's not to say there are no transformations or that there's no definite value being delivered, but this idea that it's completely transformed business? No. It might be, if we strike a new synthesis, if we take the step up. Remember my episode on shifting up, right? If you shift up to the next level and create the new aggregate using AI as a starting point, which I attempt to do, but it's a journey, a journey to try to figure out what that is, what that new level of aggregate is, then it might really transform industry. If you go into any industry today, airline, rail, I've worked with all kinds of industries, they've been doing the same thing for 100 years. And, yeah, there's some new technology in there, but the actual business has hardly dramatically changed. And anybody who's done any kind of consulting knows this.
You go into an airport and you still see the dot matrix printers. I mean, people have been doing things the same way for a long time. We talk about technology like it's so new and so fascinating, but technology that actually transforms society or the way things are done, and it can, that's actually quite rare. I would say that's quite rare. But it does happen. I mean, human interaction has definitely been transformed under social media, right?
For better or worse, right? There are definite transformations, but it has to be something genuinely new. Anyway, I think to really get past the AI hype and to do something worthwhile, to really transform, you can't just do more efficient outputs, right? Those are just your starting points. You've got to blend it into something new. So see AI as a piece of a higher level aggregate that we're going to produce, and then we won't just be bewitched by AI. Oh, isn't it cool what it can do?
We'll actually build something new that can be transformative, that can be truly innovative and impact the quality of our lives, hopefully for the better. That's it for this episode. As always, thanks for listening. Until next time.