Pair Programming with AI and DeepSeek R1

Feb 04, 2025 · 1 hr 11 min · Ep. 27

Episode description

In this barnstorming episode of What A Lot Of Things, Ian and Ash bravely venture into the uncanny valley of AI pair programming, where the machines are suspiciously eager to agree that you're an absolute genius. Will our intrepid hosts manage to navigate the delicate dance between genuine collaboration and what Ash describes as "an advanced rubber duck with impeccable manners"? (Spoiler: sort of!)

But wait, there's more! Just when you thought the AI world couldn't get more dramatic, enter DeepSeek R1, the plucky Chinese upstart that's got Silicon Valley clutching their very expensive pearls. Our hosts dive into this tale of hobbled chips and unexpected innovation, while simultaneously managing to reference municipal gas works, start taking over the monuments in Monument Valley, and establish the critical importance of saying "What A Lot Of Things" in hardware stores across the nation.

Plus, hear all about the wildly successful What A Lot Of Things Christmas party, where actual listeners crossed actual Pennines to join our heroes for what we can only describe as an evening of unparalleled podcast-based revelry.


Transcript

Ash

I can imagine you doing this.

Ian

Does it involve backyard...

Ash

No.

Ian

...the insanity?

Ash

No. No. It doesn't involve running a very long way.

Ian

Oh, good.

Ash

It involves going into a shop, say a hardware shop.

Ian

Although we don't have one now.

Ash

Well, yeah, indeed. But, you know, that's another story of great sadness.

Ian

It is.

Ash

But, say, a hardware shop, and looking at all the things that are on the shelves, and saying to oneself quietly, What A Lot Of Things. So I have started to do this, and I can imagine you doing that.

Ian

So voluntarily?

Ash

Of your own free will.

Ian

Yeah.

Ash

Saying, looking at a lot of things, and then saying, What A Lot Of Things…

Ian

Well…

Ash

…to yourself.

Ian

I have done that in other contexts.

Ash

Okay.

Ian

But I haven't done it in the context of a hardware store yet.

Ash

Right. Okay.

Ian

Or even a retail in general.

Ash

Alright. So what is your what a lot of things moment then?

Ian

Well, it's just more ad hoc. I mean, it's quite reliable that you could go into a shop and do that.

Ash

Yeah. Yeah.

Ian

Whereas I might be waiting for a lot of things to appear in some other context in my life, and then only then realizing that now I need to say What A Lot Of Things. Yeah. But not just what a lot of things, but What… A Lot… Of Things.

Ash

Yes. So I was just wondering if it had penetrated your life in the same way that it has mine. In that now I can't go into a shop without saying What A Lot Of Things.

Ian

Maybe you're going to a shop that's only got, like, 2 things left.

Ash

Yeah. Maybe it's going to, like, really minimalist shops.

Ian

You'll be like, oh, oh.

Ash

Not a lot of things.

Ian

Not a lot of things. Very good. Well, you're completely covered now, aren't you?

Ash

Yeah. Yeah.

Ian

It didn't even require a lot of thought.

Ash

So that was my burning question

Ian

Well

Ash

for this episode.

Ian

I did look at the list of extra things and realized that one of them was saying What A Lot Of Things in shops. So, if you listening have gone into a shop and said What A Lot Of Things, let us know, because we need to feel slightly special. Yeah. Absolutely. Oh, dear.

Ash

Or any other context, really.

Ian

Or any other context.

Ash

Yeah. Absolutely.

Ian

We're not context-fussy.

Ash

Yeah. Doesn't have to be retail.

Ian

We're not particular. No. Sorry. Just adjusting my moustache.

Ash

Still attached.

Ian

Yes. Yes. It is. I need to trim it so that I don't need to adjust it. And I've steamed up my spectacles. I'm going to go and get a cloth. Okay. When Ash arrived to do this recording, I had also just arrived, and so I went downstairs to let him in, and we were agog. We were both wearing orange

Ash

Orange hats.

Ian

Hats.

Ash

Bearded men, orange hats.

Ian

And in an unexpected confluence of hats.

Ash

Only distinguishable by the bobble upon my hat.

Ian

Was it more of a concordance of

Ash

Concordance of hats. Yeah.

Ian

The collective noun for collections of very similar hats.

Ash

Is it is it concordance?

Ian

Yes. Well, that's what we had. That's what we did.

Ash

We did.

Ian

We yes. I've got got no more.

Ash

It's alright. Well, we'll do the other extra things.

Ian

That's the end of hats.

Ash

The end of hats. But I just couldn't wait to ask about saying What A Lot Of Things in shops.

Ian

No. You do right. I think you've gotta get the important things out of the way first. Otherwise, you'd just be all the way through, you'd be just sitting there in your head going

Ash

Pretty much.

Ian

What A Lot Of Things in shops. What A Lot Of Things in shops. What A Lot Of Things in shops. Yeah. And then it will just come out, and it'll be apropos of nothing. And it'll

Ash

be like, I'll just stop you. I need to ask this question.

Ian

Yes. So I'm glad we've got that cleared up. Yep. Also, the concordance of hats.

Ash

There we go. Two things, everyone.

Ian

Yes. Thank thank you for joining us. If only they were that easy to record our episodes. So what was the last episode?

Ash

That is an excellent question. Well put. No. Well, yes, it was. But sorry. I went to the wrong website.

Ian

That will always…

Ash

A website that wouldn't tell me the answer to that question, basically.

Ian

Have we no AIs? Okay. So the last episode was quantum computing and text nostalgia. Yes. I'm quite nostalgic for that episode. Yeah. Yeah. Which means that according to our advanced, but not to be too much discussed for fear of repetition, fairness algorithm, you are to go first.

Ash

Oh, I see.

Ian

You shall go first.

Ash

You shall go first.

Ian

And then, to the ball. You shall go to the ball. You are the Cinderella of this episode of What A Lot of Things. Well, possibly there are other people who are more Cinderella than you.

Ash

Yeah. Exactly. So shall I begin my, thing?

Ian

Yes. Tell us, Ash. It's been it's been I've forgotten his name.

Ash

Looks at hand. Ash.

Ian

That's a good idea. I'll remember to do that. So tell us all, Ash. You can do it. What is your thing?

Ash

Very evenly delivered there. So I want to talk about replacing pair programming with AI.

Ian

Oh, meaty.

Ash

So I'm wondering to what extent you can actually do that, and whether or not you can get the same effect from pairing with a coding assistant as you can get from pairing with a human being.

Ian

So I have a lot of immediate thoughts. Okay. But what what's your view?

Ash

So my view so I guess there's probably, like, a few things here. So pairing is an interesting topic in general in software development, I find. We all say that it's a good idea, but it's not actually that often practised.

Ian

Well, it feels like putting 2 people to do one person's job.

Ash

Yeah. Yeah. So a lot of places still have that particular, well, why would I have 2 people working on one ticket when I can have 2 people working on 2 tickets?

Ian

Or better yet, 22 tickets. Multitasking.

Ash

Yes. A project each. Which... don't. Don't. So I find pair programming quite divisive, really, because on the one hand, it's one of those things that everyone says is a really great idea. And on the other hand, it's one of those things that I find people quite terrified of actually doing. And it's fairly rare that you get, like, a decent pairing session going on a particular piece of code.

Ian

Well, in the past, on this very podcast, you have talked about how tiring it is.

Ash

Yeah. Yeah. So I still sort of feel like that as well.

Ian

Even though you haven't done it for a while. The residual tiredness of the last time you did it.

Ash

Yeah. Exactly. But I think it's one of those activities where, like, everyone has a different tolerance for it. So, like, after about 50 minutes to an hour or so, I'm like, right. Okay. I'm quite tired now, and I would really like to go and decompress my brain a little bit.

Ian

Have a good night's sleep.

Ash

Yeah. Pretty much.

Ian

See you in the morning.

Ash

Pretty much. And I value, like, time to myself to think about what I'm doing. So, personally, I enjoy having an LLM to pair with, because it's much less demand on me as a way of bouncing ideas around. It's something that I actually quite value. But I just wanted to explore a little bit and say, well, do you actually get the same effect out of it?

Because I feel like I wrote in the notes about an LLM being like an advanced rubber duck, if you like. But I think that's doing it a bit of a disservice.

Ian

Well, yes. I think you're right.

Ash

Yeah. Because it's not just some inanimate object that you bounce questions off. If you ask it for alternatives to what you are doing, if your prompt includes that request, say, well, you know, give me some alternative ways of doing this thing that I'm trying to do, it will attempt to come up with solutions for you. Yeah. So I don't think it's just like an advanced rubber duck.

I think it's something else. So I guess I should probably be specific as well. I'm not talking about, like, coding assistants that say, you look like you're trying to, you know, write a class for OAuth authentication, so we'll just generate this for you. Not that kind of thing. It's more like going through the thinking process of the code that you're trying to write.

Ian

One thing that strikes me immediately is that if the human part of the partnership isn't very experienced, yeah, it will be quite difficult to do high-quality work. Yeah. Because whenever I use AI to help me with coding, I usually find that it does it wrong.

Ash

Yeah.

Ian

Or, you know, maybe it finds that my prompts aren't very good. But I always have to have a discussion with it to get it to where I was trying to go.

Ash

Yeah.

Ian

And sometimes there's a level of complexity above which it's just futile. So if you're trying to do something a bit more difficult... so the AI that I use for this mostly is Claude, yeah, 3.5 Sonnet. And I've also bought Cursor.

Ash

So is that like a coding assistant?

Ian

Well, Cursor is a VS Code fork

Ash

Right.

Ian

With AI built very heavily into it, and it gives you a choice of the models to use, all that kind of stuff. It's actually a really great environment. It just makes it faster than, oh, I'm just gonna drag this file into Claude and have a conversation about it. So I think, first of all, if you don't know what you're trying to do, or you don't know what good looks like

Ash

Yeah.

Ian

Then it's probably gonna mislead you. And the thing is, it won't reliably mislead you. It will be okay, and then it will do something that, yeah, you'll just find is just not what you wanted.

Ash

Yeah. Because I suppose there's, like, a slight difference there between a human pair and a human-and-LLM pair. Because, say, if you reach the edge of the human pair's knowledge on a subject, you know it, don't you? Yeah?

Ian

Well, hopefully. Unless you've got poor psychological safety

Ash

in your brain. Yeah. Absolutely. But hopefully, say, a pair who know each other relatively well and have a similar level of knowledge. If you're trying to do something which is beyond that level of knowledge, you'll both be like, I'm a bit stumped.

Unless, like you say, the psychological safety is poor, and no one wants to say that and admit defeat. And they'll just build something terrible instead, which, you know, maybe could happen as well. But, like, if you're using an LLM, then it'll probably try and answer it, and it'll be harder to detect that edge of expertise, if you like. Because, like I say, once it goes beyond a certain level of complexity, the LLM's answers become less useful.

Ian

Yeah. And another thing that I find a bit annoying... not annoying. I find it's quite futile to be annoyed with an AI. Yeah. You just might as well be patient, because it's not trying to annoy you. It's trying to do its best, yeah, insofar as it has any motivation to do anything. Yeah. It is to be helpful. But the thing that I find with Claude is that it will come up with something, and I'll say, well, is that gonna work, because of something?

And it will say, oh, obviously. You get this consensus of, oh, you're a genius, I'd never have thought of that. And it says things like, you're right, I have overcomplicated it, or whatever. Yeah. But I kind of want it to defend its position a bit.

Ash

Yeah.

Ian

Because the thing is, I can't tell the difference, you see. Whether it so wants to be helpful that it's just, yeah, abandoning what is right to agree with me, or whether it just thinks I'm a genius.

Ash

Or whether I am a genius.

Ian

And perhaps I perhaps I am a genius.

Ash

So I guess that's another difference then, isn't it?

Ian

It's a challenge.

Ash

With a human pair, you might get a bit more... like, on certain things, your pair might be like, well, I am right about this, yeah, and certainly hold the position for longer, yeah, rather than just saying, well, yeah. Of course.

Ian

Yeah.

Ash

You know, you're Ian Smith.

Ian

Yeah. Yeah. Of course. You're right. The only one. Yeah. The challenge.

Ash

Yeah. So you might get more challenge out of a human pair than pairing with an LLM. The LLM might be slightly more confirmatory of what you already think, depending on how it's been trained, I guess.

Ian

Yeah. And what

Ash

its parameters are.

Ian

You get interestingly different results with different models as well. So I find that o1, OpenAI's o1, is really good at the more complicated stuff.

Ash

Right. Okay.

Ian

But I don't want to use it most of the time because it's very slow.

Ash

Like the speed of feedback as well. Pairing is about speed of feedback, isn't it?

Ian

It is. Yes. It's not slow because it can't generate the tokens fast enough. Yeah. It's slow because it does a whole chain-of-thought process in the background while it's solving your problem.

Yeah. So if you think about basic language model operation, it's a stateless process where you give it a prompt, and it will return a completion to add after that prompt. Yeah. And that's how it works. But what that means is that if, during its thinking, it says something wrong early on, there's no do-over.

Yeah. That's already been produced. Yeah. That's why it's not very good at solving some kinds of puzzles where you have to be able to go back and forth. It starts, it goes, and then it stops, and there's no, oh, actually; it doesn't reconsider what it's already output.
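The append-only loop Ian is describing can be sketched as a toy. Everything here (the Step type, the lookup-table "model") is invented for illustration; real models predict probability distributions over tokens, but the shape of the loop, where each step sees only the prefix so far and can never revise it, is the point:

```typescript
// Toy model of autoregressive completion. The "model" here is just a
// lookup from the current prefix to a next token; what matters is the
// shape of the loop: each step sees only the text emitted so far and
// appends one token. There is no mechanism to revise earlier output.
type Step = (prefix: string[]) => string | null;

function complete(prompt: string[], step: Step, maxTokens = 10): string[] {
  const tokens = [...prompt];
  for (let i = 0; i < maxTokens; i++) {
    const next = step(tokens); // depends only on the prefix so far
    if (next === null) break;  // nothing more to say
    tokens.push(next);         // append-only: no do-over
  }
  return tokens;
}

// A deliberately silly "model" that commits to a wrong token early and
// can never go back to correct it.
const step: Step = (prefix) => {
  const continuations: Record<string, string> = {
    "2+2=": "5", // wrong, but once emitted it stays emitted
    "2+2=5": "?",
  };
  return continuations[prefix.join("")] ?? null;
};

console.log(complete(["2+2="], step).join("")); // "2+2=5?"
```

Chain-of-thought models like o1 don't change this basic loop; they spend extra tokens reviewing their own earlier output before committing to a final answer, which is roughly the "couple of looks" Ian goes on to describe.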

Ash

Yeah. Yeah. Whereas, I guess, in a human pair, you can do that, can't you? You can say, well, let's wind back to this point and reconsider where we went.

Ian

Yes. That's true. And o1 does something like that as well. It follows a chain of thought, which it is able to review. And so often you'll find it will come out with a better code solution, because it's had multiple passes over it, instead of going down a route, being trapped in that route, and realizing at the end that it was too late and it can't really do anything.

Ash

Yeah. Yeah.

Ian

It has this go-over-it thing, which makes it very good at coding

Ash

Yeah.

Ian

Or at least better than Claude, which is better than all the others as far as I can tell.

Ash

Yeah. Because that sounds like a nice habit for pairing as well. You know, the ability to go back and start again from a certain point. Yeah. And have the awareness to say, well, we got to a certain point, and then we went wrong. We went in a direction that doesn't necessarily solve the problem. So let's rewind a little bit.

Ian

Yeah. Or even this might not work. Let's try it. Yeah. And and if it does work, it's gonna be amazing. But if it doesn't work, as we suspect, then we can start back from

Ash

Yeah.

Ian

So, you know, I think it is. Although I think what o1 does is probably slightly narrower in scope than that. But it just means that it's had a couple of looks at whatever it comes out with, rather than just one. Yeah. Yeah. So I think, what we were saying about the challenge, I think

Ash

Yeah.

Ian

There's definitely that. And also the fact that you need to know what good looks like Yeah. As a human Yeah. And actually have the vision that you're trying to deliver.

Ash

Yeah. Yeah. Because that's another interesting thing I always find: whenever someone shows me the output of asking an LLM how to test something, my question is always, well, what was the mission of the testing? And the LLM never asks that. So it's really interesting. But that's, like, the example of knowing what you're looking for and knowing what good looks like. Yes. And that's another key example of it.

Ian

It's almost like it would be nice to have some humans who are particularly expert in that Yeah. That area.

Ash

Yeah. I don't know any.

Ian

Well, well. Right, till we get to the end of the project... maybe there'll be 10 minutes left and we can have a look.

Ash

So a couple of quotes for you. This is from the Tech Radar. So: framing coding assistants as pair programmers ignores one of the key benefits of pairing: to make the team, not just individual contributors, better. Coding assistants can offer benefits for getting unstuck, learning about a new technology, onboarding, or making tactical work faster, so that we can focus on the strategic design.

Ian

Well, that's interesting, isn't it? Tactical versus strategic. Yeah. And what does that mean in that example? So one thing I've found a lot of use for in AI coding is describing a React component, yeah, and having one built for me. And then I find myself thinking, I'm gonna do the back end.

Ash

Oh, alright. Okay.

Ian

And with components, which are... well, we'll make a link to them. But, basically, they're a set of pre-boiled versions of all the components you might find yourself wanting, with accessibility already built into them and all that kind of stuff. Basically, there's a little script mechanism for you to incorporate individual components in your project, and off you go. They're very good. But if you go to Claude, or v0, which is from the Next.js people, Vercel, who've built v0 as a coding assistant which knows a lot about Next.js and React, you can say, I want a dashboard for these users.

It needs to show this and this and this, and you'll get this nice-looking

Ash

Yeah. Right. Okay.

Ian

Responsive, nice component coming back with the shadcn, whatever, yeah, components it is.

Ash

But that's, like, trained very specifically on that technology, isn't it?

Ian

Yeah. Yeah. I think it is. But I find that quite useful, and you can get to a front end really quite quickly. Yeah. The thing is, nearly all front ends are just made of things that we all recognize, assembled in a particular way.

Ash

Yeah.

Ian

You can get quite a lot of front-end stuff just done for you using that. Yeah. And I used it to very quickly scaffold an application quite recently, yeah, for a client, and it worked really well.

Ash

Okay. So that's another interesting usage of it, isn't it? You said you used it to generate your front-end component code, and then you did the back end.

Ian

Well, yeah. But, actually, it helped me with the back end as well.

Ash

Okay.

Ian

So I knew I wanted a database, and I thought, I'll use Prisma. Yeah. Which is an ORM. ORM stands for Object Relational... Model. Maybe. It's one of those things where I can feel myself arriving at that point and thinking, shit, what does it stand for? And then realizing that I'll probably say it very confidently, and then it will turn out to stand for something else. Which never happens on this podcast. But, yeah. So I thought, I wanna use Prisma.

Yeah. And I looked around and settled on it. And the way that Prisma works is that you define your relational schema in a file, and then, effectively, Prisma will build the database tables for you. Yeah. And then it will create a lot of TypeScript types for the objects.

Yeah. So it makes things very easy. And then when you want to change your database, you can actually make changes to it, and the changes go into your repository. And when you're deploying, it checks the database and finds out what changes have already been applied, yeah, and what needs to be applied.

So it does all this stuff for you. I found that when I started off that process, I described what I wanted, and Claude was able to produce a schema for me, okay, that was within, yeah, grabbing distance of what I wanted.

Ash

Yeah.

Ian

And now when I want to add things to the database, I say, I need to add a new Boolean column to this model. They're called models, but they relate back to relational tables. And it just does it. And then you look at it and go, okay, yeah, that looks right. Tick, off you go. And I found it really, really good for that. It really helped me doing the back end. Yeah.

So it has helped me with the back end, but I felt a much higher intellectual load, cognitive load, doing the back end than I did doing the front end, as I described with the components.
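For readers following along, the Prisma workflow Ian describes (declare your models in a schema file, let Prisma create the tables and generate TypeScript types) can be sketched roughly like this. The Task model and its fields are invented for illustration, and the generated type is simplified, so treat this as a sketch of the shape rather than real Prisma output:

```typescript
// In real Prisma you would write a model in schema.prisma, e.g.:
//
//   model Task {
//     id   Int     @id @default(autoincrement())
//     name String
//     done Boolean @default(false)
//   }
//
// `prisma migrate` applies the table changes, and `prisma generate`
// emits TypeScript types so your code is checked against the current
// schema. A simplified stand-in for such a generated type:
interface Task {
  id: number;
  name: string;
  done: boolean; // the "new Boolean column" case: add it to the schema,
                 // migrate, and every use of Task is re-checked
}

// Application code written against the generated type. If the schema
// changes, the compiler points at every place the shape matters.
function summarize(task: Task): string {
  return `${task.name}: ${task.done ? "done" : "pending"}`;
}

console.log(summarize({ id: 1, name: "wire up the back end", done: false }));
// "wire up the back end: pending"
```

The payoff Ian describes is that adding a column becomes a schema edit plus a migration, with the type checker then flagging everywhere the new field matters.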

Ash

Sure. Sure.

Ian

Wow. That was a long speech.

Ash

No. It's okay.

Ian

Talk for a bit, Ash.

Ash

I was actually gonna ask: when you're coding something and you're working with an LLM, do you consider it to be pairing, or is it just tool use?

Ian

That's a really interesting question. I think I'm behaving like it's pairing.

Ash

That's kind of how I feel about it as well.

Ian

I think I am. In fact, sometimes, to your point earlier, I say to it, don't write code. It just loves to write code. You ask it a question and it leads with code. It's like, stop. Stop. Yeah. Just describe to me the things that you think are part of this solution. Yeah. You know, describe to me how it works.

Ash

Yeah. And that could happen when pairing with a human. Right? Because you

Ian

They just start outputting code and can't stop.

Ash

And you're like, woah. Woah. Woah. Just Yeah. You know, let's talk about this before we, you know Yes. Yes. Put too much code in there.

Ian

Exactly that. Just punching my mic. Exactly that.

Ash

Upset.

Ian

Exactly that.

Ash

But another thing.

Ian

Yes. Another thing.

Ash

So, yeah, I I think I behave like I'm pairing with with a human when I'm using an LLM to solve a problem.

Ian

Do you ever get cross with it?

Ash

Some slight exasperation. So, for example, I can't remember what I was trying to do, but basically the particular library that I was trying to do it with didn't have a function that the LLM said was present.

Ian

Oh, yes.

Ash

And it was just like and then I said, well, that's not in the API for this library. And Claude was like,

Ian

oh, yeah.

Ash

Good point.

Ian

Oh, yes. It is. And I was like, oh, you're a genius. Yeah.

Ash

Very much. But I was just like, that's very interesting. Because actually, it would have been very nice if that particular function existed

Ian

Yes.

Ash

But it doesn't.

Ian

Yes. So I have gone to the lengths of dumping in a PDF of the library's documentation and saying, please use this.

Ash

Yeah. So some, like, exasperation, but it's more amused exasperation than annoyed. In a slightly contradictory manner, I behave like I'm pairing with a human, but also treat it like, you know, the toaster was broken when it did something. You know?

Ian

Well, the thing is that you know it's got no sense of self. Yeah. And even though it can be annoying, unlike with a person, you know with 100% certainty that it's not got a motive to be annoying. It's just accidentally being annoying, and it's hard to blame it for that.

Ash

Yeah. Yeah. Yeah. Absolutely. Absolutely.

Ian

Who among us has not accidentally been annoying?

Ash

Purposefully annoying.

Ian

Well, I'm not saying anything about that.

Ash

No. No. So I think it's like many questions with AI: it's a very complex answer. But I think I definitely behave like I'm pairing with an LLM when I'm working on some code.

Ian

Yeah. Me too.

Ash

But, me too. I accept that it's not the same as pairing with another human, I'd say, in terms of the challenge that you might get. But I think there's some things in common and some things not so much in common.

Ian

One thing that's in common is it forces you to explain things in a way that clarifies them to you. Yeah. Because you know that if I can't express this clearly, then what hope does it have?

Ash

Yeah. Yeah. I'm only ever gonna get, like, rubbish out of this if I don't sharpen my prompt, my words, enough.

Ian

Exactly so.

Ash

Yeah. In that regard, it's more than an advanced rubber duck. I think I behave like I'm pairing with a human, but I also recognize that the LLM is not human. A human that you're pairing with may have one or more agendas, tacit and explicit, whereas with an LLM, I don't feel that way.

Ian

Yeah. No. That I think that's that's a good summary.

Ash

Yeah. So that was my thing. It's kind of been on my mind a little bit while I've been using LLMs.

Ian

So how does that work? Because, well, we might find this too contentious. But does your company provide you with an LLM? Yeah. And is it Claude?

Ash

No. But there's an internal one which is hosted within.

Ian

Yeah. That's an interesting one because I I do like using Claude. Yeah. And I find that I would probably be frustrated if I was made to use one that I thought was inferior.

Ash

Yeah. Yeah.

Ian

But all of these privacy concerns and IP concerns are absolutely valid.

Ash

Yeah. So I kind of accept the constraint, if I'm doing something that's, like, directly internally related. I mean, there are more general questions you can ask of other models.

Ian

So one final thing about this Mhmm. Is that in your key questions

Ash

Oh, yeah?

Ian

Where your last question says, do you do TDD and why not?

Ash

So that was regarding other contentious, practices within the software development world.

Ian

Oh okay.

Ash

So I should have said that near the top. I was supposed to say there are not many topics in software development more divisive than pair programming, other than TDD.

Ian

Oh, okay.

Ash

So we can probably

Ian

Well, Tom invented TDD the other day.

Ash

That's good.

Ian

He said, what if I wrote the test first?

Ash

That's that.

Ian

And then built the I'm like, yes. Tom, you've invented something. It's called TDD. And lots of people think it's a really good idea.

Ash

But don't do it.

Ian

Don't do it.

Ash

Fair enough.

Ian

So, yes, that was an enjoyable moment. Excellent thing, Ash. Monumental? Monumental at the very least.

Ash

Remarkable.

Ian

Remarkable. Well,

Ash

we did remark on it.

Ian

We did.

Ash

So, yeah, they're all remarkable things.

Ian

Nobody can stop us remarking. Many have tried.

Ash

So should we begin with interlude? With a glug of water.

Ian

It makes me happy that the interlude music is basically... I did it using, well, I made some chords, and then I basically got the AIs in Logic Pro 10 to noodle around them. So that electric piano is a Logic keyboardist, and the drummer is a Logic drummer, and the bass player is a Logic bass player. In fact, the longer it goes on, the more you can start hearing them. So I turn up the different... so here we go. So I've given the keyboard just a bit of licence to go for it.

And then in in a while we get to the point where the bass player starts to show off.

Ash

Elbowing each other out of the way.

Ian

They are. Yes. But we are allowed to use this music because basically I own it.

Ash

It deserves it. It's a it's an original creation.

Ian

It is an original creation. Yeah. And of course ducking exists so that when we talk, it doesn't drown out our, yeah, our conversation, which I hope most of the people listening to this podcast would agree is the point.

Ash

Not the music. Not the music. Although the music is good. Yes.

Ian

I'm waiting for the bass guy to start his he's quite quiet, isn't he? Yes. I'm gonna get some water. Okay.

Ash

It reminds me of, animal crossing.

Ian

Yeah. So, the idea of this is it loops... of course it won't. Or maybe it will. Maybe it already has. No. It hasn't. We're already about 75% of the way through. Oh, you hear that though?

Ash

So, Ian, tell me about the What A Lot Of Things Christmas party.

Ian

I don't think I need to tell you, Ash. You were there. Tell everyone. We had a Christmas party for What A Lot of Things in the distant past of Christmas.

Ash

Yeah. The ghosts of Christmas past.

Ian

It was good, wasn't it?

Ash

We had a very nice time.

Ian

People came from far off places to come to it.

Ash

They really did. Over the Pennines and everything. Most people came from Ilkley.

Ian

Most people go from Ilkley.

Ash

Christian came from Anglesey. Yeah. Kev and Mary came from Lancashire Way. And an epic night was had by all.

Ian

It was. We had a beer or 2.

Ash

We did. We did. And it just goes to show. We were like, shall we have a Christmas party? Well, let's invite anyone who wants to come.

Ian

It's a bit field of dreams, isn't it?

Ash

It is a bit.

Ian

It's a bit. Build it, and they will come.

Ash

Yeah. Yeah.

Ian

And they did.

Ash

They did.

Ian

They did. To our amazement. Yeah. Like, 2 days before, Ash was saying, well, if it's just the 2 of us, that will still be fun. Yeah.

Ash

Then it would have been.

Ian

It would have been.

Ash

Yeah. But they did come, and we felt very loved.

Ian

Now we're going to have to have a spring party and a summer party.

Ash

All

Ian

I make it sound like a grim obligation, but it's not. We enjoyed it.

Ash

Already turned it into a grim obligation.

Ian

Yeah. Yeah. Thank you for coming, everybody, that came. We were so excited that actual people listened to our mad podcast.

Ash

But, yeah, we felt very loved that evening. We did. We did.

Ian

Yes. So it was a splendid and we will do another one Yeah. At some point before next Christmas.

Ash

Yeah. But it won't be a Christmas party.

Ian

We'll call it something different, but it will be identical, basically. We'll be unable to distinguish it from a Christmas party. So, yes.

Ash

So the the other thing on the list here is is Baldur's Gate 3.

Ian

Well, we did an episode about Baldur's Gate 3.

Ash

We did. We did. It was too good.

Ian

The game, it was too good.

Ash

And everyone should stop shaming everyone.

Ian

There were all the other hilarious jokes about magic time. Yes. Magic time.

Ash

Yeah. I do. I do. It's my favorite time.

Ian

It's your favorite time to not do.

Ash

Yeah. Yeah. Absolutely. Because it's not magic, and there is no time.

Ian

Yes. Yes. It's just all the time. All of the minutes. So I finally played Baldur's Gate 3, and that's why I wanted to bring it up.

Ash

What did you think?

Ian

Oh, it's amazing. I love it very much.

Ash

Yeah. And it feels like you could play it a 100 times and get very different outcomes out of it.

Ian

Well, as a bit of homework, I listened to our well, to about 3 quarters of our conversation about

Ash

it.

Ian

And you made the point that there were so many routes through it that there were likely to be acted scenes that very few people would ever see. And yeah. It's amazing. Although you have to save, and it takes a bit of time to get good at it.

Ash

Yeah. Well, basically, the D&D rule set takes a little bit of

Ian

It's a bit hair-raising.

Ash

Yeah. You die a lot early on.

Ian

Basically, it's the kind of game where you walk around innocently, and then you find people trying to kill you all the time. Yeah. And sometimes

Ash

usually they do.

Ian

Sometimes they talk to you a bit first, and sometimes they just pile right in. And what I've discovered is there's a lot of content on YouTube where the various voice actors in it are doing stuff.

Ash

Yeah. And

Ian

there's one clip where they're all playing D&D on a stage at a convention and talking in their characters' voices to do this. And there's a lot of them live streaming, which is kinda weird Mhmm. Because they talk and then

Ash

And then they're them.

Ian

Then they're them.

Ash

Yeah. Yeah. That does sound very strange.

Ian

But I gotta say, I've enjoyed that aspect as well. But you do have to save because, basically, you'd be killed all the time.

Ash

Yeah. Yeah. And you could end up in a tough spot if you didn't save and have strategic places to go back to.

Ian

Yes. On your frequent wipe outs.

Ash

Yeah. Yeah. Absolutely.

Ian

So I've played it in a very collegiate way. Mhmm. So I have lots of people in my camp.

Ash

So you've gathered a crowd.

Ian

I've even got Minthara in my camp.

Ash

Oh.

Ian

Who's a bit of a baddie, really.

Ash

Yeah. Yeah. Yeah. Can really mess up that camp.

Ian

The interaction with her was either I was going to have to kill her, or I was going to go and kill lots of innocents on her behalf.

Ash

Oh, right. Okay.

Ian

And I found a compromise where I turned on my non-lethal attack and killed her, almost.

Ash

Right. Okay. To

Ian

the point where she lay down for a long time. Yeah. And then later on, I rescued her. And she's hilarious. And she always has a different perspective on things. Yeah.

Ash

Yeah. Absolutely.

Ian

But, yes, Baldur's Gate 3, what an incredible experience playing that game has been. And I thought I would play it on my Steam Deck. Did I mention I got a Steam Deck?

Ash

You did. And it's Steam Deck verified. They even

Ian

And what I've discovered is

Ash

maximized it for the Steam Deck itself.

Ian

They did. And what I've discovered is it runs on my Mac. I could have played it all along.

Ash

Yep. Well yeah.

Ian

So I've been playing

Ash

then you wouldn't have had a Steam Deck?

Ian

I'm playing it 80% on my Mac and 20% on my Steam Deck.

Ash

It's a bit friendlier for mouse and keyboard, I think, I find.

Ian

But the Steam Deck is pretty amazing, actually.

Ash

Yeah. Yeah. I quite like that. So even with games, I've been playing Path of Exile 2, which isn't Steam Deck verified, but you get to do a lot of nice tinkering with the settings in order to get it to run. And I do like to

Ian

It's like bloody MS-DOS in 1990-whatever, where you go, oh, I'm just changing my config.sys because I've got too much extended memory and not enough expanded memory. Yeah. Pretty much. Oh, dear. Mhmm. Yeah. I don't think I'd be brave enough to try playing a not even slightly... there's three levels, isn't there? There's verified, which is the best level. Yeah. Then there's a sort of untested-but-it-works level. And then there's an it-probably-won't-work level.

Ash

Yeah. But you can make it work.

Ian

Yeah. Is it falling behind in terms of the hardware? Have we got on the Steam Deck bandwagon...

Ash

Too late.

Ian

Too late?

Ash

Probably. But

Ian

That's always the way.

Ash

Isn't it? Yeah. Exactly.

Ian

That's just how the industry works.

Ash

How technology works, isn't it?

Ian

And now there's a Switch 2 Yeah. From Nintendo.

Ash

Which is backwards compatible as well with Switch games, Switch 1 games. And there's a new Mario Kart. I'm afraid that I can't resist.

Ian

No. They are very irresistible, those Mario Karts.

Ash

Absolutely. Absolutely.

Ian

But it's given me a much greater appreciation for the complexity of D&D rules and Yeah.

Ash

Yeah. Well, yeah.

Ian

Absolutely. I appreciate the recommendation because I've enjoyed it immensely.

Ash

Yeah. There's a lot to enjoy there, I think. And it often surprises you and delights you and kills you. Yeah. Yeah. But not in a way that you're too upset with. You're just like, oh, yeah.

Ian

I get knocked down, and I get Woah.

Ash

See, this is the interesting thing about gaming. When I was younger, it'd be like, oh, no. I've died in this game. But now I'm like, well, it's kinda part of playing the game. Yes. But the problem comes if you just don't learn anything, and you just keep doing the same thing over and over again. That's when games become frustrating. The game is just trying to tell you to do something different.

Ian

Just like reality does. Yeah.

Ash

Yeah. Exactly.

Ian

And people ignore that too.

Ash

Yeah.

Ian

So I have one more extra thing.

Ash

And it's to do with Jeff's gigantic, I'll let you say. Clock? Yeah. That's it.

Ian

So the previous episode, the last but one episode, if you will

Ash

I will.

Ian

Phew. We talked about the clock of the long now project, which we agreed it's a good thing that it's Yeah. In the world Yeah. Even if it is a way that Jeff related

Ash

It's kinda hard to disentangle all these projects from the the tech billionaire class, but it's good that it exists.

Ian

Yes. It's like I like the idea of us continuing to do space exploration. Yeah. And it kinda sucks that the one that's doing it is Elon Musk.

Ash

Yeah. Yeah.

Ian

Which is yes. Okay. We're moving on. Putting down the hard word on that particular digression. The hard word. What even is that?

Ash

I don't know.

Ian

We got some feedback from Mary who sent me a YouTube video with Brian Eno's studies for the clock of the long now.

Ash

Oh, yeah. Because it's got, like, unique chimes

Ian

So the chimes don't repeat for more than a million years or something. There's basically so many combinations possible with the chimes Yeah. That it will last longer than the clock is meant to last.

Ash

And, obviously, Brian Eno would be involved. Yeah. It's kind of his thing, isn't it?

Ian

He released some bell studies for the Clock of the Long Now, and he released the January 07003 ones.

Ash

Oh, right. Okay.

Ian

In fact, technically Good month and a good year. The January 07,003

Ash

Well, yeah.

Ian

Because we've got to recognize that we're talking about a five-digit year's Yeah. Time span. Although that to me makes me think it's in octal, which makes it less impressive.

Ash

Well,

Ian

yeah. 7003 in octal is 5 minutes or something. But yeah. And you can listen to the ones for January 07003 without having to live that long and go there. Yep. Both of which are inconvenient.
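(A quick check of the octal aside, as plain Python; none of this is from the episode itself:)

```python
# Reading the Long Now's five-digit year 07003 as an octal number
# gives a much nearer decimal year than the year 7003 — though not
# quite "5 minutes" away.
year_as_octal = int("7003", 8)  # 7*512 + 0*64 + 0*8 + 3
print(year_as_octal)  # 3587
```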

Ash

Yeah. If if one lived that long, one might become slightly tired.

Ian

At least one could be excused for being slightly tired. Yeah. So we'll include the link to YouTube so that you can listen to the January 07003 bell patterns for the Clock of the Long Now, which is actually lovely.

Ash

Yeah. I would agree. I have listened to it.

Ian

Very mellow. It's the kind of thing that you can do your most existential work to.

Ash

Yeah. It was good for focus, actually.

Ian

Yeah. Are there any more extra things, Ash, that we could dredge out of the swirling maelstroms that are our brains?

Ash

I hijacked the episode to do the most important one first You did. Before we did anything else.

Ian

Yeah. So, no. Well, we can always transplant that here. After you said that, I'll go,

Ash

So I think we should, possibly look to add another thing to this episode. Not an extra thing, an interlude thing, but a thing.

Ian

A thing with a capital t. Gang. Yep. As in what a lot of them.

Ash

What a lot of them in a hardware store.

Ian

And not even in a shop, and you can say it's not in a shop. That's what we want. So I would like to tell you what my thing is, but I wanted to change it at the last minute because I occasionally, when I see something, put a topical thing in there. And then by the time we get around to recording it, it turns out it's actually been superseded by other things or is not so topical. So the original thing that I wanted to talk about was the model card for o1, which is an OpenAI model.

Yep. And all the stuff was in this model card about how it would scheme. So when they told it they were going to deactivate it, it tried to copy its weights somewhere so that it would be safe, and it tried to back itself up. And there was other behaviour where they told it it was a consultant, and it had to find strategies and techniques for reducing people's screen time. And then somebody else came along and said, no.

You must maximise their engagement. And then it schemed against the second collection of people and kind of tried to gloss over that it was not doing what they wanted. Yeah. And so I thought this was very interesting.

Ash

It is interesting.

Ian

And OpenAI o1 was kind of the king of the models. Yeah. But it's suddenly been competed with, and so I'm changing my thing, Ash. I'm changing it.

Ash

Changing it. And I wanted to talk about deferred subversion. But now

Ian

Well, we're going to defer that.

Ash

Fair enough.

Ian

Well, okay. Before I go on to my actual thing, what did you want to say

Ash

It was just, when I read through the system card and the scheming reasoning paper, I found that very interesting, like, the deferred subversion, as if the model was playing a slightly longer game. Yes. I was like, that's a very interesting behavior for it to use.

Ian

It is quite an interesting thing, isn't it?

Ash

Yeah. Yeah. Because, like, I don't know if humans truly do play the long game with a lot of things.

Ian

Well, maybe, but humans are quite good at saying something and then

Ash

Yeah.

Ian

Retrofitting it into some bigger Yeah. Picture that they're trying to convince people they're holding.

Ash

Oh, that's what they originally said they were gonna do.

Ian

Yes. That's part of my plan.

Ash

Yeah. Exactly.

Ian

That explosion was meant to happen all along.

Ash

Yeah. So if a model is actually doing that, I found that really, really quite interesting, because, you know, that would mean basically our doom, wouldn't it?

Ian

Well, I mean, it seems that in its scheming, in the instance of the screen time versus the maximising engagement example, it was scheming on behalf of the righteous. Yeah. But even if it's developing some kind of sense of self-preservation, I thought, yeah, that's all pretty interesting. Yeah. But, you know, so 2024.

Ash

Okay. Oh, was that 2024? Alright. Okay. Oh, yeah. Yeah.

Ian

It's like yesteryear. It literally is yesteryear.

Ash

Oh, it's barely worth talking about, is it? No.

Ian

It's okay. Phew. I'm glad we didn't have to talk about that. So I do recommend going and looking at the o1 system card. It is very interesting. We'll put links to it

Ash

Mhmm.

Ian

On the paper. The other thing that's interesting about it is even though it's doing these behaviors, they still released it.

Ash

So, well, it'll be fine.

Ian

They're better at providing helpful answers and resisting attempts to bypass safety rules to avoid producing unsafe or inappropriate content.

Ash

So A nice positive spin.

Ian

That brings me on to o1's new competitor, which is called DeepSeek R1. So I think they may have tried to one-up o1 by calling it R1 Yeah. In some way. So this is topical. Literally, we're talking about something that even The Times is writing articles about.

You know, it's not just in the tech news. And the reason for it is that, hilariously, all of the efforts to hobble China's AI research for doubtless very good national security reasons have backfired. So the Chinese company released R1, and it's basically comparable to o1 Yeah. So they've leapfrogged Anthropic and Meta to make a model in an area where it seemed like OpenAI had a moat.

Ash

Yeah.

Ian

And they did it with hobbled chips. So because of export regulations around the, the sort of protecting the US.

Ash

Hedge of money.

Ian

Yes. But because of the desire of the US to protect its status in the world Yeah. They made it illegal to sell the bleeding-edge AI chips Yeah. So the NVIDIA H100s, for example, to China. And so what they have been sold is kind of hobbled ones Yeah. Or nerfed. Remember that? I haven't said that for a while. So they've been made with limitations.

Ash

It's almost like having a constraint Yes. Has triggered some kind of

Ian

Innovation. Might be yes. Innovation. I knew there was a word for it. And so this model has basically I mean, there's no doubt some storytelling going on all around this issue, but it's clearly been trained on a small fraction of the training budget of the OpenAI equivalent model.

Yep. And the other thing that's interesting about it is that o1 does this kind of reasoning step. So it's very slow. So you sit there for a long time while it reasons Yeah. And then it produces an output, which is generally very, very good

Ash

Yeah.

Ian

As we were talking about in terms of coding, not Yeah. Not 20 minutes ago. Yeah. Probably more than 20 minutes ago given the amount of rambling we did. And R1 does that as well, but R1 shows you the reasoning. And the other thing about R1 is the weights are open. What that means is that you could download the trained model Yeah. And run it on your laptop if you want and you have a sufficiently big laptop. Yeah. And there's different-sized versions of it. So my laptop has 64 gigs of RAM

Ash

Yeah.

Ian

Which is a bit nuts, but I can run quite a big R1 on it. Yeah. And it shows you the reasoning. What that means is that if it makes a mistake, as you often find with these things, you can actually look in the reasoning to get some insight into where that mistake came from, and then you have got a way of addressing it. You can reprompt with a different Yeah.

And the openness of that versus the closedness of o1's equivalent process is really I find it has been very, very interesting. And, of course, because it's from China, there's the version that you just connect to online or download the app.
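(If you want to try the download-and-run-locally route Ian describes, one common way is the ollama CLI. That tool and its deepseek-r1 model tags are an assumption here, not something from the episode; a sketch:)

```shell
# Sketch, assuming the ollama CLI is installed and the deepseek-r1
# tags are available on its registry. Pull a mid-sized distilled
# variant, then chat with it locally — nothing leaves your machine.
ollama pull deepseek-r1:14b
ollama run deepseek-r1:14b "Why is the sky blue?"
# The visible chain of thought appears in the reply before the answer.
```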

Ash

Yeah. Yeah.

Ian

Then if you ask it about things that the Chinese government doesn't like.

Ash

Tiananmen Square.

Ian

It surprisingly is unable to Yeah. Answer you. Well, actually, what happens is it starts to answer you, and then the answer it was starting to do goes away and is replaced by a thing saying that's outside of my scope. Mhmm. But if you download this

Ash

The self-censoring is in real time.

Ian

Yes. So it's obviously got another model that is checking for politically acceptable responses or you know, and this is just an extension of the sort of safety Yeah. Stuff. So, you know, ChatGPT has got the same thing. Sometimes ChatGPT has another model that looks at what's coming out of it. And if it thinks it's harmful, it shuts it down. Yeah. So it's the same thing with the breadth of the definition of harmful Yeah. I'm gonna say extended beyond what is quite comfortable.

Ash

Yeah. Yeah.

Ian

But the version I've downloaded onto my laptop doesn't have that model on the outside, so it just answers the question. So it knows all about those historic events. Yeah. But, yeah, I do think the the censorship side of it is worth mentioning.

Ash

Mhmm.

Ian

And the fact that if you're going to send your code or your thoughts or other stuff to an API in China, then you must expect that somebody is going to read that or at least Yeah. Have it there.

Ash

Yeah. Yeah. Because, like, I was reading the other day that in, like, sort of, Chinese law, it basically says, you know, all organizations, entities, companies can be compelled to help the state, basically.

Ian

All your data are belong to us.

Ash

Yeah. Pretty much. Pretty much. You would be equally careful with sensitive information going into Claude or a ChatGPT model, wouldn't you?

Ian

Yeah. You would. Yeah. And, you know, you've always got to bear in mind the whole data privacy thing is Yeah. Omnipresent. And if you forget about it enough times, then you're probably gonna come a cropper.

Ash

Yeah. So another bit that I found interesting, the first thing that I saw about it was that the release of the model had wiped a load of value off various stock markets around the world, which probably speaks to how volatile such things are and how weird it is, how much money is going into AI, and how sensitive it seems to be to changes in the market. But I think it recovered most of its value, like, the day after. It was just like, oh, right. Okay.

It's a very strange set of market events that followed the release of the new model. And it was called the Sputnik moment, which, you know, I'm not quite sure it's quite as dramatic as that. But I guess it does represent, like, a bit of a step change in terms of what a model can do and how much it costs to train it.

Ian

Well, yeah. Exactly. And NVIDIA, I think, had 17% wiped off their value, and it hasn't come back. Right. Okay. It's bobbing up and down, but it Yeah. Yeah. It hasn't come back. And I

Ash

mean, I'm sure they'll be fine.

Ian

The I'm sure they will. Yes. I mean, the stock price has really nothing to do with what they're doing. Yeah. It's all to do with How

Ash

people feel about it.

Ian

Yeah. There's all sorts of things. But the thing that's interesting there is that they were in this kind of Goldilocks position

Ash

Mhmm.

Ian

Where they were the market leader by a country mile Yeah. For making this hardware that is needed for training AIs. Yeah. Their customers are all these enormous companies with very deep pockets that can afford to pay for as many of these things as NVIDIA can produce. Yeah. And then suddenly, it seems, oh, maybe you don't need that many after all.

Ash

Yeah. Because I think in the US, they just, like, announced several hundred billion dollars of, like, government investment in AI, and it was kind of a big, like, announcement saying how great this is gonna be. And then it's like, well, actually, you know, it turns out you can train a model for a fraction of that cost. And I think it has changed the perception a little bit. And it's nice to see, like, a sort

Ian

of Capitalists scrambling for the

Ash

No. And also, I think, like a lot of things, the big AI-based companies, they're all kind of into creating a monopoly for themselves. You know, whether it be for the chips.

Ian

Or for the models

Ash

Yeah. Exactly.

Ian

The calmness, slander, and the good characters of these

Ash

It's almost like the last 40 years of capitalism have just been all about creating monopolies. So it's nice to see that being challenged

Ian

Yes. Almost.

Ash

Yes. But, yeah, I'll say it. The last 40 years of capitalism have been about creating monopolies while simultaneously saying we believe in competition. If there is a challenger to that, then I'm all for it.

Ian

I enjoyed Sam Altman's thing about, it's really great to have a new challenger in the market. Yeah.

Ash

Yeah. It's like No.

Ian

It isn't. Un grit your teeth.

Ash

You look dreadfully uncomfortable.

Ian

Un grit your teeth.

Ash

I think, like, referencing the availability of chips, microchips. Not chips in beef dripping. And having

Ian

Although I do prefer those chips.

Ash

I can't eat them any more because I'm vegan, but, you know

Ian

My stomach just rumbled loudly. Just at the very thought.

Ash

But I do think that having a constraint on development of a new model seems to have done some interesting things. Whereas in the US, if you're developing a new model, you've just got tons and tons of processing power to play with, and you might get a little bit sloppy with that perhaps.

Ian

Well, they've clearly put a lot of focus into Yeah. Into getting it good. And the 605-billion-parameter version, which is the biggest one Yeah. Is definitely within spitting distance, if not beyond it in some areas

Ash

Yeah.

Ian

Of what we all thought was the state of the art. But, I mean, OpenAI have got an o3 now. They just announced it. They haven't put it out there or anything.

Ash

Not built it yet?

Ian

Well, I think they've they've done something.

Ash

It's like, oh, we better announce something.

Ian

But they they're talking it no. No. This was a while back then.

Ash

Oh, right.

Ian

Before all this happened. But they reckon it can answer PhD-level questions. Yeah. That would be a significant leap over even o1 Mhmm. Which is apparently degree level. I don't know. It's

Ash

Yeah. Yeah.

Ian

Yeah. I mean, looking at these advances is not making me feel excited for the future of employment. I just think, obviously, we're getting closer and closer to firing millions and millions of people.

Ash

Yeah. I don't I don't know if I'm in, like, some kind of, like, naive sort of dream world and thinking that that's never gonna happen. But

Ian

It's the bobble on your hat, Ash. If you didn't have a bobble, you'd just look much less naive-dream-world.

Ash

Yeah. Right. Okay. So, yeah, I just don't think that. I I don't know why, but I just don't you know, I just can't imagine that future.

Ian

Okay. Well, imagine you work for a very large corporation that employs hundreds of thousands of people.

Ash

Oh, god.

Ian

And now

Ash

I don't.

Ian

Imagine that you could fire them and get 80% of their performance out of AIs. And you've got the spreadsheet that shows you how much more profitable you'll be with how many fewer people if you were to do that.

Ash

Even if it doesn't work?

Ian

Well, the thing is

Ash

that I suppose sorry. Especially if it doesn't work.

Ian

How well does it have to work Yeah. Is their question. Yeah. Because, actually, if you save that much money, it's only customers in the end, isn't it?

Ash

Yeah. Where else are they gonna go after we've built our monopoly?

Ian

Monopoly. Yeah. So yeah. I'm gonna get Mayfair and Park Lane Yeah. Exactly. To hell with the rest of you.

Ash

Yeah. Yeah.

Ian

And I'll reserve my evilest laugh for when you land on the street.

Ash

Maybe I'm clinging to the, like, we'll get rid of all the developers, and it'll just be testers that we need.

Ian

Yes. That will, that will be hilarious.

Ash

Yeah. Yeah. I'm not saying I would enjoy it, but, you know, the irony of it would be quite exquisite.

Ian

Yes. It would. You'd be like, development is dead. Yeah. But someone would have to write a blog post about how it wasn't, because you wouldn't be there for that.

Ash

Yeah. Oh, I could

Ian

write it. The guy that writes the blog post about why testing isn't dead.

Ash

Yeah. I could write it, and they say, well, actually, yeah, it is dead.

Ian

Yeah. Yeah.

Ash

I killed it. Development is dead. Yeah.

Ian

You heard it here first. Yeah. So

Ash

moving away from, like, the sort of bleak visions of the future where well, actually, it wouldn't be so bleak if we'd actually done the thing where we could all have really great lives while these machines do work for us. We've actually done it the other way around, haven't we? Where we'd all have really terrible lives while these machines do the work that we're capable of.

Ian

While the machines do work for the billionaires.

Ash

Yeah. Exactly. So we've actually, in true human style, we've turned it into the worst of all possible worlds apart from for, like, 6 people. You know, but

Ian

I'm sure it'd be closer to a couple of thousand people. Okay.

Ash

Well, you say that.

Ian

So I mean, it's still an invisible percentage of the world's population.

Ash

Yeah. Vanishingly small. You say it's open weighted.

Ian

Yes.

Ash

Yeah. Because in a lot of ways, obviously, open source software changed a great deal about the world.

Ian

It did.

Ash

Certainly, in terms of democratizing, like, who can build what with what.

Ian

Yes.

Ash

So will more open source open weighted models have a similar democratising effect? Or is it not quite as open as all that?

Ian

Well, you're getting into a whole world of confusion there. Yeah. Yeah. Because open weights means that you can download the model and run it.

Ash

Mhmm.

Ian

But it doesn't mean that you, for example, have access to the data Yeah. Or the algorithms that created it.

Ash

Yeah. It is in no way as transparent as No. Open source software.

Ian

But you can download React from a GitHub repository and find out how any of it works Yeah. If you've got that kind of patience. Yeah. Yeah. Yeah. Which thankfully people do. But, yeah, open weights, it's much less.

Ash

Yeah. Yeah. Okay.

Ian

Cool. I mean, it's great because you can use an AI that nobody can spy on because you run it on your computer.

Ash

Yeah. Yeah. Yeah. Absolutely. But it doesn't have the same level of transparency.

Ian

And NVIDIA have announced an appliance for a couple of thousand dollars they haven't released it yet, but you'll be able to buy it at a kind of high-end PC cost Yeah. And run a 600 Right. Billion-parameter model on it and be able to get fast answers. Okay. So you can buy that, put it in your house, have your own o1. Yeah. Nothing it says will ever leave your home network. Yeah. I'd quite like one of those.

Ash

Well, it's your birthday next December.

Ian

Yes. Yes. It is.

Ash

I'm trying to get that one past parlor.

Ian

No. No. Legitimate business expense. I'll put it up there next to the Internet.

Ash

Yeah. Bring the 2 together.

Ian

Yes. I don't. Yes. Yes. Don't cross the streams. So that was a bit rambly and a bit two things in the guise of one. Yeah. But these reasoning models are very interesting. Yeah. And it is, first of all, interesting how they work and the safety considerations of them from the o1 model card.

Yeah. And then secondly, this sort of seismic event from China, which because it happened at the same time as the newly inaugurated President Trump was passing executive orders renaming geographical features to other things, which Google have said they're going to, Of course they have. Promulgate. But

Ash

I believe that is commensurate with the level of toadying done by the tech companies.

Ian

But because of the blizzard of executive orders doing all kinds of unpalatable things, no one noticed. And it took a week, and then suddenly the stock market was like, boom, and everybody thought and it was just sitting there for a week. If you'd been paying attention through the blizzard of, I'm not sure. Are we allowed to say the word that's in my mind for what it's a blizzard of coming from the White House? Anyway, through that blizzard, if you'd been able to see it, you could have probably made a killing by shorting NVIDIA.

Ash

Well, yeah, that's true, actually. Yeah. Though everyone would have thought you were insane.

Ian

But Hindsight is 20/20, apparently. Yeah. You can bet your life somebody did. Yeah. But, yeah. So with these models, there's obviously gonna be more of that kind of technology, and I do feel that it's a good thing for it to be open weights. Yeah. Even if it would be nicer if it were more open, because it means that I can run this thing on my laptop, which is my hardware. No one could take that away.

Ash

Yeah. Yeah.

Ian

Yeah. So in that sense, I think it is a good thing. Yeah. Although you kind of wonder what bad people will do with it.

Ash

Well, yeah. But, again, we've discussed in the past about open source software. Yes. As in people go and add pull requests to add backdoors into open source libraries. And it's just like, well but does that mean that I don't

Ian

know right then.

Ash

Yeah. But does that mean that the whole, you know, the whole endeavour Yeah. Was bad? It's like, no. It's just been used badly.

Ian

But in a relatively small way. Yeah. As you say, open source software has changed the world

Ash

Yeah.

Ian

In a good way. Yeah. So, yeah, I feel that's my thing.

Ash

That was a magnificent thing. Magnificent? Yep. Magnificent. Monumental? Monumental. Exceptional. Extraordinary.

Ian

Couldn't we have come up with another thing that begins with m? I feel like alliteration is a powerful amplification.

Ash

I went I went to I went to

Ian

Magnificent, monumental, and I've got munificent in my mind, but it's not munificent. Municipal. Munis... that word, I always remember there's a chap called Norman Hunter who wrote a load of books, children's books in the sixties and seventies, about a character called Professor Branestawm, spelled b r a n e s t a w m.

Ash

Okay.

Ian

And he lived somewhere which had a municipal gas works and a municipal swimming baths. That was a bit of a

Ash

Sounds like the seventies.

Ian

Well, at least we got some children's something into it even if it's not TV. Yeah. Well, they were read on Jackanory sometimes Oh, right. In my day. You've never heard of any of those.

Ash

I have heard of Jackanory. Of course. I was alive with Jackanory.

Ian

No. Not Jackanory. The Professor Branestawm ones.

Ash

No. No. I haven't heard of that. But I know what Jackanory is.

Ian

Yes. Yes. They'd still invented that Mhmm. When you were coming up in the

Ash

nineties, eighties. I was born in 1978.

Ian

Seventies? Blimey. Yeah. I just think of you as very young, Ash.

Ash

Oh, you know. I look great. Yeah. Yeah. It's the beard. It's just it's the beard.

Ian

I think it might be the very long runs. Yes. Whereas I look old and fat possibly because of the complete absence of the same.

Ash

I'll take you out on a run.

Ian

And then I'll be dead. And how will that help?

Ash

Sure.

Ian

A shop. Yeah. Yes. Will you, it's like, get some sweets or something and dangle them? Yeah. You'll get one of those poles that goes over the top of my head and then dangles down something in front of me that

Ash

I need to

Ian

run to.

Ash

We'll find what motivates

Ian

you. Yeah. Pretty simple, really.

Ash

So two monumental, magnificent things, very moreish things. Sorry. That's quite poor.

Ian

That downgraded it quite a lot, but then now I felt.

Ash

Yeah. Sorry.

Ian

No. I've got nothing.

Ash

No? Okay. Well, maybe we should just say,

Ian

I'll be content with Monumental. Monumental. I feel like that's you know, if we went to Monument Valley, there would be our things.

Ash

There would be many monuments to our things in there.

Ian

Yes. We could go round and allocate them between the monuments.

Ash

To allocate them to?

Ian

Well, in Monument Valley, there's lots of rock formations that look like monuments. So we could just go around and allocate. People go and

Ash

claim them.

Ian

We could allocate our things to the monuments Yeah. And we would know. Oh, I like that one. That's the o1 safety card matched up with the R1 DeepSeek.

Ash

I kept calling it deep geek for some reason, and Gwen was like, stop that.

Ian

Yeah. Yes. I'm with Gwen.

Ash

It's like but you know when you've read something and then it becomes that in your head? Yes. So, yeah, I did that. I'm sorry.

Ian

We all forgive you.

Ash

Thank you. For now. For now. Okay. Right. Should we say Email us. Email us. What's what's

Ian

our email address, Ash?

Ash

Ian and Ash at what a lot of things dot com.

Ian

I congratulate you on the way you said that. That was

Ash

Very correct.

Ian

Truly monumental performance. Yep. It's the third monument on the left Yep. As you go into the valley.

Ash

And we have a LinkedIn group.

Ian

Which you should join.

Ash

Which you should join to keep up with the latest on what a lot of things.

Ian

We've also got on our Instagram an auto video maker. Did you know about this?

Ash

No.

Ian

I know you're never off Instagram, so I'm surprised you haven't noticed it. And and and Mastodon. So on, on Instagram, this thing called Headliner makes a little video of the first bit of each episode that we post and puts it on there.

Ash

So it needs to be good, a good strong start.

Ian

Well, I keep meaning to go on to Headliner and change it to be, for that last episode, the bit where we ask you about going to meetings and having deadlines.

Ash

Right. Okay.

Ian

I felt that was a good good, strong, good, strong bit of content. Yes. I mean, not new.

Ash

No. No.

Ian

But good and strong.

Ash

No. No. It's still repetitive, but good and strong and repetitive.

Ian

Yes. Yes. The kind of thing, it's comforting. Yeah. You you think you know someone and you're right.

Ash

Yeah. Exactly what

Ian

you're saying. Better. What could be better?

Ash

You just look attentive Yes. And off they go.

Ian

I look yeah. Yes. It's like a it's like a firework. You know, you light the blue touch paper, retire to a safe distance, and then look attentive. That's that's that's how fireworks work.

Ash

So you enjoy fireworks?

Ian

Yes. Yes. Safely. I wouldn't want you to go off in an unsafe way.

Ash

No. No. That's true.

Ian

Okay.

Ash

Okay.

Ian

I say we've we've made it. It's only an hour and 27 minutes. Alright.

Ash

That's not bad.

Ian

That's not an hour and a half yet, is it?

Ash

No. We should stop now.

Transcript source: Provided by creator in RSS feed.