Hi everyone, my name is Patrick Akio, and if you're interested in peer-to-peer software, this episode is for you. And I don't mean Skype, WhatsApp or Zoom. I mean applications that run on my phone, my device, straight to yours, without anything in the middle. Joining me today is David Mark Clements, author of Pear Runtime over at Holepunch, the peer-to-peer company. I love his energy, I love his stories, and he's done some fascinating stuff. So enjoy.
When I looked into peer-to-peer technologies, I did some research and looked for really good examples of this technology, because I wanted to understand the pros and cons of it. And the examples that I got were Skype, Zoom and WhatsApp. And I'm like, OK, that is peer-to-peer, but there's always
a middle man involved. Yeah, see, that's the whole thing where Facebook and so on went from private messages to direct messages, or the other way around, I don't remember. But either way, this idea that the message is direct is a lie. It's not direct. Exactly. At the very least it's sophistry: you're twisting words to indicate something to the user.
So many users think that Keet, which is our messaging app built on Pear, already works like that. We're just trying to make the app work the way we think apps should work, because we agree that's how they should work. And the reason users think apps already work like that is because the companies with all of the money have made sure the average user does think that, because that's actually how it should be.
But then what is the difference? Because I understand, with my technology background, that if I send a message on my phone and then log in on my laptop and the messages appear, then obviously there's something in the middle that relays the message back to me. Yes, right. But that's not the peer-to-peer technology that you're operating on. No. So with Keet, we have this thing called identity. Yeah. And it's quite similar to a Bitcoin wallet address.
You have 24 words that you record and store, and then we create sub-identities from that identity. Well, we haven't gotten into the tech yet, so I think later we should talk about peer-to-peer authoring with identity. But that's a rabbit trail. So, in order to discover other peers, there's something called the distributed hash table. All peers can be nodes in the DHT, and they just help you discover other peers by a given key. What do you mean by DHT? A DHT is a distributed hash table. It's a way of allowing peers to discover each other through a peer-to-peer network. So you can imagine it as the central point, if you like, but you're actually inside that central point, so it's not a central point. That makes sense, because you'll participate; you're one of the
nodes. Maybe, maybe not, right. So with the identity, you create that key for yourself. And then when you open the Keet desktop app, and you've already set up your identity on the Keet mobile app, for instance, or let's say the other way around, you set it up on desktop, you can then scan the QR code
from desktop to mobile. That just transmits your identity, which again is just a cryptographic key. And then that is used to allow synchronization of state between your devices, peer-to-peer, and the devices each discover each other through the DHT. Yeah. So there is no central server that's saying, oh, he sent this message, I'm storing this message, and when they log in to the central service, they'll be able to see the message stored on our
servers. Yeah, that's not what we're doing. It's only ever device to device. Only on the device itself, yeah. What would be the main benefit then? Because I never thought of this. I downloaded Keet because I wanted to talk to you. And then I downloaded it on a laptop where I probably shouldn't have, so I wanted to get rid of it. But I needed the messages and I needed our chat room, basically. So I transferred it to my other laptop.
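The DHT-based discovery described above can be sketched as a toy. This is an illustration only, not the real HyperDHT API: in reality the table is spread across many nodes rather than held in one Map, and the key and addresses below are made up.

```javascript
// Toy illustration of DHT-style peer discovery (not the real HyperDHT API).
// A real DHT spreads this table across many nodes; here one Map stands in
// for the whole network so the announce/lookup flow is visible.
class ToyDHT {
  constructor() {
    this.table = new Map(); // topic key (hex) -> Set of peer addresses
  }

  // A peer announces that it can be reached for a given key.
  announce(key, peerAddress) {
    if (!this.table.has(key)) this.table.set(key, new Set());
    this.table.get(key).add(peerAddress);
  }

  // Another peer looks the key up and gets back addresses to connect to.
  lookup(key) {
    return [...(this.table.get(key) ?? [])];
  }
}

const dht = new ToyDHT();
dht.announce('a1b2c3', 'desktop.local:49737'); // desktop device announces
dht.announce('a1b2c3', 'phone.local:49738');   // phone announces the same key
const peers = dht.lookup('a1b2c3');            // either device discovers the other
console.log(peers);
```

The point is only the shape of the flow: announce under a key, look up by the same key, then connect directly to whatever addresses come back.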
I did exactly what you said. I tried logging in, I set up my identity, I filled in the identity on the other laptop. I only had to do a confirmation on my initial account and that was it. Then it was connected. That's right. And that's separate from a server where I would just have an account and log in, and that would be that. I actually had to pair these devices, right? Yeah, but why would I want that
in the first place? Well, it goes back to why you would want local state on your device in the first place. You don't have to set up identity on Keet. You could just have a separate desktop and a separate mobile app, and then the application or the device itself is the identity, right? You join them up because you want your state to sync between devices. So it's really about the local state thing in the first place.
So the way that Pear applications work, and Keet is a Pear application, is they work on a technology called Hypercore. And Hypercore is an append-only log. Everything that you do in the application, the actions you take, are appended to this log. So when you send a message, it's appended to the log. If you delete, the delete is appended to the log.
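The append-only idea can be sketched as a toy log. This is not Hypercore's real API, just the core property: nothing is overwritten, so even a delete is a new entry and the history stays auditable.

```javascript
// Toy append-only log in the spirit of Hypercore (not Hypercore's real API).
// Nothing is ever overwritten: sends and deletes are both appended entries.
class ToyLog {
  constructor() {
    this.entries = [];
  }
  append(entry) {
    this.entries.push({ seq: this.entries.length, ...entry });
    return this.entries.length; // the log only ever grows
  }
  get length() {
    return this.entries.length;
  }
}

const log = new ToyLog();
log.append({ type: 'message', text: 'hello' });
log.append({ type: 'message', text: 'typo hre' });
log.append({ type: 'delete', target: 1 }); // a delete is itself an entry

console.log(log.length); // 3: the deleted message is hidden by the app, not erased
```

An app reading this log would skip entry 1 when rendering, but the record that it existed, and was deleted, remains on the device.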
So deletes are known, right? And that state is private to you; no one else can see it. So you have an auditable log of everything you do, only on your devices. That means that pre-Internet, the situation was that if you were accused of a crime and you were innocent, you could try to produce evidence to prove your innocence, but you couldn't really be mass surveilled for
guilt. We now live in a different era, where mass pattern matching and all of these different things are happening, to then basically be used as evidence against people. And that's fine if we're putting people in jail for things that we all agree are bad, but that level of power for anyone to wield is
really problematic if it goes in a direction that's like, well, what you're saying is a crime isn't really a crime, but the state says it is, and they can mass surveil for it. So everyone, you know, you're going to jail because of whatever it is. Maybe it's being Muslim, maybe it's being super right wing, just privately, or super left wing, whatever it is. The point is that absolute
power is a dangerous thing for a human to have. So the append-only log only living on your device brings us back to that time where, if you're accused of a crime, you can prove your innocence better than ever before, because you have a full log and you can just go, OK, well, let's look at my history, no problem. But you can't be mass surveilled under this approach. Beyond that, there's a bunch of other benefits if
you're transferring files. So you yourself know, and your producer knows, that large gigabytes of data are a hassle to transfer between devices. Oh yeah, yeah. You have to get some kind of iCloud account, and then you have to pay them a monthly fee, and then you might hit limits. Yeah, they complain every single time we upload new files that we've basically overspent our capacity. Well, if it's peer-to-peer, it's free.
So you can send a terabyte video file if you want. You just drag and drop it into Keet desktop, and now that's synced between your devices. You do have to leave your desktop on so you can download from it. But if you want to put a video on your desktop Keet and then view it in your mobile Keet or iPad Keet or whatever, then fine,
no problem. And it's instant, it's unlimited, there's no monthly fee, there's no complaining about how much data you've used, because a lot of the time you're just doing a local network transfer. And the fact that it's 2024, and since we've had this technology this is where we've ended up, is absurd. It's patently absurd. It's a monetization strategy, but it's not technologically necessary in any way that it works like that.
I mean, it looks like I need it because I don't know of an alternative. But if I can sync between devices, then obviously I don't need the middle man, because usually I am transferring files from, for example, the studio in the back room there to my laptop, or to wherever the editing happens. Yeah, and you can also send to friends and your producer or whatever.
So if your producer has Keet and he needs to send you a terabyte video file, he says it's coming and you just get it from him straight away. You just download it whenever you need to. So that's huge. And a lot of people don't really understand that because, again, we've been primed to accept the status quo as is by the profit incentives that have been at work since the 90s, at least the 90s.
It's like when I was a kid growing up in the 90s. I was a little bit in tune, you know; I got on the Internet when I was 12, and IE 5.5 had just come out, right. And a lot of brick-and-mortar people were saying, well, you can't monetize it. You can't monetize the Internet. It's not monetizable.
And then you had the dot-com boom and bust and all of that kind of stuff. But part of that monetization strategy, of course, was to create a dependency on a centralized approach, because it wasn't conceivable at that time how you could monetize the online world without some form of control in the middle. How do you do it without middle-manning? And that's still a question to a certain extent today.
But I think that we have enough understanding, enough mind share among the human population, because you need that to start off with, right, from what's already happened. I don't think it's been bad. No, it's just that it's going towards an extreme now that could get very bad, right. And so we need to, in a sense, turn that around. Yeah, I think so, yeah. Because I've also thought of this centralization pattern, right?
And how it originated. I understand where we are now, and I don't think it's feasible, also looking at the planet. I'm a product manager at ING, and I'm actually in the ESG space. And we analyzed absolute emissions and intensity emissions, which are similar but specific to the assets that a company holds, for example. And we saw Microsoft's chart with regards to absolute
emissions. Nowadays, because of the EU stepping in and regulations changing, by I think 2040 or 2050 we need to be at net zero with regards to companies that operate in the EU. So legislation is making sure that people think about their spend and start giving back to the planet, and not just anywhere on the planet, but actually where they do their own business. And with the AI boom that we had somewhere late last year, compute went up ridiculously.
Microsoft was on track to hit their targets for 2040, for example. And then, because of that new technology, compute went up, which means their absolute emissions, as the central party that provides that compute, went up as well. And I think what's interesting nowadays, with what you're working on, is that we have more compute power locally, because machines and hardware have increasingly more capabilities. Capacities. It's incredible. So we don't need this centralization anymore.
But I'm also interested in this: with centralization also comes a kind of ease of use in some cases, right? Yeah, well, it's always a trade-off at the end of the day. But what we're doing is making the ease of use essentially transparent for the user, as much as we can. So if you use Keet, it feels like a messaging app. Yeah. The only real tell is when you send a message, and even then text is fine, because we do this thing called blind mirroring.
So if you send a message from one device to another and both devices are online, that message is just going to go straight from device to device. If you're not online, how does it get to you without a central server? So we have peers that hold an encrypted packet for you, an encrypted blob, and when you're back online, they relay that blob to you. And for text, that's always fine.
But once you go into images, there's a limit to what a peer is going to want to blind-mirror. And so the tell within Keet is that sometimes it will say, oh, this image is downloading, and Keet works peer-to-peer, or something like that. So that's an example of
the sort of trade-off, right? But we also have to think about how the system can grow over time, because instead of paying however much you pay for cloud, and having all of those issues we discussed, if you want to send images, or even say terabyte video files, to another peer while that peer is offline, then you could participate in a marketplace where peers who are willing to lend their capacity will do so for a few sats or whatever it is.
And so you can commoditize the resource. You're not looking at this monthly subscription model where, yeah, costs and so on are factored in, but what's the margin on that? You have no idea, right? If you commoditize the resources, it's just a market for commodities, right? Interesting. We have this concept in the energy market. I know this because I'm
in the ESG space. The electricity a company uses as a byproduct of whatever its supply chain is, is most likely going to fall into its scope 2 emissions, and it can offset that. Scope 2 is usually broken down into market-based and location-based, and location-based is on-site, wherever the factories are. For example, the electricity usage would be there, and that would be the absolute emissions.
If they then buy energy certificates from people that have solar panels or big windmills, they can say they have green energy, and that would go into their market-based absolute emissions. So what you see is: on site you have your location-based emissions, your factory uses electricity, and if I can buy enough energy certificates that say my energy is green, I have basically greenwashed all my electricity usage.
And a company can claim that they use green electricity, right? But they only do that by buying the energy certificates. So they're paying for the usage that someone else has generated green. Yeah. And me, if I have solar panels, I can actually contribute to that grid. I can say I have these energy certificates and sell them to Google, whomever. As long as those energy certificates are valid, it's kind of not the worst thing ever.
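The location-based versus market-based accounting described here boils down to simple arithmetic. The numbers below are purely illustrative (the emission factor and volumes are made up), and the model is deliberately simplified.

```javascript
// Simplified sketch of scope-2 accounting: location-based counts the grid
// electricity a site actually used; market-based subtracts the volume
// covered by purchased green certificates. Illustrative numbers only.
const siteElectricityMWh = 10_000;  // what the factory drew from the grid
const gridEmissionFactor = 0.4;     // assumed grid average, tCO2e per MWh
const certificatesMWh = 10_000;     // green certificates purchased

const locationBased = siteElectricityMWh * gridEmissionFactor;
const marketBased =
  Math.max(0, siteElectricityMWh - certificatesMWh) * gridEmissionFactor;

console.log(locationBased); // 4000 tCO2e: the physical picture
console.log(marketBased);   // 0 tCO2e: fully "greened" on paper
```

The gap between the two figures is exactly the greenwashing concern raised above: the physical grid draw is unchanged, only the paper accounting differs.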
No, no, it's better than nothing, but that's the existing system that we have. Yeah. I've never thought of it being used with compute, because that looks like a missed opportunity, or an opportunity we now have with hardware. Well, we have it with hardware, and we have it with a peer-to-peer system, because if you try to do this via a centralized entity, there's no incentive.
Yeah. And it's just going to be very boxed in. You might get some users if a really large player, let's say Microsoft, tried to do it, but I think it would be a very self-contained thing. And what I'd rather do with the P2P approach is create markets and participate in them. That in itself is a monetization strategy.
But the AI thing you brought up plays in here as well, because with centralization, as you noted, investment in AI with a goal to increase centralization has gone bananas in the last few years, and it's sucking resources from everything else. And I don't know, maybe AI hits a limit. It seems to have slightly plateaued, maybe. But either way, I think there's a much better way to do AI.
AI has been thought about for decades, right? What are the logical extremes of AI? What scenarios can we end up in? And the hyper-centralized scenario is one of the bad ones, because AI gaining sentience is probably unlikely; the bigger threat is a human controlling an AI, because an AI can do mass-scale kinds of things at the
human's bidding. And again, it's very Lord of the Rings, but that's an issue of having way too much power, you know. So within a peer-to-peer system, as you mentioned, we have devices that are capable of running local LLMs, right. So you could have a peer-to-peer based LLM that works on your devices with your identity, syncs between devices, knows everything about you, but on your device.
So it's the same as just having data on your device, whereas with an AI that knows everything about you that someone else owns, well, you're vulnerable if that's the case. Yeah, and that is the case; we're all vulnerable. So being able to have an AI that's limited by the capacity of your device, that's good.
But it's not going to compete with what they're putting into very powerful AIs, because it comes down to how much compute power they're going to grant it, right? But if we have this marketplace of peers and the commoditization of resources, then AI compute is just another resource. And from there, users can maybe hook their LLM up with another peer to give them additional compute power
for that. The technological possibilities of this, whether it's actually possible or not, are debatable, right? This is a research thing for Holepunch; it's not something that's going to be around for a while. But in terms of envisioning how that can go in a way that's not hyper-centralized, I think we need to do something like this. Once you have that, then AI becomes our tool, not a
hyper-centralized, elitist sort of control tool, you know. Interesting. I was thinking: we talked about Keet for a long time, and I can see it from a messaging perspective, as well as the audit of my data being on my devices, or, as you mentioned, with that mirroring mechanism, being relayed back to me whenever I need it on a new device. And say my devices all just broke: an EMP hit my house and I don't
have any devices anymore. Worst case scenario, I still have my data, just by virtue of having it relayed through other devices. And then I was like, OK, but what about compute, right, on this peer-to-peer network? I can see it for video messaging, I can see it for file sharing. File sharing is a problem I have, just because I do podcasting and I have huge gigabytes' worth of files, and I would like to keep them, ideally
somewhere. Right now it's on a drive, and I don't mind, but I don't know if, given the choice, I would choose differently, basically because I don't have an option currently. Or maybe I do and I don't know about it. You do have an option if you use Keet. Exactly, which is something new to me. But then you talked about decentralized compute, and I was like, first of all, this sounds very complex, but it still sounds possible, right?
Because in essence, that's what the cloud also does with horizontal scaling, with auto scaling, in the first place. Sure, they might have bigger machines on a vertical level here and there, and you pay a big premium for that, but horizontal scaling is basically what they do in essence most of the time. Yeah, yeah.
So with the centralized model, what you're doing is designing and building a system, and that system is a self-contained thing, right. What we're doing with Pear is, Pear is essentially, I like to call it a software fractal. It's a self-repeating piece of infrastructure that builds the system from itself through participation.
And if you think of that as primitives all the way up to communication markets and different things like that, then the same principles can be applied to storage capacity, AI compute, all the way through. You're just turning the system inside out and looking at it from the inside. And if you do that, we are the system. We all participate in the system, and the system is essentially working equally for all of us.
Do we need to get to a tipping point in adoption before that can happen? Oh, 100%. That's what I'm thinking. Yeah, definitely. And that's why I said earlier that you have to think about how this can play out over time, because those markets don't exist yet, but the potential for them does, and it's been unlocked by the Pear technology. Yeah, it's a bit of a journey, you know, and there are multiple avenues that we're
exploring around all of this. At the minute, where we're at is we are getting a lot of adoption of Keet. It's very difficult to measure that in a peer-to-peer system. We of course have mobile downloads, as they go through the app stores, but the DHT is a global DHT that all peers use, not just Keet. Yeah. So any Pear application can use the same global DHT.
But we roughly know what apps and so on are using the DHT, and the presence of Keet is so large in terms of adoption that we can see that spike in DHT usage. So we know that Keet's growing. You won't see a lot of reviews on the App Store and all that kind of stuff, but if you think about the type of early adopters of private, secret messaging technology, you're not going to get a lot of reviews.
Those are not the people that leave reviews; those are the people that read reviews. Interesting. So if it was not a global DHT, then you would be able to see your usage, right? Because then it would be specific to the network and the participants there. Yes. Interesting.
So you can launch your own Pear app using Pear, using the same DHT, but unless you can grow bigger than Keet, you're not going to be very observable at all. No, it's the biggest players that are going to be observable, and that's pretty much Keet at the minute. Interesting. I wonder if you'll ever get more insights into usage then, because that matters from an adoption
standpoint. And especially when we're talking about that tipping point, where more becomes possible and these decentralized opportunities arise by virtue of participation, you want those insights? I would say theoretically yes, but
philosophically, not so much. Because privacy is an important concern for us, and we have to be very careful: we can't make trade-offs in the technology that allow for the kind of monitoring that you might want here, right? We can't do that and we won't do that. So we're actually happier to not know and just let things play
out. Interesting. And we're sort of back in an early-Internet kind of development model. We're not doing A/B testing, we're not doing all that kind of stuff. But it's a communication app, right? And so we have plenty of different Keet rooms, which, by the way, can scale, not infinitely, but to planet population
size. Theoretically there might be UI bugs that would stop it from getting all the way there, but in terms of the algorithm underlying it, we can scale to the planet. So we have a Bitcoin room that had about 1,700 people in it last time I checked; might be more now. Yeah, not bad. So we have rooms where people are giving us feedback constantly about the application, and they're using it to give us that feedback.
So we have the advantage of making a chat app, and people being excited about it, using it, and giving us feedback. So that works very well. It's more qualitative, and certainly more of an art than a science, though. Yeah, I like the no compromise on the technology, because when I said it, I realized my view is anchored by the centralized systems that we use, right? Absolutely. And then I was like, OK, yeah, you can measure your usage,
and the only reason you can measure it is because everyone goes through this kind of funnel in traffic, and then you know usage. But if you don't have that, then how do you measure it? You could compromise the technology to be able to measure it, but that would be compromising the technology. Yeah, I mean, there are maybe other things where we can infer things very anonymously without compromising the technology,
right. But it's definitely an important principle to keep: we don't compromise it. But what you said about being anchored in the centralized way of thinking, 100%, pretty much everyone is, myself included; it's the systems we use. Of course. I myself also still have these moments where I go, no, wait, I don't need to do
that anymore because XYZ, right? And what you realize, as you start to go down this peer-to-peer path, is that there's an unbelievable amount of complexity in the current status quo that we all accept as reality. When you go peer-to-peer, things get simpler. Do they? Yeah. That blows my mind. Yeah, because I was thinking, OK, how do you do this on a software level specifically?
OK. So sometimes on a software level things don't get simpler, but once you get to a user place, or a way of reasoning about things, you don't have to go, oh, it also goes through here, and then that goes with that; it's just A to B, you know, directly. Yeah, yeah, yeah. But from a software level, you
were saying? Yeah, that's where my mind was, because I know you're in the driver's seat and you're creating part of the protocol, as well as using it for this application. Not so much the protocol. My thing is the Pear runtime, which is sort of a platform for both running and developing peer-to-peer applications. So a lot of it is the DevX side of things. It's pulling all of the technologies together to make a coherent user journey
for doing this. And would I be able to do that as well? Yeah, it's really simple, to be honest. OK. It's a command-line-based thing. You say pear run and then a link, and the link can either be a file link, like your file URL, or
it can be a pear URL. If it's a pear URL, then the first part of that is the key for the app, which is discovered through the DHT, OK. And if it's a file, it runs from disk. To get an app to be discoverable through the DHT, you have your project folder, you just run pear stage, and you give it a channel name. That generates a key for your app and synchronizes the disk contents into a Hypercore, which I
mentioned earlier. Then you do pear seed, you use the same channel name, and it just seeds from your machine. Then you can go to another machine, or someone else can run it, and they can do pear run with the pear link that the stage command and the seed command gave you, the same link. That's it. OK, that's it. That's the whole thing. There's other stuff beside that. There's pear release, for managing release markers and
things like that. So it's an append-only log, right? Every file that you synchronize to it increases the length of the log. So at some point you say, OK, this length, that's the official release. But I can keep on staging, and then I can look at preview releases. So I can do pear run with a checkout of staged to see the preview release of a product. But when I just run it with pear run, then I'm just looking at
the released app. Beyond that, what we do with Keet is we have a downloadable DMG for Mac, an EXE, and so on. I used it. Yeah. And all that is, is a genesis Pear build, and that genesis Pear build runs pear run. That's exactly it, yeah. That's it. Wow. So we haven't got this yet, but we've got a pear build command coming. There's some complexity on Windows still to figure out, because Windows is Windows. Oh yeah.
Anything that takes us a day on Linux and Mac takes two weeks on Windows. It's a little bit weird, you know? So you push that back. Yeah, yeah, yeah. But we have a pear build command coming, so that once you've staged your app and whatever, you can just say, OK, now I want to build it.
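Putting the stage/seed/run loop just described together, the command-line flow looks roughly like this. The folder and channel name are examples, and the exact flags may differ; check the Pear documentation for specifics.

```shell
# On the machine that authors the app (folder and channel name are examples):
cd my-pear-app
pear stage example    # syncs the folder into an append-only log, prints a pear:// link
pear seed example     # keeps serving the app from this machine

# On any other machine:
pear run pear://<key-printed-by-stage>
```

As described above, each stage grows the app's append-only log, and a release marker pins which length of that log counts as the official release.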
And all it does is create these same wrapper files that just do pear run of your app key, and then users can download it, and it just feels like a normal app to them, the same way Keet does. Very interesting. I'm definitely going to play around with it, I think, because I love what you're doing currently. I love when people are working on newer technologies that might actually be truly disruptive,
right? Because people usually label technologies as disruptive, but I think that's always debatable. But something like this, which would change the way that we operate, as well as even the incentives and the structures, the hugest pillars that we have, and kind of decentralize them, I think is fascinating. Yeah, because within all of that that I just said, there is no infrastructure other than the machine that runs it and the machine that seeds it.
That's it, that's it. That's a peer-to-peer situation. I did quite a bit of consultancy, and you had to have a bit of a margin at the beginning of a lot of consulting gigs, because the first few days you can't do anything. You have to figure out: oh, there's a VPN. Oh, OK. Oh, I need to set up my etc hosts file so that it redirects this IP to that. Oh, OK. Who gives me
access? Oh, they won't give me access because I'm not on the employee register. Oh, OK. So how do I get around that? And so there's so much friction in the way, even just for onboarding new devs into centralized companies. The wastage, the cost in compute and all of that, it's really wild. I think in some countries, the USA and so forth, it's written off: you've got CapEx and OpEx, right,
and the tax incentives on OpEx are different from those on CapEx, right? So to a certain extent there's an incentive even within that.
And I don't fully understand this, I might be wrong on this, but my understanding is that the systems themselves sort of incentivize this approach of just spending tons of money on infrastructure and staffing infrastructure. Which is interesting, because it's like a centralized government trying to incentivize centralized companies. But I don't think that was an explicit attempt; it's just an inherent bias in the system.
Exactly how did you end up kind of collaborating with hole punch and doing what you do now? Because I, I think it's fascinating, this vision and it sounds like you're really driven by it. Like I talked to you now. I talked to you in previous conversations. I love the energy and the passion that comes from it, but I'm really curious how you ended up here because I I've seen this kind of on the sidelines, but you dove had in had first, it feels like. Yeah, I mean, it's kind of a
long story. Good thing we're doing a podcast, then. So I started programming... I first saw a line of code when I was five years old. This guy who was into computers came round and plugged a Commodore in, and he wrote 10 PRINT DAVID, 20 GOTO 10. And I was like, mind blown. I was like, whoa, that's my name, and it's doing it over and over again, and all you need to do is type that. And I was thinking about that for a long time.
And when I was nine, I got my hands on an Atari at a boot fair. That's like a garage sale, flea market, all the same sort of thing, people call them different things, but I grew up in the southeast of the UK and they call them boot fairs there. And I got my hands on a BASIC programming book. So I sort of taught myself BASIC through that.
At nine. Yeah. Incredible. And then when I was 12, as I said, IE 5.5 had just come out, right. And I had a single mom, but she sort of recognized my penchant, you know, for computers. I think she had two jobs at the time. So she got me the computer and she got me a dial-up modem. And back then, dial-up was super expensive and stuff.
So back in the day there were these websites called JavaScript Goodies and HTML Goodies, and I used those websites to learn HTML, JavaScript and CSS. I'd dial up, load a page, and then cut the connection off to try and save money. I later found out that the dial-up connection is the most expensive part. You kept doing that over and over. That's hilarious.
And throughout my childhood, I mean, I destroyed my uncle's computer, I destroyed my grandma's computer, I destroyed my own computer many times. And I'd always... well, I would destroy something and then fix it. And that's sort of how I learned as well. But I definitely irritated the extended family with my shenanigans. On a software level, or, like, physically destroyed? Oh, I didn't take a baseball bat to it or anything like that.
I would just be like, oh, what's this? Can I have a go on your computer, please? And then they'd be like, oh yeah, sure, he's a bit of a whiz kid, you know? And then I'd get into, like, oh, what kind
of computer is this? My uncle's an engineer and he had some kind of, I don't know, I think it was maybe POSIX-based or something, but you could run Lemmings from the command line on it. And I was trying to do that, and then I just got distracted, you know, and I was like, what's this do? What's that do? And then the next thing's broken, you know? I love the driver of curiosity and how powerful that can be.
Absolutely. And I think ultimately that's the answer to how I ended up here. That's the ultimate route, you know, following curiosity, and also just saying, hey, do we have to be like this? Do we have to? I don't think we do. Like, you know, humans aren't bees. Bees have to be bees. They have to be a worker bee or whatever bee they are, but humans self-organize and take up roles as
they see fit. So, I don't know, I just think there's a really interesting connection between technology, communication and how we organize ourselves, you know? Yeah. So skip forward to about age 23 or so. OK. But I didn't realize that being able to do these sorts of things with computers was actually a valuable skill. OK. In my head I was just like, well, anyone can do that. It's like washing dishes, you know what I mean?
It was probably normalized because you did it so early on. Yeah, it was just normalized. So I was working at McDonald's, and then I worked at various different places. I worked in a bank for a while as a cashier, and at McDonald's. And after that I worked in a homeless hostel, and it was 12-hour night shifts and 12-hour day shifts. I'd do two 12-hour days and two 12-hour
nights. And on the nights I started to build a website for the hostel, which was all cool. And then I started getting people sort of paying me to do websites for them. And I'd do that a little bit on the nights, because you'd have this five-hour sort of period where everyone's asleep. Not always, because sometimes things went mad, because it's a homeless hostel, but
sometimes. And then I started thinking... I was doing PHP and stuff, and I started thinking, I need to find a way to make these applications more real-time, you know. I started looking at Ruby on Rails, and then I came across Node.js, and at this point it was about 0.8, and I was just like, whoa. And at the same time I came across Bitcoin, and Bitcoin at the time was like 2 pence a Bitcoin. Oh, really? Yeah.
And I looked at both of them and I was like, well, I don't really have much time, so I can only really go in one direction. And I'm trying to, you know, build websites and stuff. So I'll go look into the Node thing and I'll leave the Bitcoin thing. Life decision right there. Even at the time I was like, oh, if I could just put a tenner, £10, into it or something like that. Even then, £10 was a lot to me, for various other reasons, right?
So I went into looking into Node, and at the time there were no books on it. Well, there were, but one of them was just garbled nonsense and the other two were rehashes of the docs. So I contacted a publishing company, because I was like, well, I'm looking at the raw code here, looking at the raw code of Express and figuring it out.
So I wonder if I can get a publisher to just pay me to do that, and then it'll incentivize me to do it. So I did a proposal with Packt Publishing. And I think the timing was just right, and they were like, absolutely, yes, let's do it. So I ended up writing Node Cookbook, and then I did three more editions over the years. Awesome.
But with that, because I was living in Northern Ireland at the time, I did a bit of asking around on Google, because at the time it was like the Google forums, Google Groups or something. And someone said, there's actually a conference happening in Dublin. And it was the first NodeConf. OK.
And so I... I can't remember exactly how it all happened, but basically I went down, and they gave a shout-out for the book while I was there, you know, as a thing. Because you were there? Or just in general? Well, I sort of talked to them before I went down and they were like, come on down, we'll do a shout-out for the book and stuff, you know. Good stuff.
At the time I was working for this company called Pace, which does, like, the Sky set-top boxes, TV set-top boxes and stuff like that. And they had interfaces in JavaScript and HTML. Yeah, nice. So I was building sort of Netflix-style navigation stuff for them. But the people that organized NodeConf were a company called NearForm, and NearForm were very early on at that time.
But they were bullish on Node, and they were very shortly after bullish on microservices, very ahead of the wave there. So I ended up working with them, through knowing them from the conference, going down and speaking at meetups and different things like that. And they hired me as the head of training, but I ended up going way further into the consultancy side of things with
the consultancy team. So NearForm at the time was building really cool stuff for companies that needed, you know, to update. And we went into sort of the realm of digital transformation. And that's... so you're going from, like, Node, because Node, and JavaScript in general as an evented language, is very good for mediation. That's why Pear's also JavaScript: being able to mediate with an event loop within
an evented language, the cohesion's really great. So once you start thinking digital transformations, you come across Conway's Law. And Conway's Law is the idea that there's a relationship between communication and technology: if you have dysfunctional communication in an organization, that organization produces dysfunctional
technology. And I'd argue that most of the technology we use today is in some ways dysfunctional, and dysfunctional to us as people in some cases, but not necessarily dysfunctional to the organization, because it's intentional that they, for instance, don't allow you to give feedback on certain things, because they control the app experience to that level, right?
That sort of thing. But even the unintentional stuff leaks out, you know. So what we did with the digital transformation stuff was outline a path for companies to move onto a technology architecture that tweaks the way they
communicate at the same time. And if you can do that, you can kind of double-bubble it, where you give them the digital transformation they need in technology, but you've also got to work with them at the front lines and the executive-suite levels to basically be like: this isn't just technology, this is
culture. And honestly, from my perspective, you needed that to kind of grease the wheels. When something's very, very small, I don't think you need that. But when it's at a certain scale, even when it's quite large, you still need that sort of greasing of the wheels, as I
say. But I think from my observations, what it seemed like to me is that a lot of the time the major factor is a technology change, because executives change over time, frontliners change over time, but the culture and the way things are done are
constrained by the technology. So one of the things I did quite a bit of was creating conventions of how to develop within an organization, and then enforcing that through actual tools and code, which I'd written specifically for that purpose or used from the open source world. A really obvious one is a pre-commit hook. As an example: say you need to lint your code before you commit it, right?
Yeah. And you say, OK, everyone, we're all linting our code before we commit it. Cool. Everyone goes, yeah, no problem. And then no one lints their code, right? So, OK, maybe you put CI in, and now you're constraining it so the unlinted code doesn't get into the code base, but you're still using up CI resources by not doing it.
You take it onto the user's machine: the pre-commit hook runs, and now they can't commit unless they fix the linting issues. So that technology constraint, as small as it is, can make a massive difference as you scale it up. If you have thousands of people suddenly using a pre-commit hook, the reduction in CI compute is massive, but also the communication changes. People aren't mad at each other anymore for not linting the
code. Because it doesn't happen as often. Because it doesn't happen at all, because they can't commit unless they force it, and then that's a whole different... That's what I would do, yeah. I mean, there's that, but then that's different to being like, oh, sorry, I forgot. Then it's deliberate. And so now the communication's clearer and you can deal with it more directly. The technology is clarified, and now there's less
dysfunction. That's just a super small example, but you see how something small can have a big effect. And so with all the consultancy and all of that, I was also speaking at conferences, and I spoke at a conference in Oslo about microservices, and someone else was there doing a talk about Merkle DAGs. OK, very specific, yeah. So Bitcoin uses Merkle trees. I've heard of this before. I'm probably going to butcher this explanation.
But basically it's to do with, you know, how you've got the blockchain and then you've got the hashes of the hashes and all that kind of stuff. Somewhere in there, there's this thing called the Merkle tree that's part of that whole blockchain process, right? Merkle DAGs are a kind of inverted tree something-something graph, I can't remember. Which is funny, because it was Mathias Buus who gave the talk, and he's the CEO of Hole Punch now.
And we had a chat at the end, and I was asking him more questions about the Merkle DAGs. It's pretty funny that I can't explain it now. Was it out of curiosity, purely? Yeah, well, I was like, wow, this is really interesting. But remember, I'd looked at Node and Bitcoin at the same time, and I'd read the Satoshi white paper, the solving of the two generals problem, and all that kind of stuff, very early on.
And so when I saw him talking about this stuff, I was like, whoa. Because I'd been doing the consultancy thing, and, you know, I would do three-month gigs, so I'm going in and out, in and out, in and out. Completely different. Yeah. But I'd also had an awareness of Bitcoin and what it was doing. And, I don't know, personally I guess I'm a maxi, because I don't really rate the other coins. The breakthrough was Bitcoin.
Everything else is experimentation. Exactly, yeah, it's experimentation. So that then led to... well, that didn't lead to it, but skip forward a bit further, and Mathias is actually working with NearForm, also as a consultant and as a specialist in general. We also did performance work. Myself, Matteo Collina and Mathias Buus were performance specialists as well. Like, how do you make the code
really, really fast? That's another thing is like if you can, if you can speed up the speed at which a technology runs within an organization and you still have an effect on the communication, right. So, so Matthias had been experimenting with peer-to-peer stuff from, he's Danish, right? So like the Danish, the Danish sort of education system is, is fantastic and you sort of given time to explore and all of that.
So he's come out of it, out of the education system, got his, I think his math degree or whatever it is. And then he's been like, oh, I like, I like BitTorrent or whatever, right? And he started messing around with that, right? And gone from there. Fast forward a bit later and he's created this thing. Well, there was this thing called DAT, sorry, which was, which was an academic funded program for transferring large amounts of data because academia has a need for that, right?
And, and they, you know, back then there wasn't even really cloud offerings. So this was like really novel. So, so DAT was born from that. And then the the, and there's like a whole dat community around this that exists online. That's a peer-to-peer community. Shout out to the Dat community. But within that there's hypercore. So the the the primitives of that, which after a while sort of sort of dwindled off. OK.
They went into a company that Mathias then started called Hyperdivision. OK. So Hyperdivision was a consulting company for peer-to-peer stuff. They were able to work with crypto trading platforms and all that kind of thing, to create peer-to-peer systems that work within that company, because a peer-to-peer approach, you know, can actually be a lot more efficient.
And when you're talking about trading and different things like that, and you're talking about robustness and antifragility, all these things come into play with peer-to-peer systems. So that continued the
investment into the stack. Fast forward a little bit later. I also did this thing with NearForm, or rather with the OpenJS Foundation and the Linux Foundation: I've written the OpenJS Node.js Application Developer (JSNAD) and Node.js Services Developer (JSNSD) certifications and exams. It started as a community project and all of that, but I'm the author and lead. And it came to a point with NearForm where it seemed like it was time to sort of step away but carry
on doing that stuff. So I stepped away, carried on doing that stuff, and then went freelance for a bit. I did some freelance work: I worked with Telus a little bit, the Canadian telecom, some other stuff. And then Mathias contacted me, I was just in between gigs, and he said, are you up to much? And I was like, no. From there
I went to Denmark and met this guy, our CTO, called Andrew, Andrew Walsh. He's from a sort of technology-neuroscience background and is an extremely good algorithmic kind of programmer in terms of the peer-to-peer algorithms and stuff, as is Mathias. But they bring different things, right.
So we started talking then. And there's one other guy, sorry, who wasn't there at the time, but the other guy that's involved is Chris Dietrichson. He's a guy from a chemistry background who found out that he really liked thinking through cryptographic algorithms. Yeah, so he's our cryptographer,
but he can also code. So he's a coding cryptographer, whereas Andrew is a coder who writes the things on top of the cryptographic algorithms, like the sort of communication algorithms and a bunch of other stuff. He's the CTO. So anyway, we started talking about a microkernel, that's what we called it at the time, which is essentially what Pear became. And it's just a platform, a cross-platform, cross-operating-system platform, for running
peer-to-peer applications. We didn't call it Pear at that time. I love the name, by the way, when it comes to peer and pear. Yeah, it's so good. That was Paolo, our Chief Strategy Officer. He's the CEO of Tether and a bunch of other stuff as well. He came up with that name. But where was I? Sorry, microkernels. Yeah. No worries.
So we started talking about that, and that's sort of where we formed the basis of what we were going to do with Hole Punch. And we hired our first front-end guy, JB, right in the early days. And what I did was, I just wrapped Electron in a command-line thing. Before Pear was called Pear, it was called Hole Punch, and you just wrote holepunch dev, and that just opened Electron with a file-watcher reload thing. OK. So Keet was started...
When we started building Keet, we built it with this holepunch dev command, just straight Electron. And so Keet development could start in parallel, we had immediate utility, and we could keep on building this microkernel. So that's what we did, and we kept on with it until we got to multi-platform with multi-app, sorry. So we made it so that Pear can run multiple applications
with a sidecar situation. When it started out, all it could do was run one peer-to-peer app, you know. But the thing is, the Pear runtime really is supposed to be as thin as possible, a sort of lateral piece of code. Because most of what we want to do is keep all of the core functionality, everything, outside of the core. We want that in ecosystem modules, because we can iterate
those as fast as we like, you know what I mean? And you've got semver control and all that kind of stuff, the normal sort of development process that people can use. And we want to keep the core as small as possible, because that minimizes breakage; we don't have any codependency issues or anything like that, right? So once we could support the native primitives in Electron
for the P2P stuff, then we could just use the modules that are already available as open source modules, like Hypercore, Hyperdrive, Autobase and so forth, to build the Keet app. And at the same time, the Pear runtime platform is being built using those same modules, to facilitate, say, pear stage and pear seed and those things I mentioned earlier, right? All the sort of DevEx
and deployment features. And then once we were at a place where we were like, OK... When was this in time, by the way? The 2020s, or...? Oh man, I'm so bad at timelines. So we went open source on February 14th of this year, Valentine's Day. That's when I saw it first. A Valentine's pear, yeah. Nice. But we decided we would go live, I think, roughly October the year before. That's where we were like, we'll go live now,
but we'll do it really well, you know? So yeah, that's really how it all came to be. It's almost been a year that it's live, then. Yeah, yeah, yeah. And I think we've got a release coming out today, actually, 1.4.0 should be going out, or at least this week. Although if you watch me, I'll often say we've got a release coming this week, and sometimes it's
not. But yeah, at the minute I've had a few big releases, because we've had to wait for something else, and while you're waiting for something else, more things are happening, and so it's like chunking up the release. But I want to get to smaller incremental releases as soon as we can. So after this one, probably trying to do more rapid
releases. Our release strategy is really cool as well, actually, because when you stage to a drive, we also have this command called pear dump. So we have a development key that we just use internally. I use that development key to verify everything's fine: I run from that key, and I can dump from that key into another folder, and then I stage that
folder. So that folder is now my staging key, which is what we use for everyone in the organization: QA and everything checks that everything's working on the staging key. And then we dump from the staging key to the release candidate key. Same thing, same process again, but the release candidate key is different. And this is something that we need to put into the DevEx of Pear, we don't have
a really easy way to do this right now, but the release candidate key is a multisig key. We've got a Keet room where the stakeholders are, and you need three out of five signatures in order to sign off on a release on that key. Interesting. Sorry, that's not the RC key, the RC key is the one before the production key; the production key is the multisig one,
sorry. So we go dev key, stage key, RC key. The release candidate gets the most rigor applied to it, and that's the one with the changelog, it's the final thing. And then that goes on to the production key, and then you have to have three out of five multisig signatures. And again, this is all decentralized, it's all completely peer-to-peer. Yeah. And that's how we gate production. Nice. Yeah. Was it there from the start, or how did it grow to that
organically? Because that's quite an elaborate approach, and I love the multisig at the end as well. Yeah. So it started out with just my dev key, which we were using internally. And then we were like, we'd better put a staging key on there. And then we went, OK, well, we probably need to do multisig, and so then we had a production key. And then it's like, you know, let's put an RC key in the
middle. But the thing is, because it's just peer-to-peer and there's no infrastructure to configure or anything like that, you can just do that. You can just inject it into your process. It's like, whatever you want, bro, whatever you want to do. It's so flexible. And we probably will evolve it more in future.
But using the Keet room to do the multisig is a key thing here, because there's this real symbiosis between Keet and Pear. You've got this communication layer, and within that communication layer you have identities, and then you have this ability to author peer-to-peer apps, right? And that brings me neatly to authoring with identity. So this isn't a feature yet.
It's coming out soon; we're waiting on a few things in the stack to be done. It's all about prioritisation. In the end, yeah. So when you create a Keet identity, you can grant a sub-identity to Pear, and if you do that, Pear will then sign your applications with that sub-identity. So you can then release a peer-to-peer application, and we can know: it's Patrick in this Keet room, that guy, that identity is behind this application.
And now we have essentially a web of trust, because when you run an app, even currently, if you try to run a Pear app you've never run before, it'll say: do you trust this application? What we'll upgrade that to is saying: do you trust the application, or do you want to trust the author? If you trust the author, you'll run any of their applications. Yeah. Or we could have, like, a time limit... we could do many things
with that. So we're now nearly at a place where there is an interaction between Keet and the Pear platform. But there's no reason you can't grant sub-identities to other applications as well. Also from Keet? Yeah. Yeah. So what you then have is this, like, immutable identity, a social space that isn't centralised. The data and the identity are still inherently valuable, but we don't have to centralise to avail of that value. The ability for other apps to use Keet...
Imagine a gaming app, right? You use your Keet identity, you grant your Keet identity to the gaming app, and now that's who you are within that gaming app. Or a trading app. There are varying levels of trust here. But once you know that a person is a person, and that's represented to you by Keet... and you can also verify it, we should actually make some command-line tools to verify this at a software level.
But cryptographically speaking, that is that person, or someone using that person's device, which means that they're responsible for it. Exactly. So you can't spoof this, and that's going to be so powerful moving forward. And that really does open up the way for the commoditization of resources and compute as well. That's one of the essential components of that. I love it, man.
I think this conversation especially... and I had a previous conversation with Adewale Abati, who's in the Web5 space, and he talked a lot about identity. So I've been on this kind of rabbit hole with the guests, and you can probably see it. But I'm going to look forward to re-listening to this episode and diving deeper into this technology, because I think it's fascinating. Before we round off, is there anything you still want to share with the audience listening?
One thing I was trying to say earlier and forgot was about the TB transfer stuff. The other thing about it is, if you don't have a centralized entity in the middle, then you don't have any incentivisation to compress the video feed. Oh yeah. So when you're doing a live stream... Keet also has this broadcast feature. I don't think we have video broadcast yet, but it's coming.
You could broadcast, again, to the entire planet if you wanted to, hypothetically, but try and break it. Seriously, try and go big. You'll struggle to break it. It's really good. But you could do, like, Twitch-style broadcasting in the highest quality possible. Yes, because what we actually had to do was add a button to Keet to reduce the quality of the video feed that
you're sending out. We set it on average quality by default, because if everyone's setting high quality, the CPU thrashes and so on and so forth, right? It's just too much for the devices. And we'll do more stuff around that, and a lot more things. We don't have recordings yet, right? That would be a really good feature to have. If I ever do remote podcast episodes again, I'll definitely try it out.
Yeah, you'd want the recording button to be there. I mean, if you don't have a recording button, you can just record your screen. But what we really want to do is take the original highest-quality sources of every video stream, even if people aren't viewing in highest quality, and then get those transferred to the peer that wants the composite, and it should composite them automatically.
And then you get this really high-quality result of your stream. Which, again: prioritisation, things to do. But yeah, just to shout out: Keet isn't just a text messaging app. The video and audio work on mobile, they work really well. Any bugs you see are probably going to be UI stuff and not the actual algorithm. And there's still a lot of iteration going on, UI and stabilization happening at the moment, like lots of investment in quality and stuff.
So it should be stable moving forward as well. Awesome, man, awesome. Thank you so much for coming on. This has been a blast. Thank you so much for having me. Thanks, man, this was a lot of fun. I'm going to round it off here. I'll put all of David's socials in the description below, as well as some useful links for you to read up on. Thanks again for listening, and we'll see you in the next one.