
#20 – John Woods, Alessandro Cappellato, Rob Moore: Insights on Algorand's Voyage into 2024

Jan 19, 2024 · 1 hr 17 min · Season 1 · Ep. 20

Episode description

Welcome to the AwesomeAlgo Podcast!

In this episode, we explore the latest news in the Algorand ecosystem and the AlgoKit project. Joining us are experts John Woods (CTO, Algorand Foundation), Rob Moore (CTO, MakerX), and Alessandro Cappellato (Head of Product, Algorand Foundation).

Highlights include:

- John Woods discussing Algorand's consensus updates and his vision for protocol decentralization.

- Rob Moore on the challenges and innovations in developing AlgoKit, focusing on its impact on developers.

- Alessandro Cappellato detailing AlgoKit's design philosophy, aimed at meeting developer needs for seamless integration.

The episode features a roundtable discussion on the future of Algorand, decentralized systems in software development, and Algorand's industry influence. Closing thoughts provide insights into blockchain's future and advice for aspiring engineers and startups in blockchain technology. If you are an Apple support engineer listening to this please help John with his feedback id - fb13463773 :-)

For more information:

- Algorand Foundation: https://www.algorand.foundation/

- Algorand Foundation's GitHub: https://github.com/algorandfoundation

- AlgoKit GitHub: https://github.com/algorandfoundation/algokit-cli

- AlgoKit Puya dev preview: https://www.youtube.com/watch?v=4i44Wd1R5Ao&t=253s

- Proof of State show with John, Alessandro, Rob, and Ryan: https://www.youtube.com/watch?v=yeXuThD4PUs&t=19s

MakerX:

- Website: https://www.makerx.com.au/

- Museum of Data History: https://museum.datahistory.org/

- Subtopia: https://subtopia.io/

Extra references:

- Rob Moore's talk at NDC Sydney '18: https://www.youtube.com/watch?v=rSY-zqDfc_s

- Awesome Algorand repository: https://github.com/aorumbayev/awesome-algorand

- Awesome Algo YouTube channel: https://www.youtube.com/@awesomealgo

Enjoy the episode, and thanks for tuning in!

Transcript

Hello dear listeners, welcome to a milestone episode of the Awesome Algo podcast, episode number 20. Today we are diving into the latest news in the Algorand ecosystem and the AlgoKit project, hopefully sprinkling in a bit more technical detail and delving a little bit deeper into some of the features that we think were quite impactful in terms of deliverables in 2023, and obviously some things to get excited about this year and going onwards.

This episode is a bit of a milestone for me personally, not just in number but in the format as well, because this is the first time we are hosting a roundtable discussion with three distinguished guests who will offer us an insider look into the latest advancements and exciting developments in this space. First up we have John Woods, the CTO of the Algorand Foundation. John joined Algorand in 2022, bringing a wealth of experience in blockchain technology and software architecture.

His impressive resume includes working for companies like Cardano, Informatica, and Consensys, and he is known in the Algorand space as a leader with a deep commitment to enhancing the developer experience and integrating user-friendly programming languages such as Python into the blockchain technology powering Algorand. John is a passionate advocate of decentralized, self-sovereign, privacy-preserving technologies.

I am eager to hear his insights, especially on the recent lambda updates that are revolutionizing transaction finality speeds. Next up we welcome Rob Moore, chief technology officer at MakerX, an Australian venture-focused digital product firm. Rob's expertise in leading teams and innovating in the tech space, particularly in mobile, web, ethical web3, and cloud development, is truly inspiring.

He's been instrumental in steering MakerX's explorations into the web3 ecosystem, focusing on ethical and pragmatic approaches that do not abandon modern standards of software development and modern software engineering practices. As a colleague and a mentor, Rob's insights are invaluable to me personally, and I'm sure they will be to you too. And last but certainly not least, we have Alessandro, head of product at the Algorand Foundation.

With an MBA background specializing in digital finance, Alessandro has seamlessly transitioned into software engineering, combining technical and business skills to deliver innovative solutions. He is at the forefront of product development and strategy, playing a pivotal role in overseeing the product vision and roadmap for Algorand's global blockchain platform, with his primary focus right now being the AlgoKit project.

His unique perspective on marrying engineering with digital finance promises to bring a fascinating dimension to our discussion. Today we'll cover a range of topics, as I already mentioned, and with that, episode number 20 of the Awesome Algo podcast starts now. And with that, I would like to start with John and the first topic as I mentioned: some introductions, obviously, and a recap on AlgoKit and Algorand's technological advancements, because there has been a lot of exciting news and changes.

So given your role as CTO of the Algorand Foundation, what do you think are the most significant technical milestones that Algorand aims to achieve in 2024? And how do you think these advancements enhance the developer experience on Algorand, and perhaps push the whole industry? Because I think we are in this very interesting stage where you have a lot of companies, right? There are like 200,000 to 300,000 engineers in the world working in it. So it's still a very minuscule area.

Obviously, there are market caps of billions, but it's not in the big leagues just yet, but a lot of companies are finally starting to realize: okay, there's too much competition. We all have the same goals, right? We need to push towards standardizing. We need to push towards building better frameworks to work with the legal aspects of applying this and improving the accessibility.

So I'm hoping that this is actually a sign that the industry is finally getting to a stage where everything is maturing and people are realizing that 100 L1s or 100 L2s are not going to make it if they constantly continue competing with each other. I totally agree. Hello, everyone. Great to be here. Thanks for the opportunity to speak. And so in the year and a half that I've been working on Algorand, it's in the best shape I've ever seen.

The protocol is in great shape and we've got an incredible roadmap, and I'm going to talk about some of those things. The developer experience, which is core to Algorand's DNA, is in the best shape it's ever been in, thanks to the work directly of Alessandro, who has been leading product for the foundation with a Jobsian level of care, and MakerX, our technical partner, who have been implementing Alessandro's vision. And then finally, the ecosystem as well. It's on fire.

I'm seeing so much excitement from people. I'm seeing so many new projects brought to the platform. Our discords are electric. The forums are alive. And so, you know, for the, I guess, it's an all-time high in terms of excitement, in terms of positivity, and in terms of work in the last year and a half. So I'm in a happy mode. I'm feeling good. And so, let me walk you through some of the things that we're doing. So on the protocol side, you could look at the various things we're doing.

We came out with our roadmap yesterday, or I think two days ago, on the 17th or 18th. And so, all the days are blending into one. And so, our roadmap basically defines the things we're going to do in 2024 on the protocol side and on the developer experience side. But if you look at the protocol, I mean, there are certain parts of it that kind of dovetail well together or marry well together. For example, consensus incentives, which is, of course, this change.

We're bringing this tectonic shift that's going to come to Algorand where individuals get rewarded in Algo for producing blocks and validating transactions on the Algorand network. Essentially, the pure proof of stake version of mining, you will now get, you know, by the middle of this year, you'll get rewarded for that act. And so, this has two major side effects.

By rewarding people for validating transactions, running nodes and staking their Algo, we increase the number of nodes that are active on the network. And so, this decentralizes the network. It ensures that there are lots of data propagation paths through the network because the nodes are all interconnected. So, it becomes this dense network of nodes. And it also critically secures the network.

So, the more Algorand that's staked, the more expensive it is and the more difficult it is to mount any kind of attack against the decentralized network. So, that's stage one with consensus incentives. And looking at another part of the roadmap that we're bringing this year, which is what we call the Capablanca variation in our chess-themed roadmap, we're going to be bringing peer to peer as a network propagation topology.
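The stake-weighted security argument above can be sketched with a toy simulation. To be clear, this is an illustration only: Algorand actually selects proposers via cryptographic (VRF-based) sortition rather than any central lottery, and the account names and stake figures here are made up; only the proportional-to-stake odds are the point.

```python
import random

def pick_proposer(stakes: dict[str, int], rng: random.Random) -> str:
    """Pick a block proposer with probability proportional to online stake.

    Toy stand-in for pure proof of stake: the real protocol uses
    VRF-based sortition, but the stake-weighted odds are the same idea.
    """
    accounts = list(stakes)
    return rng.choices(accounts, weights=[stakes[a] for a in accounts], k=1)[0]

rng = random.Random(42)
stakes = {"alice": 60, "bob": 30, "carol": 10}  # hypothetical online stake
wins = {a: 0 for a in stakes}
for _ in range(10_000):
    wins[pick_proposer(stakes, rng)] += 1

# Each account's share of blocks tracks its share of stake, so an
# attacker must acquire and stake a comparable amount to matter.
print(wins)
```

With this setup the split lands near the 60/30/10 stake ratio, which is why getting more honest stake online directly raises the cost of an attack.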

And so, what this means is that we're shifting away from the current standard, which is where participation nodes or consensus nodes, as I'm going to call them for the rest of the interview because I think a consensus node is just a simpler term because that's what they do. These nodes are running consensus. Right now, they're not connected to each other. They don't talk to each other.

Instead, they talk outwards to this kind of ring of relays, which propagate their work, the messages, the transactions, the proposals, these relays propagate that and the consensus nodes just talk to the relays. And so, what this means is that we have this kind of bunch of entities that are responsible for the decentralization of the network and they're controlled by a small number of entities. I don't know, 10, 15 different entities will run these relays.

And so, we want to improve the decentralization there. And so, with our release of the Capablanca variation, or peer to peer networking, we're going to make a change to the network topology so that consensus nodes can now start talking to each other directly. They don't have to go via this relay middleman. And so, this decentralizes the network. It decreases the reliance on relays and on the Algorand Foundation.

And it means that even without an Algorand Foundation, even without an Algorand Technologies slash Inc., we would still, or users will still have the ability to use Algorand without any authority or any oversight. And the blocks and the transactions, et cetera, could propagate directly from user to user or peer to peer. And so, this is hugely important. And so, you can see how these two things married together, right? Just like chocolate and orange, strawberries and ice cream.

What else do I like? Coffee and milk. You know, they... How many oranges, though, John? Four, the four oranges. The reason they marry so well together is because, first, we increase the number of nodes and the number of pathways. We increase the security of the network. Everyone's running these nodes. And then, boom, we drop peer to peer. And now, all of a sudden, all those nodes that have just sprung up, they're now talking to each other.

And we have this very dense data propagation pathway. So, super cool. And so, consensus incentivization. We have this chess-themed roadmap. We've given all of the major milestones a name. And so, consensus incentivization is the Réti. And then, peer to peer is the Capablanca variation. These are moves or opening moves in chess. And they all have some meaning, which you can read on our roadmap at the Algorand Foundation website.
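The difference between the two topologies John describes can be illustrated with a small reachability check: a hub-and-spoke graph (consensus nodes that only talk to relays) versus a mesh (nodes talking to each other directly). This is just a sketch of the connectivity argument, not Algorand's actual gossip protocol; the node names and graph sizes are invented for the example.

```python
def reachable(adj: dict[str, set[str]], start: str) -> set[str]:
    """Return every node reachable from `start` (simple graph search)."""
    seen, frontier = {start}, [start]
    while frontier:
        for nxt in adj.get(frontier.pop(), ()):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return seen

nodes = ["n1", "n2", "n3", "n4"]

# Relay model: consensus nodes only talk outwards to a relay hub.
relay = {n: {"relay"} for n in nodes}
relay["relay"] = set(nodes)

# Peer-to-peer model: consensus nodes talk to each other directly.
mesh = {a: {b for b in nodes if b != a} for a in nodes}

# Both propagate messages everywhere while healthy...
assert reachable(relay, "n1") == set(nodes) | {"relay"}
assert reachable(mesh, "n1") == set(nodes)

# ...but knock out the hub and the star topology partitions completely,
no_hub = {n: set() for n in nodes}
assert reachable(no_hub, "n1") == {"n1"}

# while the mesh keeps surviving nodes connected if one peer drops.
degraded = {a: {b for b in mesh[a] if b != "n4"} for a in nodes if a != "n4"}
assert reachable(degraded, "n1") == {"n1", "n2", "n3"}
```

The mesh's redundant pathways are the "dense data propagation" being described: no single operator sits on the only route between consensus nodes.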

Did you have to go and research chess moves, John, to learn all of them? So, like, I learned chess from a great... A grand-uncle of mine one day on Christmas. Oh, wow. Yeah. And I'm terrible at it. But it's a beautiful game. I think I lost a little bit of love for it when I realized it could be brute-forceable, you know? It's like, there is a set path at any point.

And so, to me, that deterministic nature kind of ruins the game a little bit, whereas you may not have that with other games like poker and stuff like that. But like, of course, I respect these, like, hive-mind people who are able to just look at the board and think five-ply and like... And it's an incredible skill. And of course, it's kind of... So I think it's intellectual, but also still quite sexy. So the team, and it was nothing to do with me, they just asked, hey, do you like this?

And I'm like, yeah, I like it. So they came up with this idea. And then they went and selected the different moves and, you know, the relevance of the different moves. Like, for example, we're doing our non-archival relays and we're sacrificing archival relays. So they call it the Queen's Gambit, which is kind of like where you sacrifice a pawn. And so they came up with all this great stuff. And I think they've done a wonderful... It's a pretty impressive job, I think. Yeah, totally.

And so maybe, you know... So looking at consensus incentivization and peer-to-peer, I would class them as decentralization and security. And then if you look at some of the other things we're doing on the protocol, as an example, dynamic round times, that's not focusing on decentralization or security. That's focusing on improving the efficiency of the network and how fast blocks propagate. And so indeed, how many transactions per second we can have.
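The round-time lever is simple arithmetic: for a fixed per-block capacity, the throughput ceiling scales inversely with how long a round takes. The figures below are purely illustrative, not Algorand's actual parameters.

```python
def theoretical_tps(txns_per_block: int, round_seconds: float) -> float:
    """Upper-bound throughput for a chain sealing one block per round."""
    return txns_per_block / round_seconds

# Hypothetical numbers for illustration only.
slow = theoretical_tps(10_000, 3.3)
fast = theoretical_tps(10_000, 2.8)

# Shaving half a second off the round time raises the ceiling ~18%.
print(round(slow), "->", round(fast))  # 3030 -> 3571
```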

So I think, though, you know, Algorand was running well. We didn't have to focus on these things. It was fast enough. It's, you know, one of the fastest blockchains out there. It was decentralized, I think, enough, you know. Certainly there's different lenses. You can look at decentralization, whether it's the distribution of the token, the data propagation path, the actual active consensus and who controls the production of blocks and the minting of blocks.

So there's different lenses you can view. But I think functionally, if you look across, we were pretty good, but we didn't want to stop and just rest on our laurels and say, well, it's good enough. We wanted to strive for better. And by bringing incentivization to consensus, which is antithetical to Silvio's original thought, you know, that consensus will be executed, you know, altruistically. But you know, empirically, over time, we've looked and we've not seen that happen.

So we're kind of saying, hey, okay, you know, that contention didn't hold in the real world. And so we're pivoting to make it better. Bringing peer to peer. We don't have to do that. It's functioning well, but we want to make it true to the value proposition of cryptocurrency, web3, decentralization, self-sovereignty. We want these things to be true. And so that's why we're focusing on that. So that's the protocol. I think it's in pretty great shape.

And in 2025, we've got some even cooler things coming in. I think we can discuss it a little bit later on. So now to probably the most important thing for me: the developer experience. When I get a new MacBook, and people know I like my MacBooks, although Apple are pissing me off a little bit recently with some of the bugs that I'm finding, and I reported them and they're not getting fixed. John, John, John. Yeah. Yeah. So hang on.

Let me just see if anyone's watching from Apple. Let me just see here. Give me one second. I'm going to put that here. It's really annoying me. I'm just signing in. Just give me one second. Okay. If you can please look at feedback request FB13463773. It's driving me insane and I'm a big customer. I will make sure to include it in the description of the episode actually. So if there's an Apple support engineer listening, you can come to John directly.

If you're a support engineer at Apple and you like Algorand, I don't know, I'll reward you with some algo or something like that. We can do a deal. Just fix the bug. So when I get my new MacBook, the first thing I do is sit down and set it up perfectly. I care about how my terminal looks. I care about the fonts. I care about whether it's using ClearType. I care about installing Powerlevel9k. I care about installing zsh. I care about my package managers.

And I think to a lot of people, the developer experience is so core to their day to day, they feel this exact same way about how they develop and engineer things. And so people will have preferences. They have their Visual Studio Code set up. They have themes installed. They have Vim bindings installed. They want to work in a modern environment that feels good. This is people's bread and butter. It's like the, you know, if you're working on a construction site, you care about the tools you use.

You care about you have high quality hammers, high quality drills. You know, if you're driving a taxi, you want to make sure that the car you're driving for nine hours a day is comfortable. And so these are people's tools for work. And it's essential that we make it fun and as easy as possible and take the friction away. And so, you know, when I first started in Algorand, I was interviewing with Stacey. She said to me, what are the things you'd improve?

And the most obvious glaring thing, because prepping for the interview, I was trying to, you know, just stick a contract on mainnet, see what the experience was like. And I was so confused. Reach, Algo Builder, TEAL, PyTeal. What's the language? Is it Python? No, it's not. It's kind of like bindings. I'm not sure what's going on. What development environment do you use? I just... I know everyone uses Visual Studio Code. What are the good plugins for that? No, there's not.

Is there any static analysis? No, there's not. Is there a command line tool like Truffle? No, there's not. Is there Hardhat? No. It was like a mess. And so, you know, Alessandro and I sat down like week one and we're looking at this and going, what are the things we need to do to fix the developer experience? Because it's got to feel like that new MacBook experience, you know. Maybe not perfect, but fun; it draws you in. You want to develop for it.

It gets out of the way, you know; the tools get you to expressing your intent, your business logic and your idea. And so, AlgoKit is that. It's been a revelation. It's been probably the work I'm most proud of that we've done, that I've done, or that the CTO kind of function has done. And so, Alessandro has been the genius behind it, the vision behind it, the strategy behind it. And MakerX, like I mentioned earlier on, have been an incredible partner.

I've worked with so many vendors over the years through all the jobs I've had. I've worked with many outsourced tech partners, if you want to call it outsourced. I don't see them like that. When we work together, it's like they might as well be members of the foundation. They work so closely with us and they care so much about the output of their, you know, their product.

But what we've built is so much more, you know. It's revolutionized the experience in Algorand, because Algorand at its heart is a platform for execution of code. The AVM is a virtual CPU. It's got instructions. You write code, we call the code smart contracts, and the smart contracts run on the AVM.

And so, if Algorand has no developers and doesn't have great apps and we don't make it easy, all it is is a network where people can send Algo back and forth to each other. So that's why it matters so much. Algorand's raison d'être is to provide a platform for people to express business logic on the chain. And AlgoKit has provided what I consider world class, industry leading tooling for this.

And so you're in Visual Studio Code, it's a normal experience, it's a one-button compile and deploy. It's CI/CD. It's command line tools to help you build, test and deploy. It's frameworks. It's languages that you know, like Python, and soon to be expanded. And so, what the team has managed to achieve opens the floodgates for everyone to build on Algorand. I couldn't be happier. I think it's probably the most important thing as an entire group that

we've done. Without this, the oxygen would have been sucked out of the room. We needed this. You can have a great protocol, sure, but if you don't have great dev tools, you're in big trouble. And we finally have great dev tools. And if I may add some remarks here: basically, the thing is that it's also, I would say, based on observations, because for me, it was also rather a pivot to delve into blockchain, and for most part of the journey, it was a process of demystifying

this industry a little bit for myself as well. And I do notice that there's quite a huge, I would say, inflow of web developers, because there's obviously a lot of references and ties to dApps, which in this case mostly have rather simplified business logic, because everything is executed on chain. Well, not everything, right, it depends on the platform, the protocol, but mostly blockchain allows you to simplify a lot of, I would say, execution of the business

logic in these very small, verifiable chunks of logic that you execute. So it's mostly focused on the front end, hence a lot of web development. And if you look at the development industry in general, it's a bit controversial as well, right? There's a lot of complexity in tooling itself. Most web developers spend more than, I would say, 50% of their time dealing with the complexity of their own tools, rather than solving the complexity of the problems that they need to

solve, right? And so it's extremely important, I think, given that there is also dominance and the presence of people coming from web, to essentially ensure that if they are to hop on things like AlgoKit and things that allow interaction with this, you have a great analogy for this actually, a decentralized operating system, right? It's a massive network that serves as a layer where you can execute and deploy your applications. And so I think it's important to ensure that the

tools that we provide to these people are essentially not going to get in their way, right? It should work, it should be convenient, it should be, as you said, pretty much frictionless. And I think that's something that may be a great relief for a lot of web developers, right? You don't have to worry about npm, or some person on npm removing a left-pad package and just halting the entire industry

relying on npm. Things like that are essentially also, I would say, catered for in AlgoKit. But to also expand on the other remark you made: I've also heard this quite often with the recent announcement of the new roadmap, mentioning that, well, obviously, it's a significant, I would say, change. It's one of the biggest changes we had so far,

in the Algorand protocol, right? The change to incentivization, which, I might actually argue, obviously goes against Silvio's initial assumption in regards to altruism, but largely, to be frank, one could say it's also a reflection of the current financial system, right? Altruism doesn't always play nicely when we talk about money.

But at the same time, I think it also still pretty much was in the spirit of Algorand, because I recall one of the points that Silvio was actually making in his very early talks is the ability for the consensus and the core protocol itself to be adaptable to changes in the industry and changes in the environment, basically. It's not like you have this, you know, fixed Bitcoin proof of work protocol, right, that is protected and guarded like a holy

grail. It's actually a dynamic system. It's a software system, basically, that adjusts to the environment. It adjusts to the demands in the ecosystem as well. And I think this goes largely in spirit, because we clearly have a change in the environment. And this is, I think, a response to adapt to this new environment and ensure that this incentivization is actually even further propagated. But thank you for the answer, John. I think this is great.

I would like to give the stage to Alessandro and delve a bit deeper into AlgoKit. Could you perhaps discuss some technical decisions behind one of its most recent, I would say, feature developments and how it addresses a critical need within the Algorand developer community? Yep. Also, hi, everyone. I'm Alessandro, head of product at the foundation. I lead the AlgoKit efforts. So with AlgoKit, when we first started off, we wrote down a few key

principles. And the most important is: meet developers where they are. Blockchain development isn't as easy as web development. You're in a bit more confined space, with virtual machines that are extremely powerful, but they don't have all those layers of abstraction that you're used to in normal web dev or normal programming. So a lot of the work that we did with AlgoKit, and you can see this with the Puya compiler or with the

generators approach that we took, was to remove tools from being a hurdle. If we want Algorand to grow, we need to enable developers to feel comfortable with the tooling they have. And the more hurdles you have to face, the smaller the step you're willing to take as a first step. So my favorite feature on AlgoKit is the client generators, because smart contracts are the business logic. So that is what powers the

heartbeat of your application. But an application isn't only composed of a smart contract. There are plenty of things around it for it to be successful. And translating smart contract requirements to middleware or front end, that was extremely difficult. And so, with the work of the MakerX team and the help of the ARCs in the community, we came up with a standard. And all of a sudden, from a smart contract, you can have a set of functions that are available

outside of your smart contracts to interact with it. This, in my opinion, is one of the things where our developers say: all right, if it's this easy, I can really focus on my business logic and make it as big a moonshot as it possibly can be. Because then I have the client generator that does most of the work for me, and I simply need to connect the dots. This was one of the proudest achievements and proudest releases that we had last year. The other one is the Puya compiler,

always thinking: meet developers where they are. The only thing that is sure in our ecosystem is the AVM. Everything else can and will change. So if you look at languages in normal development, they come and go with trends. It's like, first everything was on the server, then everything was on the client. Now we're moving back to everything on the server. First it was monoliths, then all cloud, everything's a lambda, then

shifting back to monoliths. Industries are extremely trendy and biased towards trends. So there are a few languages that have been there through thick and thin. And one of them, the one that is most taught in schools and everything, the one where we could cast the net wide enough to have a gigantic developer funnel, is Python. So with Python, if you don't know how to write Python but you know how to speak English,

you're already 80% there. That is the key thing. It's extremely similar to pseudocode, to LeetCode exercises. So that is what we are trying to achieve: bringing the simplicity of Python development to something that is extremely intense and difficult, such as smart contract programming. So these are the things that I really enjoy about what we've done with AlgoKit. Then, also, a lot of our development is feedback driven.
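The client-generator idea Alessandro describes (turning a contract's method descriptions into ready-made, type-checked calling code) can be sketched in miniature. This toy builds runtime wrappers from a hand-written method list; the real AlgoKit generators instead emit TypeScript or Python source from the contract's ARC application spec, so every name below is hypothetical.

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class MethodSpec:
    """Minimal stand-in for one ABI method entry in an app spec."""
    name: str
    arg_types: list[type]

def generate_client(specs: list[MethodSpec],
                    send: Callable[[str, tuple], Any]):
    """Build an object with one type-checked method per contract method.

    Toy illustration of 'client generation': validate arguments against
    the spec, then hand off to a transaction-sending layer.
    """
    class Client:
        pass

    client = Client()
    for spec in specs:
        def method(*args, _spec=spec):
            if len(args) != len(_spec.arg_types):
                raise TypeError(f"{_spec.name} expects {len(_spec.arg_types)} args")
            for arg, typ in zip(args, _spec.arg_types):
                if not isinstance(arg, typ):
                    raise TypeError(f"{_spec.name}: expected {typ.__name__}")
            return send(_spec.name, args)
        setattr(client, spec.name, method)
    return client

# A fake `send` that records calls instead of hitting a node.
calls: list[tuple] = []
client = generate_client(
    [MethodSpec("hello", [str]), MethodSpec("add", [int, int])],
    send=lambda name, args: calls.append((name, args)) or f"called {name}",
)

print(client.hello("world"))  # the wrapper validates, then "sends"
```

Calling `client.add(1, "x")` raises a `TypeError` before anything is sent, which is the "connect the dots" experience being described: the generated surface is idiomatic to the host language while the protocol details live behind `send`.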

Because after all, we are building this for our community. And our community has the most important voice in shaping its requirements, because at the end of the day, they're the users, and we are building it for them. And we have seen feedback on our AlgoKit approach in the last few years: here's a huge blob of code. It already works. It's extremely well documented. But at first glance, you could say it might be a bit

overwhelming. We heard that feedback. That's why for AlgoKit 2.0, we are working, in its own way, towards a generator-first approach, where AlgoKit only generates the pieces that you want at that time. And then, as your development journey evolves, AlgoKit will generate a bit more code. So at first, you only do the smart contracts; you

don't need a client right off the bat. You don't need a deployer right off the bat. So we're taking the approach of still giving you everything that you need, but more bite-sized, for it to be more easy going, more easy to start with, and less intense from a mental standpoint. And if I may expand on this, yeah, this is quite an interesting, I would say,

logical continuation of what was initially intended with the templates, right? Because the initial experience was also highly customizable, I would say, but the premise there was actually: okay, for people who are familiar with things like create-react-app or any of the template builders that you would usually get in development, here is a set of pieces that we think is a good starting point for you. Go ahead and, pretty much, you know,

it's up to you to do whatever you want with it. There are some benefits to that approach, but certainly there's the realization that there's a lot more power in actually customizing this experience by allowing developers to treat these pieces as Lego blocks, right? Because you always start small, you always start with certain base requirements, and then you keep expanding functionality, but only functionality that you actually find you need at a particular point

in time. And I guess for some engineers, they will realize that what Alessandro just said is a bit of a spoiler alert. But basically, I think people are going to find this really useful. Thank you for the wonderful answer, Alessandro. I would love to move on and ask a question to Rob, our second CTO. So it's not just one, not just two; we have like a kind of Top Trumps of CTOs.

But as a CTO, looking back at the set of features that were delivered so far and all the work that went into their design: are there any interesting engineering challenges that MakerX has faced when implementing those, and what innovative solutions have you considered to overcome these? And I'm pretty sure we do have

quite a good chunk of examples there. Yeah, for sure. And hi, everyone. So, Rob Moore, co-founder and CTO of MakerX. We're a business that works with startups, corporates and venture builders to build ventures and digital products. And as has been said, we've been working very closely with John and Alessandro and the Algorand Foundation to build out the AlgoKit experience. And it's certainly, I know John was saying it's one of the things he's probably the most proud of

in his career and same for myself because I'm immensely proud of what's been built. I think it's really cool. And yes, this is a really hard question to answer because pretty much every single thing that we've done over the last 12 months has been really interesting from a technical perspective. We've really been pushing the boundary of I guess what is possible and what has been done

in the Web3 industry. And so I could probably sit here all day talking through that. The obvious one that first comes to mind is probably the Puya compiler, because naturally building a compiler is an incredibly large technical challenge, and certainly doing it in the really small time frame that we built it in was very interesting. But I have a suspicion we'll probably touch on the Puya compiler later on. So I'm actually going to pick something else, or maybe two other things. So

the first one is probably the typed deployment clients that Alessandro alluded to. This was a really fun challenge, because if you think about it, you've got a smart contract and the metadata that describes a smart contract. And then the goal that we had was that someone interacting with that smart contract could write essentially type-safe code in TypeScript or Python and have an idiomatic experience with that programming language to interact with that

contract. And that we would abstract away as much as possible all of the intricate Algorand protocol knowledge that you would otherwise have to have to be able to successfully issue transactions that call those methods and pass the right parameters and marshal them in the correct way from the type system of TypeScript and Python through to the underlying binary encoding that the AVM expects based on the smart contract code that you have written. And so that's quite an interesting

technical challenge. And I think in particular, the thing that was really exciting about it when we worked on it, and actually, this is one of the features that I personally did a bunch of work on, it was really fun, is that the sky's the limit in terms of what these clients that we're generating look like: the code inside them and, more importantly, the interface that they expose to

the developers that are calling them; like, that could be literally anything. And so, you know, we were guided by those principles I mentioned: it should be idiomatic, it should hide the Algorand stuff, and it should be type safe. And it was just a lot of fun, I guess, to, you know, iterate and get towards an experience that I myself have used multiple times. And I'm very grateful for that feature when I am using it, because it makes the experience of calling these

smart contracts so much easier to do. And I understand the underlying protocol, you know, stuff and binary encoding, et cetera, quite deeply. But I don't want to be doing that day to day, because that's a distraction from being productive as a developer to implement, say, a dApp or a backend API or whatever it is that's abstracting this Algorand functionality. I don't want to be in that low-level detail of binary encoding, because it's not productive.
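As a rough illustration of what Rob describes, a generated typed client hides that marshalling behind an ordinary method call. The names below (`CalculatorClient`, `add`) are hypothetical, not from the actual AlgoKit generator; real generated clients wrap the Algorand SDK and issue real transactions, while this sketch only mimics the ABI-style encoding step (uint64 values really are encoded as 8-byte big-endian integers):

```python
from dataclasses import dataclass

# Hypothetical sketch of what a generated typed client might look like.
# Real AlgoKit-generated clients wrap algosdk; here we only illustrate
# the idea: typed method signatures plus automatic marshalling.

def encode_uint64(value: int) -> bytes:
    """ABI uint64 values are encoded as 8-byte big-endian integers."""
    return value.to_bytes(8, "big")

def decode_uint64(raw: bytes) -> int:
    return int.from_bytes(raw, "big")

@dataclass
class AddResult:
    total: int  # decoded from the method's binary return value

class CalculatorClient:
    """Stand-in for a client generated from a contract's ABI metadata."""

    def add(self, a: int, b: int) -> AddResult:
        # Marshal typed Python args into the binary form the AVM expects...
        args = [encode_uint64(a), encode_uint64(b)]
        # ...issue the app call (simulated here), then decode the result
        # back into a typed value the caller can use directly.
        raw_return = encode_uint64(
            decode_uint64(args[0]) + decode_uint64(args[1])
        )
        return AddResult(total=decode_uint64(raw_return))

client = CalculatorClient()
print(client.add(2, 40).total)  # 42
```

The point of the real feature is exactly this shape: the caller works with `int` and a typed result object, and never touches the byte-level encoding.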

CTOs like Rob, I call them ivory tower CTOs, they don't want to get their hands dirty. Rob wants to be kind of standing on a mountain, just kind of like thinking, you know, you know what I'm saying, like not down in the detail. Nice troll, John. Rob is probably one of the most detail-oriented, down-in-the-weeds, doing-the-real-work CTOs I've ever met. So he's brilliant. I'm only teasing. I love it. You owe me a beer for that troll.

Anyway, so that was fun, right? It was fun and it was a big technical challenge, I guess, to whittle down the endless possibilities of, you know, what could a client experience look like down to what we ended up coming up with and making that idiomatic for each of them and including quite hardcore type inference things like when you call a method, it gives you the return

type of that method from the smart contract. You know, so if, for instance, you had a struct with multiple fields, you get a, you know, TypeScript interface with those fields there, and all of the encoding and decoding is taken care of for you. And in your programming code, you have this type-safe return value. Like, it was hardcore to get that working. But the end result is basically magic. And as I said, it was a lot of fun. So the second thing

I thought would be interesting to talk about was the breakpoint debugging. And the reason why is because breakpoint debugging is something that like every developer generally would have some sort of experience of breakpoint debugging. It's a fairly foundational kind of software development skill, right? And the experience that you would have in traditional development and it could be anything like a web app and API, a console app, whatever, is that you attach a debugger to the

running process. And the running process stops at a certain point where you've set a breakpoint, and you can kind of move bit by bit through lines of code and hover over variables and see what they are, et cetera. It makes it so much easier to debug issues that happen in the code that

don't match your expectation when you programmed it. It's an incredibly productive kind of mechanism. Now, the problem with a blockchain like Algorand, it's a decentralized kind of system: you can't pause Algorand and say, can you just stop executing the smart contract on line 3 for a second there so that I can just see what's going on? You can't do that, right? It just fundamentally doesn't make

sense. And so the challenge that we had for that feature was how can we create an experience that feels intuitive for developers that have spent years and years and years learning how to do breakpoint debugging where you attach the process and stop the process and all of that, which we can't do with Algorand, but craft it in such a way between the library code and the

extension in Visual Studio Code, et cetera, that it feels like you've kind of done that. And where we landed frankly went beyond my expectations of what would be possible, and is incredibly exciting. And I must call out the awesome work that Algorand Technologies did in creating the core debug adapter, because they did some hardcore, frankly phenomenal work. And then we were able to build on top of that, integrating it into AlgoKit in this really intuitive experience

that feels, you know, as close as possible to normal breakpoint debugging as you could get. So those two in particular, I think, were really interesting from a technical challenge perspective. Shout out to Jason Paul. One extra thing to that, which is like, people expect debugging because they're used to it from a traditional software engineering background: you get debugging in C, whatever, you know, you get

debugging in Rust, you get debugging in Swift. This stuff has to be built from the ground up. Like Rob just said, like we needed Algorand Technologies to add core things to the fundamental structure of the node and the tooling around the node. Then we needed Rob, Alessandro and the teams at MakerX and the foundation to work together to build up the entire debugging stack. And so it's like inventing fire for the first time. And so

this is a huge technical challenge. I know debugging might sound like kind of a fundamental to some software engineers who may be listening, but we're doing this in a context where it's never been done before. And so I'm just blown away with the results. So yeah, I just want to say very well done on that, because it's critical to a normal developer journey. But again, something that wasn't there a year ago. And also, as Rob mentioned, shout out to the people at Algorand Technologies

for a lot of the work on the debug adapter protocol as well. And I guess, thank you to the big enterprises pushing for some of this standardization in terms of providing protocols to implement debug adapters. Because a great thing about this is that what we did with the Visual Studio Code extension actually relies on a language-agnostic implementation, which implies that it's actually fairly straightforward to extend. Right now, it's

available on VS Code. But that core could actually be extended to pretty much any IDE that supports the Debug Adapter Protocol, which is another important part. Because a lot of the things that Rob was also mentioning in regards to AlgoKit are obviously in line with some of the core principles. And as a developer, you can actually go to the AlgoKit repository and look up the documentation. There's a chapter on the principles, and there are roughly around eight

to nine principles on which AlgoKit operates. And I think modularity is an extremely powerful and important concept that we are also trying to embed into a lot of the functionality and features of AlgoKit. And I couldn't agree with you more on this, Rob. I think those were certainly very interesting technical challenges to tackle and solve. One thing that we do with AlgoKit is that we are championing VS Code, as it's the most common

text editor. But everything that VS Code does, you can do from the CLI. So we are just simply packaging it up a bit nicer for VS Code. But every piece of AlgoKit works whether you're in NeoVim, Emacs, Visual Studio, or Sublime Text, if people still use it. So this is the important thing: it's catering to everyone, but giving a premier experience. And if you're going after...
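The Debug Adapter Protocol that Rob and Alessandro mention is a language-agnostic, JSON-based wire format (originated by Microsoft for VS Code): any editor that speaks it can drive any conforming debug adapter. A minimal sketch of framing a request the way the protocol specifies; the `approval.teal` path and breakpoint line are illustrative placeholders, not real AlgoKit artifacts:

```python
import json

def frame_dap_message(payload: dict) -> bytes:
    """Debug Adapter Protocol messages are JSON bodies preceded by a
    Content-Length header, similar to the Language Server Protocol."""
    body = json.dumps(payload).encode("utf-8")
    return b"Content-Length: %d\r\n\r\n" % len(body) + body

# A setBreakpoints request in the shape the protocol defines; the
# source path here is an example placeholder.
request = {
    "seq": 1,
    "type": "request",
    "command": "setBreakpoints",
    "arguments": {
        "source": {"path": "approval.teal"},
        "breakpoints": [{"line": 3}],
    },
}

message = frame_dap_message(request)
header, _, body = message.partition(b"\r\n\r\n")
print(json.loads(body)["command"])  # setBreakpoints
```

Because the editor side only ever sees this framing, the same adapter can in principle back NeoVim, Emacs, or any other client that implements the protocol, which is exactly the extensibility Alessandro points to.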

I was just going to say, you're basically describing two of the fundamental principles of AlgoKit that we've been very consistent in applying, one of which is meet developers where they are. So don't build something that only works, say, on a Mac in this specific environment, blah, blah, blah. It's like, we're trying to build for whatever environment developers

have, so that they can be productive where they're used to being productive. And secondly, it's the modularity principle, which is: every single thing that we do for AlgoKit is built up of multiple building blocks. And we design every single building block to be able to be used independently, so that if a developer wants to opt out of certain things or craft their own experience by pulling some of the building blocks together in a different way, they're

totally open to do that. And that's like a deliberate thing that we've designed in. And I just want to say, if you take yourself seriously as an engineer, you should be in Vim and you should be in the command line. And I just want to say, my vimrc, I'll just check my vimrc, right? My configuration for Vim is 848 lines. Can you imagine what that's like? How many years have you been crafting that, John? Decades. But actually, I use this, there's a great vimrc bootstrap website,

and it kind of gives you the basics, but Vim is so powerful. So is Emacs, of course. And so command line bros for the win. I like that you sidestepped the war of Vim versus Emacs and just said both of them are really good, so you haven't pissed off half the audience. It's like, yeah, yeah. It's like tabs versus spaces. By the way, the answer is spaces. And what you should do is bind your tab to four spaces,

and then you won't have any problems. And then everything will be fine when you cat files. Exactly. Exactly. And by the way, if there's any other engineering kind of like serious questions you need to answer, we can do it now. We've done spaces and tabs. We've done Emacs, Vim. What else is there? Algorand or other blockchains? Algorand. Unless you want privacy, in which case you can use Monero. Can I think of anything else? Linux versus Mac?

Linux. Linux, I think. If I had to pick one to die, I'd have to pick Apple to die, because it's more important for computing that Linux exists. But on a day-to-day basis, the MacBook is probably the best portable computer I've ever owned, the current generation Apple Silicon. Yeah. And to, I guess, expand on the next topic, which is hopefully going to delve a little bit deeper into AlgoKit. And given that, John, you already provided a great,

you know, outline of higher-level deliverables and goals for this year: is there anything that you would say makes you the most excited in regards to the upcoming AlgoKit features? Right? If we look at the current stack, we are, I guess, calling 2023 the V2; prior to that was establishing the fundamentals of the AlgoKit CLI and things like that. So we already have a huge bundle. And actually, the listeners of this same

podcast can go back in time and listen to episode number one or two, and essentially see the discussions and problems the engineers and smart contract developers used to have back then. Back then, you couldn't dream about an ability to execute a single command that essentially sets up end-to-end tests, unit tests, static analysis, conventions, CI/CD that already has the deployment into things like

Netlify or Vercel. And all of that just out of the box. And you pretty much can just focus on the actual problem rather than dealing with all of this tooling yet again. Going back to that point about simplifying the complexity of the tools: a tool should essentially never be in front of you, it should be beside you, in some sense. But we have all of these great things in the bag, and obviously a lot of improvements are already in the works on each individual track

of the features that are already available for the users. But what would you say are the sort of most exciting things that you think are the logical continuation of what's already available in the AlgoKit ecosystem? Yeah, and, you know, you just went through, it just reminded me of so much of the stuff that's been accomplished there in that preamble. And I just want to say, I'm so grateful that I met Alessandro. I hired someone onto the team who was able to actually do this.

Like, he's created this vision without any instruction. I mean, it isn't like I gave him a whole load of requirements and said, look, here it is in a nutshell. He's just incepted this stuff. And with the incredible, hyper-competent team at MakerX on the implementation, I'm just lucky that I met these people.

Okay, so I'll keep this short, because I spent a lot of time talking on the previous one. I'm most looking forward to Python getting to production level, which is mid-2024, or sorry, early 2024, like March, something like that. And so that'll be production: you can use it in your code day to day. And it just makes building and maintenance so much cheaper. You just have to hire a Python

person, you don't have to hire, like, a TEAL expert. And it's also going to make it fun and super inclusive for the whole world, from teenagers in school all the way to quantitative analysts in FX options brokerages. That's number one. Number two: moving that line-by-line debugging that Rob talked about to support things like Python. I mean, it's a hope, it's not an absolute certainty, but that's one of the goals for this year. And you'll be able to stay in

Python as you debug. Super amazing. And then, yeah, I think those are my two most favorite. I guess as well, reusing that incredible pipeline, that LLVM-style, efficient pipeline for compilation that we built, or the team have built, for Python; reusing that, maybe, to swap out the front end and put a different language on. I think that'd be very exciting if we can do that. And I think that's part of this trip. So yeah, that's it. And another one: Nvidia versus AMD.

The answer is an RTX 4090. And so if you bought an AMD card, I'm afraid you've made the wrong choice. But they make very good CPUs. And so I'm a person who uses the 7800X3D over the i9-13900K or whatever. So AMD for the CPU and Nvidia for the graphics card. On point. And controversial, I like it. And to, I guess, expand on this AlgoKit evolution: obviously,

it's an open source project, right, which has its own benefits and challenges. And one might actually be curious about how one would approach measuring, you know, key performance indicators for these things. How would you monitor the reception features get, and ensure that it's actually, you know, developer-friendly and robust? Navigating an open source environment when dealing with the obligations of metrics like that can sometimes be, you know, challenging

and require, I would say, very interesting solutions, basically, in order to achieve that. So on that point, Alessandro, anything you can perhaps expand on in regards to how you try to ensure that it's always on point with respect to the actual demands from the users? Yeah, so first and foremost, tracking is extremely difficult in CLIs. You're not on the web, where there are a thousand different types of trackers, cookies and

everything, and you can know where your user shops and everything. Not on the CLI, and neither would I want that, to be honest: their IP address, their browser, their nationality, everything. It's invasive, and CLIs and tools shouldn't be tracked to that level. They should just simply be tools that someone uses. So thankfully, with the benefits of the modular approach that we set off to do since the beginning, we have split into different libraries

all the different steps of the Algorand development experience. And so we can compare simple CLI downloads, and this gives us a ballpark figure. Then we have client generators, when they're pulled; this is an extremely useful criterion to measure the usage of it. And we can see usage growing week after week simply with the download analytics that come in. Of course, they're not precise, because there's no telemetry, and there never will be. I am here on record to say it.

We will do everything in our power. And we look at all these kinds of things: all of the small pieces that compose AlgoKit are self-standing packages. So we can see what the usage of all those is, and when each is pulled in versus when it's not. That is one way we keep track and measure. The most important part, though, is the first line of feedback. Our DevRel team is extremely helpful, both to the core team and beyond, because we first develop things in an MVP state.

DevRel gives us the initial feedback. Once that is incorporated, we go out for everyone to look at it as a main release. People could well build from branches; it's all in the open, there are no hidden things. But the DevRels are a key part of this experience. Then, of course, we monitor: our Discord channels are extremely active, and we are always there monitoring. We get issues when they arise; they are triaged really quickly. We also run quite

a few bootcamps. Each new bootcamp is a fresh new batch of new developers, because they're usually beginner bootcamps. We can gather feedback from more experienced users on Discord and GitHub, but on the bootcamp side, we have the raw, green-to-Algorand-development feedback. That is important. That is treading the balance between experts and new developers. We should cater to both, because it's just simply a tool. A tool just works.

We see a lot of things that maybe we should tackle either with helper libraries or new functions in the utils libraries from an expert developer standpoint. Then there are things in the documentation, or in how we approach even the information given within the CLI, that we get most from the beginner side, because they are least biased. They don't have a history with it; it's cleaner and more unbiased. If I may just slightly expand on this: I actually quite recently was thinking about what

are the origins of the term DevRel. Where does this come from? What kicked off this trend in the industry, in some sense? I was actually very surprised and happy to learn that, if you look at just the general inception of this notion of a DevRel engineer, there are a lot of companies nowadays that are actually misusing that particular role. DevRel is not just organizing conferences or giving tech talks. DevRel is actually a very multifaceted and very nuanced

role or profession where you actually create an environment that's friendly. That's one of the reasons why I actually got into Algorand: because I've met a few DevRels, and the response and the support and just the inclusiveness of the ecosystem was unparalleled to anything I've experienced so far. Compare that with, say, opening an issue on an open source repository where someone responds in a year because someone tipped in $5 on

a tip button on the markdown. That's one option. Or where someone just tells you, you know, maybe you can implement it: yeah, I'll be happy if you implement it yourself and I'll just approve it, or something.

And then you get a dedicated ecosystem, a Discord channel, where you come in, you ask a question, you get a tailored, personalized response, and you get this very friendly attitude, basically, that sort of makes you want to build, makes you want to expand your knowledge base. And the fact that it's all on this very interesting, very decentralized and very scalable ecosystem that

literally can scale to billions of people. And I think that's a very important aspect to highlight. And thank you for mentioning DevRels as well, and shout out to all DevRels working at the foundation; the folks are doing a really great job there. But to expand on this, to continue with Rob as well: in building AlgoKit features, are there any particular use cases? And once again, I'm also trying to be cautious of obviously your time and how hard it is to find a matching slot

spanning across two continents. Right now we have Europe and Australia on the call. But can you share maybe some use cases where technical ingenuity directly translated into delivering a lot of value for AlgoKit?

Yeah. I think there's one clear example that really truly stands out for me, and that's around the Puya compiler, and more specifically something that John talked about before, which is the fact that we can actually add different front-end languages to it and basically reuse the engineering effort and the thinking and the optimizations and all the other awesome things that have been built into the compiler, but with a different language, without having to

rewrite an entire compiler each time. And I think the origin of that was, look, honestly, when we first dived into doing what turned into Puya, we didn't really know, none of us knew, what it was going to look like. And so we dived in eyes wide open, trying not to remove any options. And the team did a lot of research and were reading a lot of pretty hardcore academic journal articles from people that write compilers for a living.

A lot of them made my eyes glaze over when I saw them. But yeah, the team did a great job in figuring out what it takes to build a really good compiler. And one of the things that came out pretty early was this concept of having different layers, essentially, or stages that you go through, in essentially a pipeline. And whilst on the surface that might seem to make it more complex, because there's probably more code and you've got to have all of these intermediate

representations that you kind of translate between for each layer, what it means is that each layer in its own right is a hell of a lot simpler, because it's doing one thing and doing one thing well, and it's also able to use algorithms that have, in some cases, been proven for decades in that particular layer, because that algorithm is really great at doing that thing.

And once we'd realized that, it became pretty clear to us that the first step after we parse the Python code is to turn it into an intermediate representation that essentially represents a high-level language. And as soon as we had that on our architecture diagram, we were like, oh, we just realized that everything from that point on could be reused, and you can

plug another language in the front. And so from that moment, that kind of, you know, ingenuity, as you said, kind of made it clear to us that what we were building here wasn't just a Python compiler, it was an AVM compiler that happened to start with Python as the language in front of it.
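A toy sketch of the layered pipeline Rob describes: each stage does one job and hands a simple intermediate representation to the next, so a second front end can be bolted on without touching the later stages. The mini IR and "front end" here are invented for illustration and bear no relation to Puya's real internals:

```python
# Each stage consumes one representation and produces the next.
# Swapping the front end reuses every later stage unchanged.

def python_frontend(source: str) -> list:
    """Hypothetical front end: parse 'a + b' style source into a tiny AST."""
    left, op, right = source.split()
    return ["binop", op, ["const", int(left)], ["const", int(right)]]

def lower_to_ir(ast: list) -> list:
    """Middle stage: flatten the AST into linear stack-oriented ops."""
    _, op, (_, lhs), (_, rhs) = ast
    return [("push", lhs), ("push", rhs), ("apply", op)]

def codegen(ir: list) -> list:
    """Back end: emit stack-machine 'opcodes' (stand-ins for real AVM ops)."""
    opcode_for = {"+": "add", "*": "mul"}
    return [
        f"int {arg}" if kind == "push" else opcode_for[arg]
        for kind, arg in ir
    ]

def compile_source(source: str, frontend=python_frontend) -> list:
    # The pipeline: front end -> IR -> code generation. A new language
    # only needs its own `frontend`; `lower_to_ir` and `codegen` are reused.
    return codegen(lower_to_ir(frontend(source)))

print(compile_source("2 + 40"))  # ['int 2', 'int 40', 'add']
```

The `frontend` parameter is the whole point: the later stages never see the source language, only the IR, which is what makes the compiler "an AVM compiler that happens to start with Python".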

I think this is definitely a great example of a rather creative pivot, because if you look into the original plans, there was a lot of fragmentation in the ecosystem, and it was initially assumed that maybe all of that could be solved by direct transpilation, right? But so much more power and thought ended up going into delivering the compiler. And I'm very much looking forward to the productization of the language and the compiler itself as well.

Going forward towards the closing, I guess, forward-looking insights: I have a slight pivot myself, actually, for John. I wanted to ask a slightly, I guess, open-ended question to you, and I think you might actually enjoy it quite a lot. When we look into this notion of layers, right, we have layer two on Ethereum, Algorand is a layer one, but it's always important to look back and realize: well, what's layer zero, right? Layer zero is our internet infrastructure, and none of that

would be possible without that. And obviously, when we look into the internet, we have organizations like ICANN, we have the W3C consortium, right, the folks who develop basically the most fundamental building blocks, you could call them like atoms in physics, right, that essentially allow this massive global infrastructure; internet access, for example, is already being considered a

fundamental human right; it's so embedded into human society and pushes us so much further. So when it comes to the W3C, they are also very interested and keen on decentralization, but they have a slightly different and more conservative look at it. And you can actually understand their point. For example, the creator of the World Wide Web, Tim Berners-Lee, started, I believe, at MIT with this project called Solid, and then later on ventured

off and created this company called Inrupt. And that's a lot more focused on linked data and the semantic web. And it's yet, I would say, another angle of looking at how you can decentralize communication. And he, by the way, is a big proponent of going against the term Web3, and again, you can also understand his reasons, right? Because I would say Web3 was largely sort of borrowed, in some sense, by people in the blockchain community,

while the guy is actually standing at the roots of the original notion of it. But basically, his premise with things like Solid, for example, is the introduction of new core, sort of, W3C RFCs that define new protocols, built upon established web protocols, that allow you to

further decouple data ownership from the applications. So that instead of, say, having Facebook storing all your images and Twitter storing all of your images, you say: okay, here's my folder with images; Facebook, you can access 10 images from here; Twitter, you can access 15 images from here; you can't really do anything else with that, you can't mine information from them or run them through AI algorithms for ads or

whatever. And so his whole fixation right now is on decoupling ownership of data, which is, I think, also a very interesting way of thinking about decentralization. And his main argument against blockchains is, and I'm quoting this from CNBC, sorry, just a second: blockchain protocols may be good for some things, but they're not good for Solid, the web decentralization project led by Tim Berners-Lee. He has said they're too slow,

too expensive, and too public. Personal data stores have to be fast, cheap, and private. And I obviously don't want to sort of get into the argument here, but there's a fair bit of generalization, right, and it's really hard to keep up with the things that are going on in the blockchain industry. And to that extent, obviously, you can take big protocols that are slow and

expensive and still use proof of work, it's 2024. But at the same time, when we look at something like Algorand, it ticks, I think, a lot of the boxes in what he considers an important argument

for prerequisites for having something like that done in a properly decentralized manner. What are your opinions on the ways the blockchain industry in general is going to increase its interoperability with the more conservative and, I guess, more core layer-zero sort of W3C RFCs and specs that they're introducing, and that they consider true decentralization? Sure. Yeah, so this is a huge topic. And so I'll try to be like a 30-second answer on it. But like,

I also care deeply about this. And my eyes were opened at CfC. I was on a panel with Yuval from Canton, and he was telling me they built Daml, right, this kind of markup language; it's actually like an abstract syntax tree for financial instruments. And you can model anything from a structured product to a butterfly to a whole bunch of exotic things, like bonds, knock-ins, knock-outs, all in this kind of tree. And so he made the point to me, and

I think he's absolutely right: interoperability really only gets there when an asset, when a bond represented on Algorand can be transferred to Ethereum without wrapping, without a translation; where we have a standard base-layer representation of financial instruments, where we have a standard base-layer representation of NFTs. Of course, on Ethereum it's ERC-721, ERC-20; on Algorand

it's ARC-this, ARC-that, ARC-19, whatever. Like, that stuff is fine. And you're representing the same abstraction, or you have an abstraction representing the same kind of fundamental concept, but it's not portable. And so I do think that we need like IEEE, ISO-level kind of standards, the same way all Wi-Fi access points use the same radio frequencies, and, like, you know, QUIC or HTTPS work the same, it doesn't matter what

browser you're using, they all talk the same way. And so that's where I'd love to see it. I think things like the DeRec Alliance, like, you know, non-partisan, non-biased technologies like that, that Hedera and Algorand are doing together for key recovery, that's a first step into this kind of approach. But I would love to see leaders, or representatives, or stewards from the various projects all have a place they can come to to set standards globally. Because sure, Ethereum's the

leader in a lot of this stuff, and most of the other chains have kind of taken a derivative of what they've done. But I would just love to see it more standardized, because that's where we get real interoperability. And by the way, congratulations on the great cooperation initiative with Hedera on DeRec, which I think addresses one of the most fundamental issues in terms of

accessibility. Alessandro, is there anything, I guess as a set of closing thoughts, you can share in regards to how AlgoKit is basically being designed to accommodate future advancement? Because, right, what we're doing here is obviously targeting the user feedback and the needs and demands in the ecosystem. But at the same time, there's a lot of cool things happening on other chains, right? Everyone is working really hard in this particular

industry, and there's a lot of innovation happening. What would you say is the mechanism by which AlgoKit is essentially trying to be designed such that it can easily accommodate shifts like that? Yeah, we've touched on it a bit throughout the entirety of this episode: the modularity aspect of AlgoKit. Whatever we ship, we try to make it as modular and self-standing as possible. Case in point, the Puya compiler:

the only certain thing at the moment in Algorand is the AVM. So languages come and go. We have Python for now, and we'll add another language next year. But then if, all of a sudden, Zig, a brand new language, gets into the mainstream and everyone wants to develop in Zig, they'll have our integrations, the commit histories and the documentation that we'll produce throughout the year on how to port a language to the Puya compiler; they'll have that as a guiding

principle. So the Puyah compiler stays there, but everything around it could change. And the same thing goes for Algor kit. At the moment, we are as a front end, we are championing react, because this looks like the industry standard. But client generators, which is the core thing that we have done. Those can be can be used everywhere with JavaScript front ends, or even with Python or JavaScript back ends. This is the key thing, we are building solid foundations.

So that then everything that sits on top of it can change fast and can change easily. So this is, this is why with this year that we are not only building in open source, like we've always did, but we are building in the open. So all our developers and all our community members can see our whole process, how we go through tasks, scope them out, the ADR is behind them. They can see why and how we take certain decisions, and they can be a part of it,

because we are only as good as all the information that we can have. So the more information we have, the better and the better choices we can make. So I'm looking forward to a year of building in the open, building with the community, even more than ever, and building things that are not opinionated and can therefore serve and cater to the community as long as possible.
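To make the client-generator idea concrete, here is a minimal, hypothetical sketch of what a "generated typed client" looks like in principle. This is not actual AlgoKit generator output; all names here (`raw_app_call`, `HelloWorldClient`) are invented for illustration, and a real client would submit transactions to the Algorand network rather than compute a result locally.

```python
from dataclasses import dataclass

# Hypothetical raw layer: an untyped app call that takes an ABI-style
# method signature and a loosely typed argument list. This stands in
# for a real SDK call; the names are invented for illustration only.
def raw_app_call(app_id: int, method_sig: str, args: list) -> str:
    if method_sig == "hello(string)string":
        return f"Hello, {args[0]}"
    raise ValueError(f"unknown method: {method_sig}")

@dataclass
class HelloWorldClient:
    """Sketch of a 'generated' typed client: one native method per
    contract method, so callers get type checking and autocomplete
    instead of hand-assembling method signatures and argument lists."""
    app_id: int

    def hello(self, name: str) -> str:
        return raw_app_call(self.app_id, "hello(string)string", [name])

client = HelloWorldClient(app_id=1234)
print(client.hello("Algorand"))  # prints: Hello, Algorand
```

Because the generated surface is just ordinary classes and methods, the same approach carries over to Python or TypeScript alike, which is what lets the generated clients be reused across front ends and back ends as Alessandro describes.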

I can't agree with you more on that point. Historically, this podcast would always close by asking the guests for their opinions on, and advice for, aspiring software engineers, but I think there have been 19 pieces of advice so far in the previous episodes. So this time, given the very special format, I wanted the final question to Rob to be a bit more, I guess, targeted towards a different audience. And in this case, I'm talking about the many, I would say, small-to-medium-size tech startups and businesses out there, not necessarily within blockchain, right? They are all looking for ways to solve unique problems, and sometimes those unique problems are solved with very unique distributed systems and technologies. So, assuming them as a target audience, from their startup perspective: what technological trends should businesses be prepared to leverage and adopt if they're considering solving a problem with blockchain as the main tool? And how do you think

AlgoKit, and Algorand in a bigger scope, can facilitate this?

Yeah, so I do spend a lot of time talking to startups and founders; naturally, that's, you know, the key kind of audience that we work with. And what I'd actually say, maybe controversially, is that if the starting point is "how can I use this tech", they're probably not starting at the right point, because I generally say to people: don't start with the tech, start with the business problem that you're solving, and what value means for that business problem. And, you know, as part of that, what is going to be a sustainable business model? Because it's a hyper-competitive world, and if you don't have a sustainable business model, then you're going to go out of business. There's only so many funding rounds that you can do with VCs, etc., right? So I guess step one, from my perspective, is: once you have this deep understanding of your business model and what value means, that's when you can start looking at what technology can potentially solve that.

And then that's the point where you can say: great, now that we know what you're trying to solve, are there unique ways we can use technology, be it things like AI, which is obviously very big at the moment, or blockchain, or IoT, or whatever; there are so many technologies out there, potentially a combination of them, that can help solve business problems and create business models that otherwise wouldn't be possible, or make them more efficient, or cheaper, or more secure, or whatever it is, right? And what I would then say is: great, once you've figured that out, now let's figure out if blockchain is a good solution. And for that, the mental model I generally take is that there are certain architectural properties of blockchains, particularly one that is, as John has so eloquently put it so many times, fit for purpose like Algorand, that help you solve problems in a better or more efficient manner. And there are three categories of architectural properties: trust, security, and tradability. For trust, we're talking about business problems where immutability, or public inspectability, or transparency are of value, right, to that business interaction. From a security perspective, it might be things like multi-party atomicity of some sort of transaction, which doesn't have to be a financial transaction; it could be any kind of business transaction or exchange of information or data, or things like cryptographic verifiability, where you're trying to prove that something happened at some point, and in particular that that proof is tamper-proof. There are obviously a number of interesting privacy things that you can do, naturally, where you're not storing the actual data that needs to be private on a public blockchain. And certainly, if you want an enforced auditable history from a security perspective, that's really easy to do with blockchain. And then from a tradability perspective, the

interesting thing about blockchain is that it transcends borders. So whilst you still need to be regulatory compliant, things like global tradability, proof of ownership, automation of business processes based on ownership or trading of data or informational assets, and certainly encoding process efficiency, particularly using things like smart contracts where you can encode parts of the process in a trustless manner, without needing humans to do the things they would in traditional mechanisms, these are the things that blockchain technology, and particularly a fit-for-purpose blockchain like Algorand, makes really easy compared to what it would otherwise be. And then, maybe to circle all the way back around: if you then look at something like AlgoKit in the context of Algorand, AlgoKit is something that makes it way easier, once you realize that blockchain is a great answer, to build solutions that use that technology with traditional engineering skills, without having to have a really deep understanding of this very different kind of technology ecosystem and the low-level protocol. So yeah, that's kind of my thoughts on that.

Thank you. Thank you

for the great response. I think this is definitely a list of very important considerations to keep in your head once you decide that it's indeed, you know, blockchain that is going to help you. And I absolutely agree with the point that you shouldn't start with picking the technology; you should always start with the actual problem to solve, how you can reach product-market fit, and all of those things. But exactly, yeah. On that note, I think we had a very amazing and detailed recap of a lot of different areas, news, and announcements in the Algorand world, and hopefully did a slightly deeper dive into AlgoKit. I'm looking forward to potentially having a dedicated episode on Puya, closer to the stage when it's going to be a bit more, I would assume, fit for production. But other than that, once again, thank you very much; I know how hard it is to organize something like this and make sure that everyone is comfortable and their time slot is available and things like that. So thank you very much for joining. It's been an incredible privilege and pleasure to chat with you, and I hope the listeners are going to enjoy this episode and get a lot more insight into what goes into AlgoKit.

Thank you. It's been awesome. Thanks, buddy.

Transcript source: provided by creator in RSS feed.