Hello and welcome to episode number 18 of the Awesome Algo podcast. Today we have a very special guest, MG. He is a co-founder of a platform and a protocol called Go Plausible, previously known as Algorand Proof of Attendance Protocol. Yeah, the one that was previously named.
Yeah. And essentially we will try to do a high-level overview of the platform's architecture, its main features, what distinguishes it from the competition in the Algorand space, and perhaps some comparisons with bigger protocols that provide similar capabilities. And with that, thanks again for coming to the show, MG. The stage is yours. I would really love us to start, as is a given with most guests on the show, with your background.
And if we could perhaps start with: can you share a little bit about your early years getting into computer science? What sparked your interest in engineering and computers in the first place? First of all, hey everyone, hey Algorand, hey Algo fam, everybody. Thank you so much, Al, for inviting me on. It's a great honor for me. Very simply, I've been more than 25 years in the field of computers, in whatever role, because there have been a lot of variations.
I started with network and security and got drawn to software design and architecture. And there's a point where everyone involved at that level comes to recognize that for some tasks, when you want something done, it's better to do it yourself, at least the first version of it. So I got drawn into more serious development, learned some programming languages and got started with that.
And for around eight years I've been focusing on developing solutions myself with the help of my wife, who is actually the co-founder of GoPlausible; 15 years happily married. And since the first weeks, actually, we started co-developing with each other. She's focused on front-end solution engineering, and I'm more focused on backend, APIs, data, and anything related to smart contracts recently, because we are actually in the field of DLTs. So this is a brief history of me.
And very simply, I would like to think that I don't care about programming languages, but care more about data structures and algorithms. Once they're there, once you understand them, it doesn't matter which language you use; it's just a matter of syntax. And syntax is very much a matter of the abundance and availability of information here and there, first on the internet and now in the age of AI. It's easily there.
So syntax is not a huge problem if you get dedicated to it. So I would like to think that the most important part for anyone involved in computer science is problem solving. Once you learn the how-tos regarding those algorithms and data structures and build some kind of problem-solving mindset, I think you're set to go. I absolutely agree with your point, and thank you for this wonderful introduction.
I guess I'll spare you the question of what was your first-ever programming language, but if you want to mention it, that would be interesting as well. Yeah, you will laugh at me, but the first one actually was not a programming language, more like a scripting language.
I started with R. My first serious assignment, and it was a very heavy project, was the categorization of tons of scientific papers in order to find duplications, counterfeiting, copying, and all those sorts of things. And I just needed to do that with R somehow because it was very restricted. So I started with R and continued to learn my way onward. For a small portion of time, I used .NET for a very large banking project, one of the most serious ones I've had so far.
Actually, it was the first risk management system implemented in my country's banking system. After that, I immediately found out, because it was .NET version 2 at the time, the limitations of .NET, and migrated to Java for, I think, five or six years. And right after that, because of something, I left Java and never touched it again, and came into this new paradigm, as I told you, of floating right along and not caring about the programming language.
But to be more focused: I very much love to use Python, JavaScript, TypeScript, and Rust. These are the languages I'm most comfortable with currently. But again, it somehow doesn't make a difference. If something needs to be learned in order to provide some functionality that you cannot find in other languages, so be it. You need to learn it. There's no way around it.
I must say, though, that R is, in certain scenarios, pretty great for data visualization and especially for dealing with data transformation, like the dplyr framework, and the way they have this plugin system is very convenient for data scientists. The plugins are awesome.
In that ecosystem, you will find everything, and the coexistence of the C and C++ programming languages, the native usage of those languages, made for a vast plugin ecosystem for R that you can use for fast computation or whatever tasks you have in mind. So yeah, I totally agree. One of my favorites. If any data science enthusiast is listening to this and is interested in making some cool graphs and charts for Algorand statistics, probably check out R, RStudio, and dplyr.
A lot of things can be simplified there. And I guess maybe proceeding a bit further in terms of biography, what was your... And before I ask about Algorand, obviously, GoPlausible is built on Algorand. There's going to be an interesting set of questions we can cover there.
But before that, if we are to touch a little bit broader in terms of just the world scope of decentralized systems, fault-tolerant systems, prior to Algorand, what would you say was one of the first things that made you interested in this particular domain and what appealed to you the most about them?
Actually, my first encounter with similar technologies, more like cryptography or DLT, some sort of distributed ledger technologies, goes far back to a project I submitted to on the InnoCentive website, if you're familiar with it. It was an old website that was used to post great challenges as ideas and invite people to provide solutions, some kind of bounty system.
And there was a project called Votem, and that project was about creating a voting system based on distributed ledger technologies, cryptography, and so on and so forth. So my first encounter was with that project; I was one of the submitters, and I actually made it from the first round to the second round of the 100 selected solutions. So that was my first encounter with this domain of technology. But I didn't touch it again for years after.
Then some of my close friends, dear friends, were going to start a company working on some NFT projects on some blockchain. They ran an R&D, came up with some initial candidates, the most prominent one being Algorand, and asked me if I was interested in collaborating with them. So that was my first encounter. I ran my own R&D on Algorand, tried to read the papers here and there, whatever I could get my hands on.
And right after that, based on two initial properties of Algorand, it was love at first sight. Number one was performance and number two was composability. These first two properties of Algorand were so definitive for me that it was totally convincing right at that moment. And I just decided, because I was in the middle of a changing-domains phase in my personal life and professional life as well.
So it was instantly the best candidate that I could count on in order to create, or be part of, the future of the economy as it is getting shaped by these new technologies. So for me, it was that. And when you speak of composability, could you just expand a little bit on that? I guess, aside from maybe core properties of the consensus itself, you're probably referring to the architecture as a whole in this case. Exactly.
And the different building elements available in there. Because let me just rewind a little bit. I'm a big fan of Lego, Lego architecture and the Lego way of thinking, because since I first opened my eyes, those were the dominant toys around me. I got into them from a very, very early age, and that gave me some kind of mindset. I even use it in my professional life when I'm going to, or trying to, build something regarding architecture.
I always think of it as a Lego structure. So we start with very distinctive, very raw building blocks and try to get to that whole architecture as a structure, not as one big monolith. Algorand was actually the only blockchain that had all of these basic Lego elements, the different shapes that are needed for you to be able to create a bigger thing or bigger structure.
So it was totally suitable for me and my way of looking at things, using all of those elements: the standard assets, smart contracts, atomic transactions, everything. They just help with that because they're very raw, very basic, very essential and elemental in their own way. But they can compose with each other and create more complex structures. This is what you need. This is organic.
So whatever creates a very complex organic structure from very basic fundamental elements is what I fell for and love to work with. Awesome. Yeah, I see. So you're, I guess, in this case also referring to the L1 capabilities of the chain and the way particular features are designed in a manner that is essentially not overlapping with each other. As you said, you can use them as building blocks for something very complex.
And with the introduction of contract-to-contract calls and ABIs, it just got to a whole new level and got unleashed. It's just waiting to be explored more and more, because the systems, yes, the financial systems, the economic systems, the payment systems, everything that is getting built right now, they are not as complex as the actual traditional banking systems in place, or, for example, the industrial operations support systems such as ERPs and so on and so forth.
They are not as complex as those. They are in their infancy. The whole technology is somehow in its infancy. It needs to mature, and the way to maturity gets paved by the first building blocks, and Algorand has them all. Everything is in place. It just needs more and more complex systems to get built, and they are coming, regardless of the bearish market, which is doing us a little bit of damage and harm in terms of development and improvement.
But if there is one thing, one real thing in this world, it is change. So all we need to do is be patient and wait for the world to do as it does: change. And I guess let the developers create. Well, everyone is, I guess, waiting in these bear cycles. Yeah, we build. We need to. There is no way around it. If you are a builder or developer by nature, you cannot help it. Yes, you complain about conditions, you're uncomfortable, you may go through some hard times.
Nobody can deny that; your mood changes, everything. But the only constant thing, if you're doing it out of passion and love, is building. That will continue. You cannot help it. In your saddest moments, you find yourself behind that keyboard, looking at that code and wishing, okay, let me do it differently. It may work this time. Awesome.
And that makes me think of a few questions that we can tackle closer to the end of the episode, when we will talk a little bit about some advice for aspiring developers and engineers. But with that, I guess, let's talk about the main topic of the episode, GoPlausible. So what is, I suppose, the first story behind GoPlausible? How was the project first conceived?
If you could perhaps start with a little story behind the first days of the project, and perhaps also give a bit of insight into how you researched it, because I assume you probably had to look at some examples in the bigger ecosystems outside of Algorand, and I'm curious if you also found some interesting lessons there as well. Sure. Actually, it goes back to about a year and a half ago, something like that. And that was when I was working on some projects.
I was freelancing on some projects. And during a talk with Adriana, and separately with Johanna, they brought up a very interesting subject: Algorand events and venues, and the requirement for them to be somehow, first of all, trackable, to see, okay, what are the statistics? How many people came? How many people engaged? And also to be more engaging and more interactive.
And at the time, there was a project, Proof of Attendance Protocol, that was mainly working on their own ecosystem. But that project was the only project that was workable for these scenarios of on-chain recording of your interactive attendance or participation or something like that. So, yes, naturally and obviously, I just looked at it as a problem.
And in order to solve it, the first step was research: to see what is available, what are the other opinions and comments regarding it, and how other people are trying to solve it. But the fact I found was that solutions differ.
Some solutions are short-term solutions, more like a patch or band-aid solution. A very simple example: when you cut your finger, you just use a very simple band-aid or a napkin to stop the bleeding, and everything else takes place naturally. So no more continuous solution is needed for this problem; problem solved very easily within a minute or so.
But for some more serious problems, you need a routine, you need some methodology to guarantee that what you provide as a solution is, first, future-proof, and then guaranteed to be continuous: continuously able to grow, to extend, to expand, and so on and so forth. So first of all, that was the only project that was active in a domain similar to the one our problem was in. So the competition scope, when we refer to it later in this discussion, was completely narrow.
There was actually only one project. The other projects were located mostly in the Web2 space. So for the Web3 space, there was only one project strongly working and having a solution at hand: the POAP project by POAP Inc. So I researched them, and I found them mostly non-decentralized and mostly non-permissionless, to a very, very high extent. So I just learned from it, set it aside, and again went to the raw problem itself.
How are we going to solve it in a very dynamic way, the Lego way, if you want to call it that? So I came to the idea: okay, for anything that happens outside of the on-chain realm, we can have a replica proof on-chain, and if we could implement it dynamically, with enough dynamic configuration, we can make sure that we have a system that can generate a proof for anything and distribute it on-chain to as many wallets as possible.
So that was the idea: proof of anything. That was when the idea of proof of anything was born. But from the first step, I would like to think of myself as a very simple and classic man. I have a classic mindset and a very strict mindset about some things. One of them is respect. I always try to have that respect: respecting elders, respecting those who are pioneers, who tried some way for the first time, and so on and so forth.
So out of respect, I just said, okay, let's start with a combination of the blockchain that I'm working on, Algo for sure, and the name of the protocol which took the first step. As a totally honorary, out-of-respect move, I code-named the project AlgoPoA.
But when the project got built and got some traction and some usage, we got some friction with them and were threatened by their lawyer with a lawsuit; they sent some attachments, some PDFs, letters here and there. And we decided, okay, it isn't worth it, respect only goes to some extent, let's move beyond it. Because since the first inception of the project, on the project website there was a presentation of proof of attendance on Algorand, but the
attendance word was glitching into "anything". So the idea was there since day one, and that extra, unnecessary move we had made out of respect we corrected along the way, rebranding to GoPlausible. And here we are now, working under the name of GoPlausible as a brand. The protocol name is Plausible, and the unit of operation is called a Plaus, instead of that POAP name that they are using. So a Plaus is the operational unit for proof of anything on Algorand. I see. I see.
Yeah, I mean, sorry to hear about some misfortunes in regards to the original name. I recall that, I guess, yeah, in some sense, you could also view it as a way because since... The name was weird, I think. I've been told that the name was weird, and I kind of agree with that. Algorand POAP was kind of not... as a brand name, I totally agree with everybody who complained about it. It was a little bit weird, but again, project code names for startups, they're all weird.
They're called "night toots" or something like that. They're code names, and when you get into a phase where you need a brand for some serious business, you need to change them into serious names. Yeah, exactly. And there were also, I guess, just far too many similarly named things in the Algorand ecosystem in the beginning, so I guess this was also one of the ways to see how this could be worked out.
But on the other hand, maybe for some layman listeners out there who are not entirely familiar with what the heck we are talking about here: before we talk a little bit about some of the major features of the platform, could you showcase a very simple explanation of how exactly this platform can help in a scenario? Let's say, and I'll pick something simple.
Let's say you have a friend who is organizing a concert, let's say a concert for a nonprofit organization. He has rented a place that has 100 seats, and he goes to Google or Bing or whatever and starts typing: okay, how do I make a registration system for that? How do I prove that these people actually attended and maybe made some donations? And I feel like this is where GoPlausible can fit right in and solve this exact scenario.
But if you could explain a little bit: well, how? For whatever event or interaction, whatever physical event that happens in the real world, you can create or generate a proof. And regarding your example, when those 100 people go to that concert, to explain how it works, they can simply find a QR code in front of their seats, or one displayed on a screen or on a banner.
When they scan that QR code, each one of them can claim a proof of their attendance or participation in that event or venue. It is as simple as that. And when an event organizer wants to find out, okay, how can I do that?
When they go searching on Google, if they use the correct set of keywords, because we worked extensively on our search engine optimization process as well, if they use, for example, the words proof, distribution, and Algorand, some correct keywords, they will be led to us on the first page of Google.
And also, if they ask the chat AI agents such as GPT, very soon they will find that these agents are getting aware of this concept and can guide them: okay, GoPlausible and POAP Inc. are currently the ones that provide such a service. So it is easy to find us. And in order to provide those proofs, in order to be able to author them, you need a Web3 wallet.
And in order to claim them, to participate, or somehow claim a proof that you attended that event or that interaction, or anything else that needs a proof, you don't need to have a wallet, because since the start we made the claiming process available in a Web 2.5 way.
So whether you are just a pure Web2 user with a browser and internet connectivity, or a Web3 user with a wallet at hand, you can claim your proof by simply scanning that QR code or clicking the link that is sent to you as an attendee. I see. Yeah. So essentially, to recap here, the main use case is extremely broad and generic: anything that requires a proof, and anything that requires certification of a proof based on a certain physical event, right?
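To make the Web3 claim path just described a bit more concrete, here is a minimal TypeScript sketch using algosdk of what a QR-code claim could reduce to on chain. The app ID, asset ID, and the `claim()void` method are hypothetical placeholders, not GoPlausible's actual interface; the general pattern is an ASA opt-in grouped atomically with an application call.

```typescript
import algosdk from 'algosdk';

// Hypothetical IDs for illustration only.
const PLAUS_APP_ID = 123456789;   // the Plaus child application
const PROOF_ASA_ID = 987654321;   // the proof NFT (an Algorand Standard Asset)

async function claimProof(client: algosdk.Algodv2, attendee: algosdk.Account) {
  const sp = await client.getTransactionParams().do();
  const signer = algosdk.makeBasicAccountTransactionSigner(attendee);
  const atc = new algosdk.AtomicTransactionComposer();

  // 1. Opt in to the proof ASA (a 0-amount asset transfer to yourself).
  atc.addTransaction({
    txn: algosdk.makeAssetTransferTxnWithSuggestedParamsFromObject({
      from: attendee.addr,
      to: attendee.addr,
      assetIndex: PROOF_ASA_ID,
      amount: 0,
      suggestedParams: sp,
    }),
    signer,
  });

  // 2. Call a hypothetical claim method on the Plaus app; the app would then
  //    send the proof ASA to the attendee via an inner transaction.
  atc.addMethodCall({
    appID: PLAUS_APP_ID,
    method: algosdk.ABIMethod.fromSignature('claim()void'),
    methodArgs: [],
    appForeignAssets: [PROOF_ASA_ID], // make the ASA visible to the app call
    sender: attendee.addr,
    suggestedParams: sp,
    signer,
  });

  await atc.execute(client, 4); // both transactions land atomically as a group
}
```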
This could even be expanded to... I could also bring up a quick reference to one of the projects we are maintaining at MakerX called Data History Museum, which essentially stores records of earthquakes on the Algorand blockchain. And I assume this is also one of the use cases where perhaps GoPlausible could be helpful as a platform, to essentially offload some of this business logic and on-chain computation to an already existing protocol that is built for anything that requires proof.
And just to clarify a bit on the terminology, for people who are not familiar with what Web2.5 means: this is essentially just a term referring to an authentication or authorization flow that hides certain aspects of dealing directly with wallet providers. Not everyone is, of course, comfortable or familiar with dealing with mobile wallets and memorizing 25 words.
So there are certain businesses currently being built in the industry that provide features that allow you to hide and abstract this away. You're presented with a regular email login flow as usual, but behind the scenes it actually interacts with an actual crypto wallet. It just makes the experience a bit easier for you, because it's still...
Web3 still has a very small, I would say, exposure if you compare it with the big giants or technologies that everyone has essentially grown up with already. So that's what MG meant when he mentioned Web2.5. We are on our own way to get there, and no good thing in this world can be achieved in a very short amount of time. It needs patience and dedication, and when we pay that much patience, determination and dedication to the subject, it will grow, it will be dominant.
Yeah, exactly. But yeah, the topic of adoption of wallet providers is probably something an entire different episode could be dedicated to, because it's certainly somewhere in the middle, right? It's not there yet, absolutely not there. Until you have your parents and grandparents grabbing the phone and easily navigating through all the complexities of these current interfaces to send a transaction somewhere, similar to how they do with banking providers,
there's not going to be a lot of change. There definitely needs to be something in the middle in terms of convenience. Maybe it's because of the naked financial or monetary nature of how this technology started. This technology didn't start with "how can you send a postcard to your friend?" It started with "how can you transfer money easily from this point on the globe to the other side?"
So it started with that nature. Because as users, as humans, you just open a very simple learning app or game app or music app on your phone without any care. You open it, you explore it, you give it a try, you make some errors, you learn it, and you get savvy with it in no time.
But when that same user starts to work with something that deals with money, that bears the name of wallet or something like that, it immediately goes to totally other areas of the brain in terms of processing. It's processed in a totally different realm. You're more careful, you're very focused on what you're doing, you're trying not to make any mistakes, because your brain tells you that if you make mistakes, you lose money.
So this may be one of the biggest barriers to having blockchain technology as a day-to-day tool for everybody, for every average user out there, to go to for their daily lives and tasks and businesses and errands.
In my humble opinion, if blockchain technology and DLT technology had started with, I'm not going to say non-financial, but the other side of real-world use cases, such as, for example, tickets, fan clubs, transactions for real estate, and so on and so forth; you can count them infinitely.
If that had been as strong an aspect as the monetary or financial aspect of blockchain technology was at the start, it could have gained a lot more traction and usage among common users out there, in my humble opinion of course. But luckily, I guess, this is certainly one of the areas where the expansion is happening, mentioning things like TravelX perhaps in South America, and things like Lofty AI or Kubeye.
Yes, the museum projects, the national heritage projects, real estate projects, you name it, the PlanetWatch project, everything. These give this domain of technology a more natural, day-to-day-life sort of image, and people can communicate with it more easily and be daring about it. They can trial-and-error with it, and that will make the ground for use cases a lot more vast than it already is. Yeah, exactly.
So anyone who's saying there are not enough use cases for blockchain, well, probably it's simply because this is an area still awaiting expansion and the tech is still being built. It's just that, as you said, a lot of traction and money and investment went into this industry and grew around this notion of: let's live in a utopian society and replace everything with code, code is law, and basically let's flip the banking system.
But of course, this is not how things are going to happen in the real world, and these technologies are certainly useful in scenarios where you might not expect it. Exactly. And to continue on this, perhaps you can dive a little bit deeper into some of the, let's say, major features of the GoPlausible platform. You mentioned modularity and composability.
I wonder whether, I assume, in this case we're referring to individual features of the platform that you can use independently, or combine for, I guess, more feature-rich functionality. And yeah, after that I'd be very curious to have a quick rundown over the architecture as well, if possible. Sure. First of all, all the features are basic elements of any interaction that could happen or occur in the real world.
You have actors, you have data, the actual data that is being transmitted or communicated, and you have the metadata, which describes the whole scenario and scene for this interaction to happen. So that is what we did, and actually are still doing, because it is a continuous development, a continuous effort; it is growing, and we are not by any sense near our vision of how we envision the GoPlausible, or Plausible, protocol to be in the future. So we are building toward that point.
But the ultimate goal is that you have all of these elements as configurations, as configurable constraint controllers, within a very, very simple, user-friendly Web2-style form, where for each one of them the author decides whether to enable or disable it, and when enabled, some very simple configuration, the rules, tells the system what the boundaries of this constraint controller are. Let me give you an example.
You as an author are free, for example, to set geofencing controllers for your proof distribution, or not. When you decide to enable them, it's very simple: just enable a switch during your setup form. There's only one setup form through and through the whole platform to create a new Plaus. So it's not a complicated process, a three-step process: you sign, you set up, and you finalize. That's it.
And within that one form, through the whole application, one Web2 form, you have different options, different steps of setup. You can ignore them, because the only mandatory thing for a Plaus to be created is the title. So you can choose which other options to enable. And when you enable them, for example, when you enable the time constraint, you have the ability to specify the start time, the end time, and the time zone for that specific Plaus scenario that you are setting up right now.
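As a concrete illustration of these toggles, here is what such a setup configuration might look like as a data structure. This is a sketch of the idea only; the field names are assumptions, not GoPlausible's actual schema.

```typescript
// Hypothetical shape of a Plaus setup; illustrative only.
interface PlausConfig {
  title: string;                 // the only mandatory field, per the setup form
  timeConstraint?: {             // present only when the toggle is enabled
    start: string;               // e.g. '2023-06-01T19:00:00'
    end: string;
    timezone: string;            // e.g. 'Europe/Istanbul'
  };
  geofence?: {                   // optional geofencing constraint controller
    latitude: number;
    longitude: number;
    radiusMeters: number;
  };
}

// The concert example from earlier in the conversation:
const concertPlaus: PlausConfig = {
  title: 'Nonprofit Charity Concert',
  timeConstraint: {
    start: '2023-06-01T19:00:00',
    end: '2023-06-01T23:00:00',
    timezone: 'Europe/Istanbul',
  },
  geofence: { latitude: 41.0082, longitude: 28.9784, radiusMeters: 200 },
};
```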
And the same goes for geofencing, the same goes for enabling the page feature and lots of other features that are, very simply, a toggle switch. They can be enabled and configured in a very simple Web2 way within a form. So this is the meaning of modularity: making everything configurable by the user when you create the scenario for your proof distribution and generation. And just for the listeners out there, MG mentioned a Plaus. So a Plaus, by definition, I suppose, is two things, right?
And I'm looking at the Gitbook, which is very detailed. I think there are some sections that are going to be covered once you guys progress on the roadmap, but you can already check it out at goplausible.gitbook.io, and it's also available at goplausible.com; you can find the documentation there. So a Plaus is basically an operational unit of the Plausible protocol. And essentially, each of those is represented by a smart contract, which I assume the user is basically deploying when he creates the...
It's dynamic. Every complexity is behind the scenes. But actually, when you create a Plaus, you are dynamically creating a smart contract, which is created through contract-to-contract calls from the mother smart contract. And using ABI calls, you control it via the mother smart contract, for example, for governing operations of those contracts. And for the more...
For the interactive proof distribution features, you interact only with the created smart contract. So GoPlausible as a service provider and vendor steps out when you are interacting with that Plaus scenario, and only has a governing role when you as an author want to do something with your contract. We don't interfere with end users' interactions with their Plaus. So each Plaus on its own is an application.
You create it on the fly, you set it up, and it is there on chain to serve its ABI to the end users who want to claim their own versions of that Plaus from the chain. So it is a totally independent application on chain, dynamically created for the specific purposes reflected in your setup scenario for that specific Plaus.
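A rough sketch of how an author's client might drive that parent ("mother") contract through its ABI. The parent app ID and the method signature are hypothetical, not GoPlausible's real interface; the point is the pattern: one ABI call to the parent, which spawns the child Plaus application via an inner transaction and returns its app ID.

```typescript
import algosdk from 'algosdk';

const PARENT_APP_ID = 111111111; // hypothetical mother-contract app ID

async function createPlaus(
  client: algosdk.Algodv2,
  author: algosdk.Account,
  title: string,
): Promise<number> {
  const sp = await client.getTransactionParams().do();
  const atc = new algosdk.AtomicTransactionComposer();

  atc.addMethodCall({
    appID: PARENT_APP_ID,
    // Hypothetical ABI method: the parent creates the child app (a contract-
    // to-contract call) and returns the new application's ID.
    method: algosdk.ABIMethod.fromSignature('create_plaus(string)uint64'),
    methodArgs: [title],
    sender: author.addr,
    suggestedParams: sp,
    signer: algosdk.makeBasicAccountTransactionSigner(author),
  });

  const result = await atc.execute(client, 4);
  return Number(result.methodResults[0].returnValue); // the child Plaus app ID
}
```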
And this, I guess, also brings up a point: some users may be listening to this and wondering what the key distinction is between systems like this and, let's say, a person going to some web tool provider that offers a software-as-a-service platform and essentially just hosts their entire infrastructure on some AWS or Google Cloud server.
And it may seem easy, it may be familiar, and things like that, but the key distinction here, which MG just outlined, is that the control over the Plaus that you as a creator are issuing, with the Plaus acting as the application, is completely decentralized, in the sense that there's no governing control over that entity. If you deployed it, you own it. It's a completely different, I guess, paradigm of shipping software.
It's not like an owner of the platform can go ahead and just patch something and deploy it to the cloud provider, and then the cloud provider goes down and the entire ecosystem of things built on top goes down, or there's an exploit or a hack.
Of course there are hacks and exploits, and there are a lot of them in Web3 as well, but there's a key distinction in how they happen, because in the case of a smart contract, for example, there's a lot less mutability in this regard. Once you deploy the code, it's not like MG can just go ahead and change anything he wants in it.
A Plaus is one of the contracts that are audited, and that's also the reason why there's so much money being made in the auditing industry: people are paying a lot for extremely detailed audits of code that potentially isn't going to change for long periods of time.
There are examples in the Ethereum ecosystem where you can see protocols for decentralized exchanges, for example, being live for years basically, with things being built on top of them, but it's a different mindset of building apps essentially. Of course, I'm not going to go deep into decentralization; we've had 17 episodes on this podcast already.
Hopefully listeners are familiar with the pros and cons of a centralized architecture versus a decentralized architecture, but you can name a very good chunk of attack vectors that simply hosting on a decentralized architecture will prevent you from having to worry about. Once again, you essentially own your infrastructure completely.
I guess if we could just briefly touch on the architecture as well, for the more tech-savvy listeners out there who may be interested. And from my side, I'd be very curious to hear some feedback on the tooling you used to build these particular smart contracts. As of today, just a quick recap, in the Algorand ecosystem we have a large variety of different transpilers. Some of them are community-based.
There are still some people, I don't know how many, but there are probably still some very hardcore people who are writing pure TEAL. We have PyTeal, we have Beaker, we have AlgoKit. I'm one of those dinosaurs. I just use raw TEAL. I have a confession to make here. I cannot, even though coding Python comes very easily to me; it's a matter of the coding paradigm. I'm unable to use PyTeal. I tried.
An honest confession, and it goes for the other variations too: when it comes to stack programming, raw TEAL is the best representative of the stack programming paradigm. Your mind gets around it and comprehends it pretty easily, and it goes with the flow. Your mind operates in stack programming when you're using raw TEAL. With the others, I'm more familiar and comfortable going OOP: if I want to write complex systems and go OOP about it, yes, let's use those languages.
But raw TEAL serves me pretty well. I love it. I love it so much, to the extent that maybe it's some kind of mental projection and self-imposed disability that I cannot use PyTeal and the other high-level variations of TEAL. But I'm pretty comfortable with raw TEAL. Wow. Yeah. I suppose you've also dealt with building ABI-compatible methods in raw TEAL as well? Yes, completely. Because the whole thing works based on contract-to-contract calls, so it is somehow woven into it.
The system uses ABIs extensively, all over the place. From the inception of your contract, you're creating your new contract, you set it up and you activate it as a scenario. After that, it's you as the author and your application smart contract out there. From then on, all of your interactions are ABI calls into that contract. So it is you, as either author or claimer, and that application on chain, and you're interacting with it from the client side.
No middle servers, no hubs, nothing in between. So direct interactions, and the ABIs are needed for that as well. I guess someone needs to make a Plaus that gives a badge of additional certification to builders in the Algorand ecosystem who build complex platforms in raw TEAL, because it's certainly something of an accomplishment. Actually, I come from those backgrounds.
If you go back 25 years, many and many of the syntaxes that we used back then actually worked in this very, very simple stack-oriented programming style. So I somehow didn't find it hard. But the tooling landscape has changed massively recently, most recently with the rise of AI. It's a totally different paradigm.
I totally agree that my mindset may not match the very young developers who are starting from very, very young ages right now, because yes, you need to cope with the tool set available around you. When you come from the age of the axe and hammer, you are more comfortable with the axe and hammer. But when you advance to the age of chainsaws and drills and everything, you advance your mindset in order to be able to use them.
But it never hurts, even for those who are using the higher-level languages, tool sets, whatever. It never hurts to understand what's going on under the hood, especially if you are an engineer trying to create a system.
I'm not saying you need to be able to write complex code in raw TEAL, but being able to read TEAL and understand what is going on, without even needing to disassemble anything, just reading the whole context as: okay, it does this, it does this, it checks this and it checks that. This much is totally recommended for everyone, even if you're not using it directly. Yeah, exactly. I totally agree with you.
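For listeners who have never read a stack-oriented program, here is a toy stack machine in TypeScript that mimics the reading style being described: each TEAL-like instruction either pushes a value or pops operands and pushes a result. Purely illustrative; real TEAL has a much larger instruction set.

```typescript
// A toy evaluator for a TEAL-like stack program: 'int n' pushes, '+' and '=='
// pop two operands and push the result, exactly the flow you follow when
// reading raw TEAL top to bottom.
type Instr = { op: 'int'; value: number } | { op: '+' } | { op: '==' };

function run(program: Instr[]): number {
  const stack: number[] = [];
  for (const ins of program) {
    if (ins.op === 'int') {
      stack.push(ins.value);
    } else {
      const b = stack.pop()!; // top of stack
      const a = stack.pop()!; // next value down
      stack.push(ins.op === '+' ? a + b : a === b ? 1 : 0);
    }
  }
  return stack[stack.length - 1]; // the result is left on top of the stack
}

// Equivalent of the TEAL fragment: int 2; int 3; +; int 5; ==  -> 1 (true)
console.log(run([
  { op: 'int', value: 2 },
  { op: 'int', value: 3 },
  { op: '+' },
  { op: 'int', value: 5 },
  { op: '==' },
]));
```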
I guess this is also one of the big hurdles in general. Ethereum is often brought up as an example of that, because they had the same evolution in terms of languages. Algorand is still, in comparison, I would say, in its early ages in terms of improvements in the tooling and things like that.
It's definitely evolving very rapidly, but Ethereum also had a very similar non-structured, bytecode-like language that essentially was migrated into Solidity, and then on top of Solidity you now have more dynamic variations around it. I guess just to bring up the point: first of all, I completely agree with you. There are a lot of very prominent builders in the Algorand ecosystem who are very vocal in advocating for not hiding AVM complexity, basically.
Not to say that the AVM is complex; like any complex system, it is complex up to the point where we understand the system. Since we are talking about blockchains, and in most cases the applications being built are for the financial sector, systems built for the financial sector should never compromise on consistency. If you're not compromising on consistency, it means the system has to be extremely secure.
If it is extremely secure, it means you probably should understand the stack on which you're building, down to a fairly low level, which in the case of Algorand implies being able to at least understand the syntax of TEAL, and to understand cases where building something is better done in TEAL versus in higher-level abstractions. Because any time you go higher level, your computer has to make N hops from the code you're typing, which is structured to be convenient to understand with modern paradigms, down to something that may seem a bit overwhelming to some people. But once again, this is potentially something that can deepen your understanding of how secure the platform is and maybe help you identify attack vectors. Then another point towards raw TEAL, and maybe for systems that transpile to TEAL: the more structured your raw TEAL code is, the easier it will probably be for you.
You're also going to make the life of people who do audits a little bit easier, because a lot of those processes also involve reading the actual TEAL code. It's not just that you pay $100,000 and they look at your PyTeal code and say, oh, it's all great, now you can go ship to mainnet. They will inspect it very thoroughly. They'll look into the raw TEAL, and sometimes it also matters that the code is structured in a very readable format.
It's not just the higher-level abstractions that you build; the lower-level stuff that is transpiled or built directly has to be easily readable and maintainable in that regard. I can't speak much about the stuff on the AlgoKit side, but I'm always inspired by folks in the industry. If you're familiar with Chris Lattner, the guy behind LLVM and Swift, now he's doing this awesome thing with Modular and the Mojo language.
I think this is potentially something where the paradigm, for example like in Swift, the higher level looks like it indeed does have a lot of syntactic sugar, right? It's optimized for iOS, macOS development. But then if you're a hardcore engineer, you can always disable some guardrails and have relatively low-level access to C-like primitives and things like that.
I feel like this might be the sweet spot eventually, if Algorand matures enough to satisfy the hardcore users in the community, who insist that the AVM has to be understood and its complexity shouldn't be hidden but simplified, versus people coming from other ecosystems or Web2 who are building basically simple things and just starting to learn. That range of flexibility is potentially something that, if done right, could be a great boost overall for adoption.
But well, we'll see. The future is still ahead. So it's... Let's hope that it goes this way and proceeds as it started, gaining pace more and more each and every day. And I guess I'll just try to outline it from my understanding. I was doing some minor research on the documentation you have for GoPlausible, for people, I guess, interested in the way... First of all, you also mentioned TEAL... Oh, okay, it's the number eight, sorry.
On the page with the architecture description it says TEAL 8, and I read it as TEAL B. And I was like, wait, is that some sort of... It's the version number... It's TEAL v8, okay, okay, okay. Okay, yeah, that explains it then. So I suppose there are sort of three main primitives that you can look at if you're just strictly... and once again, we're strictly talking about the smart contract architecture. So we have the parent smart contract, right?
I guess it's some sort of registry that is essentially the starting point once you want to create your own Plaus. And I assume this is also made so that you are not wasting the minimum balance requirements in the wallets of the users, right? And so that you can also have sort of a single track record to see all of the main interactions with the parent. Because if you were to do it from the user's wallet, that would get a bit inconvenient.
The user basically in this case takes on a lot of complexity, his minimum balance rises, and things like that. So you come to the platform, you interact with the parent smart contract, and then you deploy through it. I assume it's also used for managing fees and things like that. And this is probably the only registry contract that is managed by GoPlausible as a platform, in that regard.
By managing, it means we can just update it, because, for example, we started with TEAL version 6 and upgraded to TEAL version 7 and finally TEAL version 8. So the features and functionalities added during these upgrades are reflected, mostly separately, in the child smart contracts and the parent smart contract. The parent smart contract gets updates occasionally, based on the upgrades that become available from the AVM itself.
And then we have, well, the ABI, right, the ARC4 specification; and using the ABI you have the Plaus entities, which are the smart contracts created by the parent entity. So a Plaus, once again, is something that you as a user of GoPlausible own. Basically, this is what controls the different metadata that you can assign to the Plaus, and this is what the users of your Plaus are interacting with when they are essentially doing the claims.
And the claim in this regard is represented by an Algorand Standard Asset, right, an NFT. For now, yes. For now, yes. So the hope is that with the rise of non-ASA NFTs and assets on Algorand, for example ARC72-type assets, and by supporting them in GoPlausible, we can open a whole new horizon of new opportunities and implementation scenarios.
So for now, they are Algorand Standard Assets, but it will get elevated and improved in time, and we will be enjoying lots of new opportunities with the rise of those kinds of non-ASA NFTs and tokens. Awesome. And basically, yeah, the standard asset is something you get after you do the claim, and essentially this is your digital certificate, in some sense. For people who are familiar with, for example, systems like Udemy or Coursera, you usually get the certification at the end.
They also have, in some cases, a centralized sort of record where you can see that on this date, this person received this and this. So this entity... Completely decentralized. The only difference is that here it is decentralized.
And one other thing I might add: the difference between these NFTs or ASAs or tokens being transferred and the normal NFTs, for example, that you transfer from your wallet, is the process that the claimer, the user, goes through in order to be eligible to be sent one of these certificates.
All of those processes and interactions are available on chain, and, you know, with the usability of state proofs in the future, when they are routinely used as a day-to-day technology with the AVM, we have the opportunity of creating zero-knowledge proofs for each and every one of them. And without, for example, exposing your... Personally identifiable information. Exactly.
You get a certificate from, for example, your learning or educational institution, and without even showing that NFT or the content of that NFT, you can prove that, okay, I received that NFT, by just sending a state proof and getting it verified on the other side. So this is the ultimate vision, besides going multi-chain, for GoPlausible. This is the last piece of the puzzle, which makes it the real and feasible proof-of-anything protocol.
So with that technology, we're just waiting for it to mature and get into the tooling ecosystem of Algorand. And after that, the sky will be the limit. You can do anything and you can just prove it, without exposing or disclosing anything, and that makes the perfect ground to nurture such solutions.
Exactly, and I guess we live in very crazy and unprecedented times, but that makes me hope that sometime in the future, governments are going to be a bit more privacy-aware in certain regards, and maybe zero knowledge is going to be the foundation for things like verifying your nationality or your digital identity without actually exposing a lot of personally identifiable information out there. Because privacy has always been
important and it's always going to be important, but the times are showing that there's a lot of struggle all over the world in regards to privacy versus non-privacy systems. For sure. And if you consider your daily life, you're giving out treasures of information on a daily basis, whether on your identity, your properties, everything, you name it. You're just somehow bleeding information out to those centralized services that you use on a daily basis.
So why not stop all that information flow and put the user in control, and give the user this ability: without, and I'm just mentioning a very extreme case, sending your actual biometric information to each and every service that you encounter, you can send a proof of it, that, okay, I own the biometrics of this identity, without exposing the biometrics themselves, for example by using a cold pad. And these are not fringe technologies.
They have been in use with intelligence organizations for more than a decade now. For example, the one I just named, the cold pad, is for biometrics without exposing biometrics. That has been in use, not using the same technology as zero-knowledge proofs, but using some sorts of technologies that are known to those who care about privacy and security to some critical extent.
So by making this global in terms of usage and accessible to everyone around us, these technologies have the potential to actually shape the future of any kind of operation and interaction that happens digitally. Because yes, when you have the option of not disclosing your vital or critical information, you will choose it for sure. When you have the option of closing the door to your house, you close it. You have a door, you keep it closed, you don't open it. Yeah, exactly.
And I guess, yeah, comfort is usually the big enemy of privacy. The moment privacy becomes a bit more, I guess, accessible and easier to use is when people will stop having issues with adopting these protocols and systems. But just to ask a few brief questions on the architecture as well, because I find it very interesting.
It's probably my favorite part of the episode, usually, when we have discussions specifically on the architecture and things like that. I assume that the majority of the core business logic is completely on chain. You have the front end, which, I assume, in most cases just interacts with the contracts on chain. But are there any particular features or functionality that still rely on some off-chain computation? And if yes, can you briefly touch on that?
The front end and some utility services are hosted as Cloudflare serverless modules. So they are very, very much distributed, and distributed alongside some core internet services. So they are highly improbable to go down or to be somehow canceled or stopped or something like that. Because if they go down, then, given the architecture and the majority of the nodes and services that Algorand relies on, we can be sure that Algorand would suffer as well.
Those very fundamental infrastructural services, such as, for example, the 1.1.1.1 DNS by Cloudflare: if that goes down, 40% of the internet goes down. So that would be a catastrophe, and during catastrophes, no one counts how many services go down.
But aside from those global catastrophes, which are highly improbable within the design considerations of those services: we needed this because the decentralized hosting and content delivery services are not reliable to the extent we needed for this service to be distributed. But serverless modules from Cloudflare are distributed to the edge. They are actually processed on the edge instead of in a centralized place.
So if you're living in Montana, your function, alongside your static content, is processed at and accessed from your nearest ISP in your region, Montana. It is not a call to a server halfway around the world. This single property made us use the Cloudflare services for content delivery and also for some utility functions, for example logging: logging of the requests as pure as they come from the browsers.
It is a necessary part of the story, because we need to keep an eye on some security aspects of the system, for example avoiding some attack vectors and so on and so forth. Logging, things like that. These kinds of utility services are provided as serverless functions and modules, or, as they call them on Cloudflare, Workers.
Serverless workers are the third part of the system, but they are not anywhere in the path when you as a claimer interact with the on-chain smart contracts using your front end. It is all you, your wallet and the Algorand blockchain. Only when you're creating something or interacting as an author does that logging and everything else located in serverless functions kick in, in order to keep track of everything going on in the system. I see.
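A minimal sketch of the kind of edge utility described here, written as a Cloudflare Worker in TypeScript. The route and the country check are hypothetical stand-ins for the real logging and geofencing logic.

```typescript
// Cloudflare Worker sketch: log each request as it arrives at the edge and
// apply a coarse, hypothetical geofence before serving anything.
export default {
  async fetch(request: Request): Promise<Response> {
    // request.cf is populated by the Workers runtime with edge metadata.
    const cf = (request as Request & { cf?: { country?: string; city?: string } }).cf;

    // "Logging of the requests as pure as they come from the browsers."
    console.log(JSON.stringify({
      url: request.url,
      country: cf?.country,
      city: cf?.city,
      userAgent: request.headers.get('user-agent'),
    }));

    // Hypothetical geofence: refuse a claim route outside an allowed country.
    if (new URL(request.url).pathname.startsWith('/claim') && cf?.country !== 'TR') {
      return new Response('Outside the allowed region for this Plaus', { status: 403 });
    }

    return new Response('ok'); // the static front end would be served here
  },
};
```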
Things like, I suppose, geofencing are all primarily thanks to the serverless on Cloudflare. Yes. Essentially, just, I suppose, three major parts, right? The front end; geofencing done through Cloudflare, edge compute and serverless; and the smart contracts, which are all left completely on chain, basically. The rule sets are part of a smart contract. They get stored on the smart contract, so you cannot change them.
For example, you cannot change your geofencing schema and your rules around it. But as for the implementation and the how: for obvious reasons, you cannot, you know, store a large amount of arbitrary data on chain.
So we just use these serverless workers to do the off-chain computation aspect of the operation, and keep the critical part of the information, the part that is vital to be immutable and kept forever, on chain, maintaining the balance between what needs to be processed and what needs to be stored on chain.
Exactly. And I assume, comparing this to the infrastructure you would have maintained if you were to build it on a cloud hosting provider, the hosting infrastructure fees are significantly... Actually, the fees, you know... Do you run your own node, actually? Do you guys run your own? No, no, no. Well, it is a redundant, failover structure between the node that I locally host on my DigitalOcean servers and also QuickNode.
QuickNode, and here's a big shout-out to those guys: very close collaboration and very, very good service I've experienced from them. So I'm very happy with that. But again, you always want that last connection, that last bit of connectivity, to be on your own turf. So I couldn't avoid that mindset.
And in that failover structure, my local nodes are mainly used for development and everything local, but they kick in automatically whenever, for any reason, the QuickNode service gets disconnected or has a problem. They kick in automatically, so the user will not see or face any kind of shortcoming or disconnection in the service. I see, I see.
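That failover could look something like this minimal sketch: try the primary endpoint first and fall back to the self-hosted node if the health check fails. The URLs and tokens are placeholders, not the real infrastructure.

```typescript
import algosdk from 'algosdk';

// Ordered endpoints: hosted service first, self-hosted algod as the fallback.
const endpoints = [
  { server: 'https://algod.primary-provider.example', token: '' },
  { server: 'https://algod.self-hosted.example', token: 'a'.repeat(64) },
];

async function connect(): Promise<algosdk.Algodv2> {
  for (const { server, token } of endpoints) {
    const client = new algosdk.Algodv2(token, server, 443);
    try {
      await client.status().do(); // health check; throws if unreachable
      return client;              // first healthy endpoint wins
    } catch {
      // endpoint unreachable: silently fall through to the next one
    }
  }
  throw new Error('No algod endpoint reachable');
}
```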
And perhaps to continue on, if you can also briefly touch on what were, I would say, the main challenges you had during the implementation, and perhaps any interesting ways in how you test GoPlausible? Yeah, for sure. The main challenge is, you know, what is obvious to everybody. I think a little bit of wrong timing, because when I started to contribute, I was still living in my own country, which is a little bit restricted in any sense you can name.
And also, we were on the list of embargoed countries. I'm originally Persian. But for Algorand, and in order to be able to openly and freely contribute to Algorand, I migrated to Turkey two years ago, and I've been living in Turkey since then. But I think there was a little bit of a timing mismatch. I missed the fortune train by a little bit, because now we are living in a bearish market, a very harsh time.
And actually, right now, the only problem we have is the actual fuel for this progress, which was plentifully available even a year, a year and a half ago, but now is not.
To face that challenge, the only solution we found was not to limit ourselves to one community and one ecosystem, because this is a globally usable system, and we try to put everything complex behind the scenes and just show a very familiar Web2 style of communication and usership to our daily users.
So the hope is, by going multi-chain and getting a little more specific on day-to-day usages, because right now the author kind of has to know what they're preparing the Plaus for; some kind of atomic use cases, one-click creation of a given use case, for example. Those use cases, along with going multi-chain, are our strategy and hope for getting out of this somehow restrictive, non-nurturing era that we are facing.
And for sure, the policies coming from US policymakers are not helping in these harsh times, not by much. But hopes are, when you listen to the congressional hearings, the bills being proposed and everything, the progress is on pace, and they're getting the feeling of: okay, this is not just yet another technology popping up from everywhere trying to change the world.
This is something essential to the future, and maybe we need to think twice about it, think a little bit deeper about it, rather than just give it to our consultants and ask for their opinion. It is happening. You just listen to those hearings: the senators, the Congress people, the way they talk about these things is not superficial anymore.
When they talk now, it means that, okay, they have read at least a large amount of information. A year ago, it was all superficial, just naming some acronyms and getting along with it. Now it is very good. So progress, progress is always good. And I mean, I don't think there's anything wrong with going multi-chain at all. Even Silvio himself mentioned that there are going to be N chains that will eventually endure.
Each of them will have its own unique edge cases and serve its individual functionality, and it's all about interoperability, basically. I guess Algorand's strongest points are micropayments and extremely fast finality.
You can rely on this because it scales to a large set of people, and there are a lot of unique things that could be built around the fact that finality is so short, but that doesn't necessarily mean it's the only chain that should endure and survive, or that it should capture the entire market. That's why Algorand has state proofs. Right? Exactly.
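As a rough illustration of how short finality shapes what you can build, here is a sketch using algosdk's built-in confirmation helper, assuming a connected client and an already-submitted transaction ID:

```typescript
import algosdk from "algosdk";

// Assumes `client` is a connected algosdk.Algodv2 instance and `txId` is the
// ID of a transaction that has already been submitted to the network.
async function confirm(client: algosdk.Algodv2, txId: string) {
  // With rounds of roughly 3.3 seconds, waiting up to 4 rounds still
  // resolves in seconds; this is what makes "press send and it's there"
  // user experiences feasible.
  const result = await algosdk.waitForConfirmation(client, txId, 4);
  console.log("confirmed in round", result["confirmed-round"]);
}
```

Four rounds is a generous upper bound here; in practice a transaction typically confirms within a round or two.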
It's still a very undervalued piece of tech because, well, we don't live in a future where quantum computers attack everyone every day. But the fact that it already exists, and that it's a sort of beacon of potential interoperability, says a lot about how Algorand tries to foresee events that could come in the near or not-so-distant future. You mentioned state proofs; is there something you can expand on briefly?
Actually, now that I have the opportunity to ask for something very publicly: it would be very much appreciated if this line of work and implementation got a little more acceleration, because in my humble opinion it is one of the things that can persuade, and actually push, non-Web3 policymakers and decision-makers to think twice about the usages and use cases of this technology and give it more serious thought, a more serious look and assessment.
I think the sooner we get state proofs into our daily developer and user routines, processes, and normal operations, the bigger the chance we stand of being recognized as a globally usable technology, not merely a means for certain financial systems and so on: a global tool, a global problem-solving platform and solution. This can help so much; it is one of the golden keys, in my humble opinion.
Again, many people may somewhat disagree with me, but I think this is one of our greatest vectors, one of the power levers we can count on to reach that point without so much descriptive effort, because right now we have to explain why this is good: it is immutable, it is fast, it is provable, you can keep your transaction history intact, and so on, lots of properties.
We have to describe them. But with zero-knowledge proofs implemented in our daily operations, in real-life, real-world errands, that's a landscape changer. Then it becomes clearly visible that, okay, this technology has no peer in the traditional world, and we need to employ it, give it a chance, use it. Exactly. Yeah, I absolutely agree with you. It's certainly a great vector for expansion.
After the original announcements on this, I think there's still a lot of work in terms of general applications of state proofs and perhaps a lot of room for expansion and collaboration with other ecosystems in this regard.
The biggest issue right now is simply that you need a nurturing environment for people to build on a technology, not just because the technology is great. A venture, for example, wouldn't invest in something that doesn't potentially have a large target audience, even if it promises a lot of security; it needs actual applications in this regard.
Ethereum, for example, is big, big competition to overcome, because the main sentiment that started popping up, and I guess it's common for bear markets in general, is: okay, I got N dollars invested in me over the past couple of months, and I'm about to choose a blockchain to build on. In most cases, that founder will have to prioritize a blockchain with a lot of market cap. The biggest out there is, well, Bitcoin.
You can't really do a lot of smart contracts on the native Bitcoin layer; of course there are tools that allow it, but I'm not going to get into that discussion. Then you have Ethereum, which is probably the biggest smart-contract chain, and if you build on it you get a lot of exposure to users. That's the big barrier at the moment; these days it doesn't even come down to discussions about technology.
Given market conditions, people simply have to prioritize something that can sustain profits over a period of time, and only then expand to things that are more beneficial, more efficient, more secure, and so on. Once again, it all comes down to adoption; essentially that's what a lot of people at Algorand and its affiliates are focused on at the moment. It's a period of trying and seeing. The direction is totally okay.
The environment may not be nurturing, but the direction of the community and the technology is, I think, totally okay, totally correct in where it's aiming. Yes, the road is very hard. One other thing I want to add is that we should not forget about timing, because every technology, if it's at all interesting, experiences a hype period when it starts. During that hype period, when Ethereum was hyping, there was no Algorand. They are ahead of us; that is a fact we cannot evade.
We should accept that. But when we intend to create a paradigm shift, and we actually have everything necessary in terms of means, tools, potential, talent, whatever it is, we have it, then I think it's just a matter of time. Again, it's about being patient and determined, thinking about the destination and moving toward it.
With that mindset, and with a little time passing, I think we can get there, because we just witnessed the hype era for all cryptos coming to an end. After this, if there is a hype, it will be a hype based on real use cases and real value. Yes, Bitcoin is something of an exception, because it was the first one, it pioneered the new era and so on.
But for the others, some of those token prices are not based on the real value and real impact they create in the real world; they're just hype. Yes, a token sits at a thousand dollars or something like that, but you cannot do any kind of math to justify it. Transaction-wise, user-base-wise, even a simple card payment provider in traditional banking would beat them easily in every kind of calculation: math, numbers, volume, everything.
But they experienced a hype era, and because their community felt totally comfortable with everything they had, their NFTs, their marketplaces, the DEXs, everything, they kept some of the momentum they gained during that era. For Algorand, the hype era didn't last as long, so we didn't gain as much momentum, and we are somewhat exhausting the little momentum we did gain. We didn't go beyond two dollars, if I'm correct.
So the momentum is smaller, but the potential and talent are much bigger. We stand a much better chance if we stay solid in what we do and keep going without dwelling on the hardship. Yes, we get damaged, we get hurt for sure, but don't dwell on it. I want to quote Lawrence of Arabia: yes, it does hurt; the trick is not minding that it hurts.
I think this way, by not minding that it hurts and just continuing with our passion and our building, we can get there eventually, because again, the technology has the potential. I absolutely agree with your point. I was at an Ethereum conference recently and had the experience of trying that competitor of yours, I believe it's called POAP. Yes, yes, POAP.
I think it was POAP Inc., actually. Not to bash any technology out there just because it came first, but I had a pretty bad experience: I wasn't able to claim it; there were issues with the transaction being propagated.
Up until a year ago, if I'm not wrong, you needed to email the creators with your information and ask for a POAP to be created, and they got to decide whether they wanted to do it or not and respond to your email; later they created a web form for submitting your information. So it was a very good idea, but it was limited to one use case, and it was to a large extent centralized, to a large extent not permissionless and not trustless.
So what needed to be done was to rethink the whole thing, and that process led to the creation of this proof-of-anything protocol. Why not create something much better and much more flexible? Something that, as part of one story, would cover many more use cases. Why not do that?
And once you get used to this 3.3-second finality... For example, I think they used Polygon in that case for attendance at the Ethereum conference, and I remember it took at least 15 to 20 minutes for the thing to propagate; I thought it was a bug on my phone or something. You get used to certain things that you only start appreciating once you experience other technologies.
Exactly, and then you're left wondering, shocked at what happened, because you're used to saying, okay, I press send and it's there. I was converting something using ChangeNOW; I needed some ether on Ethereum to be able to do something there. I totally forgot about the delay for a second and thought, okay, something went wrong with my transfer, when is it going to arrive? And then it reminded me: oh, it's Ethereum. Man, you should wait. Be patient.
It's not Algorand. Wait. And I'm realizing we've gone a little over time. I'm sorry if we spent too long on the architecture side, but you already answered a lot of the points regarding the future roadmap and the things being planned for interoperability, and I think it would be amazing if state proofs somehow get involved in this regard.
Certainly, the more use cases there are for this technology, the better for the success of the overall chain, not just Algorand itself. I think it's a great way for anyone to make use of the Falcon signatures, which I believe are still, to this point, the closest thing to the cutting edge of research on what quantum computers could tamper with.
And if you were to give advice on essential skills and experience for successful engagement with blockchain development, what would you tell aspiring software engineers who would like to try blockchain development in general? This is something I usually ask most guests, but if you want, we can be even more specific.
And what advice would you give to aspiring software engineers specifically aiming to build something on Algorand? First of all, in general, I'm not in a place to give advice, but I can share what I've learned, some of it the hard way. First: the day you think you're satisfied, that you have whatever you need and don't need anything more, even for a week, even for a day, that is the day you have assassinated your career as a developer or builder. So, to adapt the famous quote: stay hungry, about knowledge and about gaining knowledge on everything. Don't get settled in your own niche or domain; always try to stay up to date with other trends, other domains, other niches, even if they don't overlap or interfere with your line of business.
Keeping yourself up to date as a builder or developer with all of those trends, and knowing what's going on in different areas of tech and science, will help you greatly. And secondly, and this is the thing I learned the hard way, we are living in an era with a somewhat cruel mindset, where every developer should be advised: if you create something, make sure you get paid for it.
This is a very harsh, very straight, very frank one, and it comes from a very hard experience of mine. No matter how kind or humble you are, if you create a bit, a line of code, and somebody says, hey, it's good, your next words should be: how much are you going to pay for it? I say this with a very heavy heart, but people don't care if Beethoven plays piano in the subway.
Nobody, nobody would even turn their head to watch Beethoven or Mozart playing in a subway, because the mindset tells them: if you get it for free, it's cheap, it isn't worth anything. So be expensive; sell yourself at a high price in order to be noticed. If you want to be noticed, there's no humble way to do it, even if it means faking being expensive. I learned this the very hard way, but for young developers I stand by this advice; it's even more solid than the first one.
So make sure you take care of your financial and monetary side as well as your passion for building, because you need to keep some aspect of your inner child alive to be a good builder or developer. You need to be excited by technology, excited by the new toys and tools you find.
That aspect stays with you, but a child has no regard for money, for seizing opportunities, or for benefits; those are not priorities in a child's mind. If you give in to that part of you and just go with the flow, you will suffer for it. So keep a balance between the two: keep the child alive, but keep the adult watching over that child, constantly saying: here is what the child has created, and I'm the adult, how much are you going to pay for it?
That's amazing advice, I would say; I couldn't agree with it more. No matter how passionate you are about a particular domain of engineering, or computer science in general, that passion is only sustainable as long as your conditions can sustain it. We live in a society where a lot of things are unfortunately very materialistic, and so you have to worry about that. Well, it is what it is. It is out there.
It is the dominant mindset, and to continue my example: if you take that same Beethoven and put him in the Royal Albert Hall in London, people would happily pay $100,000 per seat in the front row to hear him play, and they wouldn't even care what his name is so much as that he is playing in the Royal Albert Hall. It is expensive, so it must be something.
Seventy percent of the people there don't have a clue what the music is, what the player's name is, what his background is, what he went through to be on that stage. They just come to hear music that's worth a $10,000 ticket. That's 70 or 80% of people out there, and it goes for VCs, investors, managers, everybody, you name it. Everybody has this mindset, so go with it. Don't get carried away with your passion too much; keep a balance between the two and you will be good to go. Awesome.
And maybe just as a very last note, you mentioned being open-minded about the latest trends and things happening in technology.
I just wanted to bring up one very interesting area of research, something currently happening mostly in the Ethereum ecosystem, I guess: zero-knowledge machine learning. If Algorand is going to explore zero-knowledge further, in terms of how things like state proofs can be applied and so on, there's a very interesting direction in how you could make things like a decentralized
Kaggle, for example: deploying and solving AI challenges without revealing how your model is built or how exactly it was trained, basically introducing a bit more privacy into these systems. But things like that can only happen if you're looking at... Sorry? And the ethical aspects of it. Of course. Currently there is no way except accessing the whole dataset and the whole process that was used to train those models.
There's no way you can check for some kind of truth, or some kind of rule, or some kind of... Citation or... ...facts in a large language model, for example.
Whereas by using zero-knowledge-proof technology, by keeping an index of the embedded or inference data that is added as training to that language model, or information model, as you could call it, you can at least have a manifest that proves: we consider these vectors, these data, as factual data, and here is the proof of it.
You can hold it and verify it without going through the whole complexity of the AI system and the large model itself. So it makes verification, ethical verification, factual verification, everything, much easier; the resulting learning models can be assessed and checked after their creation.
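As a purely illustrative sketch of the manifest idea, not a real zero-knowledge system and not any specific product's implementation, one could commit to a training-data manifest with a Merkle root, so individual records can later be shown to belong to the committed set; a production ZK-ML design would put an actual proof system on top of this:

```typescript
import { createHash } from "node:crypto";

const sha256 = (data: string): string =>
  createHash("sha256").update(data).digest("hex");

// Build a Merkle root over hashed training records.
function merkleRoot(records: string[]): string {
  if (records.length === 0) throw new Error("empty manifest");
  let level = records.map((r) => sha256(r));
  while (level.length > 1) {
    const next: string[] = [];
    for (let i = 0; i < level.length; i += 2) {
      const right = level[i + 1] ?? level[i]; // duplicate last node if odd
      next.push(sha256(level[i] + right));
    }
    level = next;
  }
  return level[0];
}

// Publish only this root (e.g. on-chain); anyone holding a record plus an
// inclusion proof can later verify it was part of the committed manifest
// without seeing the rest of the dataset.
const root = merkleRoot(["record-1", "record-2", "record-3"]);
console.log("manifest commitment:", root);
```

Publishing only the root keeps the dataset private while still anchoring the claim "these records were in the training set" to something independently verifiable.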
Yeah. So it's definitely going to be interesting to see how this area of research evolves, and I would be glad to see ways Algorand can apply its tech here, because it seems to be the one real area of synergy between AI and cryptography these days.
But on that note, once again, MG, thanks for being an amazing guest on the show. I'm looking forward to seeing new developments and improvements on the GoPlausible platform. And to everyone who stayed with us through this long conversation: thank you for listening, and stay tuned for new episodes. Thank you so much. It was a pleasure and an honor. I hope everybody enjoys it. Thank you so much, Al.