Howdy, this is Jim Rutt, and this is The Jim Rutt Show. Our guest today is Ben Goertzel, one of the world's leading figures in the effort to achieve artificial intelligence at the human level and beyond, what is often called artificial general intelligence or AGI, a term that Ben coined. We will likely use the term AGI a lot today when we are referring to AI at a fully human level and beyond.
In addition to being a researcher and prolific author, Ben is the leader of the OpenCog open-source AGI software framework. And he's the CEO of SingularityNet, a distributed network that lets anyone create, share, and monetize AI services at scale. Welcome, Ben. Hey, Jim. Thanks for having me.
Yeah, great to have you. Maybe we could start. Remember, our audience, while an intelligent and well-read audience, isn't necessarily an expert audience in AI. Could you tell us what AGI is and how it differs from narrow AI?
and why the emergence of AGI is so significant? Yeah, when the AI field began in the middle of the last century, the basic, informally understood goal was to create intelligence of kind of the same type that people had. Then, during the next few decades, it was discovered that it was possible to create software-hardware systems doing particular things that seemed very intelligent when people did them, but doing these things in a very different way from how people did them, and doing them in a much more, you know, narrowly defined way. So, I mean, in the 50s, it wasn't clear that, you know, it would make sense to make a program that could play chess as well as a grandmaster but didn't approach it anything like a grandmaster did, and couldn't play, you know, Scrabble or checkers at all without some reprogramming. So the existence of these narrow AIs that do particular intelligent-seeming things in narrowly defined ways, very different from how humans do them, I mean, this was really a major discovery, which is quite interesting. And it's left us in a situation now that I think of as a narrow AI revolution,
where we have an astounding variety of systems that can do particular, very intelligent-seeming things, yet they're not doing them anything like how people do them. And as part of that difference from how people do it, you know, they're not able to generalize their intelligent function beyond a very narrow class of contexts.
So I introduced the term AGI, artificial general intelligence, 15 years ago or so, with a view toward distinguishing between AIs that are very good at doing tasks that seem very narrow in the context of ordinary human life, versus AIs that are capable of achieving intelligence with at least the same generality of context that people can. Other terms like transfer learning and lifelong learning that have arisen in the AI community have closely related meanings, because, I mean, to achieve general intelligence
you need to be able to transfer knowledge from one domain to a domain that's qualitatively different, from a human view, from the domains you were programmed for or trained for. And it's important to understand humans are not the maximally generally intelligent system. I mean, from the standpoint of computability theory, Marcus Hutter's theory of universal AI sort of articulates what a
fully mathematically general intelligence would be like. We're definitely not that. If you give a human a maze to run in 275 dimensions, they'll probably do very badly. So we're not that good at generalizing beyond, for example, the dimensionality of the physical universe that we live in. So we're not maximally general by any means. We're very narrow compared to some systems you can imagine, but we're very general
compared to the AI systems that are in commercial use right now. So I think as a research goal, it's worth thinking about how do we make AIs that are at least as generally intelligent as humans, ultimately more generally intelligent. And it's an open question to what extent you can get to human-level AGI by sort of incrementally improving current-style narrow AI systems versus needing some substantially new approach to get to higher levels of AGI.
With all the various contending approaches out there, is there a rough estimate, either in the community or your own or both, on when we might expect to see human-level general intelligence? My stock answer to that in recent times has been five to 30 years from now. And I'd say in the AI community, there's a fair percent of people who agree with that, but you'll get a range of estimates ranging from five or 10 years up through
hundreds of years. And there are very few serious AI researchers who think it will never happen, but there are some who think a digital computer just can't achieve human-level general intelligence because the human brain is a quantum computer or a quantum gravity computer or something. But if you set aside the small minority of researchers who think the human brain fundamentally relies on some trans-Turing computing for its general intelligence...
Setting aside those guys for the moment, you have estimates that are, you know, 10, 30, 50, 70, 100 years, and not a lot that are 500 years. And I'd say during the last 10 years, the mean and variance of these estimates have gone way down. So I mean, 10 or 20 years ago, there was a small percent of researchers who thought AGI was 10 or 20 years off, a few thought it was 50 years off, and a lot who thought it was a couple hundred years off.
Now, from what I've seen, a substantial plurality, probably a decent majority, think it's coming in the next century. So I remain on the optimistic end. The trend has been in the direction of me rather than in the direction of the AGI pessimists. That's interesting. Our last guest on the Jim Rutt Show was Robin Hanson, and we talked a lot about AGI via upload or emulation of a human, directly scanning their neural system, their connectome, and representing that
in a computer. It seems to me that one could at least at one level divide AGI approaches into upload slash emulations and software approaches. Could you maybe speak a little bit about your thoughts on those two broadly different ways of achieving AGI? I think right now the idea of achieving AGI via
an upload or emulation of the human brain is really just an idea. I mean, it's an idea that seems to be scientifically feasible according to the known laws of physics, but there's really no one working directly on that. There are people working on supporting technologies that could eventually lead you to be able to scan the brain accurately enough that you could start seriously working on that. Right now, we just don't have the brain scanning tech to scan the mind out of a living brain.
And we don't have the reconstructive tech to take a dead brain, freeze it, slice it, scan it, and then reconstruct the dynamics of that brain. I mean, in theory, you could do that by the laws of physics, but we're way... Without speaking of time estimates, I mean, we're very far away in terms of tools and concepts from being able to do that right now.
The attempt to create AGI via software, be it very loosely brain-inspired software like current deep neural nets, or more math and cognitive science-inspired software like OpenCog. I mean the prospect of creating AGI by software. This is a subject of like really concrete research projects now, so it's certainly much more than an idea. I mean, it may be that all these projects, including mine, are wrongheaded, but at least we're directly working on the problem right now.
So, I mean, conceptually, you can divide AGI into those two approaches, but I'd say those two approaches have a very different practical status at the moment. Yeah, one point I made when I was talking with Robin is that the upload emulation approach is essentially all or nothing. Either you can upload a brain, maybe not a human brain, but a full brain, and make it work like it's supposed to,
or you can't. While in the software world, one can see, in fact, we obviously are already seeing lots of incremental benefits. And, you know, from my experience in the business world, the investment world, I generally guide people away from trying to jump up a cliff, right? Go from zero to 100 in one single project. I don't think that's quite a fair assessment, because I think the missing link for being able to mind-upload
people or simulate people in detail, I mean, the missing links are the right kind of hardware slash wetware and, on the other hand, the right kind of brain scanning equipment. And I think incremental approaches toward, you know, brain-like hardware or wetware and toward really spatially accurate brain scanning, I think those advances would lead to a lot of amazing incremental achievements. I mean, making advances in that kind of hardware or wetware
will probably yield a lot of funky, narrow machines doing robot control or perception or something. Advances in brain scanning would lead to amazing progress in understanding how the human mind works and diagnosing diseases of the brain and so on. So I think incremental progress in that direction can be valuable
for reasons other than building AI systems. And, you know, it might be valuable for building animal-level AIs too, right? Like if you want to build an artificial rodent or bug or something, then progress in scanning organisms and emulating them can be pretty interesting also. So I don't think that's a dead end by any means. It's super interesting. I just think right now that's about research on supporting technologies rather than about mind uploading per se or brain emulation per se.
Yeah, I think that you make some good points. And that suggests that we're likely to see both tracks gradually attract more and more resources as we get to it. I would say brain scanning needs a breakthrough. It needs a radical breakthrough in imaging or else a radical breakthrough in extrapolating the dynamics of the brain forward, given a static snapshot. It's not as obvious that AGI needs a radically different sort of technology than what now exists.
It might, but it's not as obvious as it is for brain emulation. I think people who aren't building the technology but who are speculating about it, they like the brain emulation idea because it's a proof of principle, right? Just like the bird is a proof of principle that a flying machine can be built out of molecules. But then, as we all know,
The best proof of principle isn't always the best way to build something. And nanotech is a good example of that. I mean, in Eric Drexler's initial books on nanotech... I mean, he was making all these pictures of gears and pulleys and nanostructures that look like big machines that we make. And that's the right approach to take if you want to make a first proof of principle that, hey, nanomachines could exist.
But now it's becoming clear the way nanotech is probably going to work is a bit more molecular biology-ish. But that's a harder way to work out details in a proof-of-principle type way, but it may be a more practical way to actually make the machines work. So I think it's not bad if your proof-of-principle system
ends up totally different than the system you actually build. But of course, also, mind uploading is interesting to us because we're humans, right? Even if that's the most awkward way to make AGI from the perspective of making the most intelligent possible systems, the fastest or the cheapest. I mean, from our particular position as humans, I would love to have,
you know, a mind upload of Friedrich Nietzsche or Philip K. Dick to play ping pong with and of myself for that matter. So I don't have to be stuck in this body forever. So there's... There's interesting value to that apart from how good an AGI approach it is. With my AGI hat on, however, yeah, I'm more drawn to more heterogeneous approaches that...
you know, leverage what we know about the mathematics of cognition and leverage the current hardware that we have, to a greater extent than a brain emulation approach can. I mean, the current hardware we have is very little like the brain unless you take a super high level of abstraction. I mean, we now understand a lot about how to do some types of intelligent activities, like theorem proving or arithmetic or database lookup, way better than the human brain does. So it's interesting to think about how to create AGI in a way that leverages this hardware that we have and this knowledge that we have, while also leveraging what we do know about how the brain works. And that leads you to a more opportunistic approach to AGI where you say, well, how can we put together the various technologies we have now,
including brainish and non-brainish technologies, to take the best stab at creating an intelligent system, and a system that can move from narrow AI++ toward general intelligence. So, Ben, you've been working on the OpenCog project for a number of years, which is an approach to AGI that's quite different from the deep learning approach we hear so much about in the media. Could you tell us about OpenCog, the history of it and what it is?
So the history of OpenCog goes back before OpenCog. I mean, it goes back to the mid-90s when I started thinking about how you could make an AI, which was a sort of agent system that was a society of mind,
as Marvin Minsky had described it, but with more of a focus on emergence. So Marvin Minsky viewed the mind, the human mind and an artificial mind as a collection of AI agents that each carried out its own particular form of intelligence, but they all... interacted with each other, much like people in the society interact with each other, where then the intelligence came out of the overall society.
I like this idea because I like self-organizing systems and I thought this sort of self-organizing complex system might be the right kind to get mind-like behaviors. But when I dug into it more, I realized Marvin Minsky didn't like emergence and he didn't like nonlinear dynamics, at least not in an AI or cognitive science context, whereas I was viewing the emergent level of dynamics.
and the emergence of overall structures in the network of agents as being equally important to the intelligence of the individual agents in the society. I tried in the late 90s to code myself a system called WebMind, which would be a bunch of agents distributed across the internet, each of which, you know, tried to do its own kind of intelligent processing, and where they all coordinated together in a way to yield emergent intelligence. We did some amazing prototyping in the company in New York,
WebMind Incorporated. But then we failed to make a success of our business in spite of decent effort, and ran out of money when the dot-com boom crashed in 2001. And I then started building a system called the Novamente Cognition Engine, much of which was eventually open-sourced into the OpenCog system. And I would say OpenCog and then SingularityNet, each of these reflects different aspects of what we were trying to do in WebMind.
WebMind was really a bunch of agents which were sort of heterogeneous, that were supposed to cooperate to form an emergently intelligent system. Now in OpenCog, we tried to control things a lot more. So we have a knowledge graph, which is a weighted, labeled hypergraph called the atom space, with particular types of nodes and links in it and particular types of values attached to some of the nodes and links, such as truth values and attention values of various sorts.
Then we have multiple AI algorithms that act on this atom space, dynamically rewriting it, and in some cases, watching what each other do and helping each other out in that rewriting process. So there's a probabilistic logic engine called PLN, probabilistic logic networks. It was described in a book from 2006 or so. There's MOSES, which is a probabilistic evolutionary program learning algorithm that can learn little atom space subnetworks representing executable programs.
There's ECAN, economic attention networks, which propagates attention values through this distributed network of nodes. And then you can use deep neural networks to recognize perceptual patterns, or patterns in other sorts of data, and then create nodes in this knowledge graph representing subnetworks or layers in the deep neural network. So you have all these different algorithms cooperating together on the same knowledge graph. And the concept of cognitive synergy,
I coined that term to refer to the process by which when one AI algorithm gets stuck or makes slow progress in its learning, then the other AI algorithms can understand something about where it's got stuck.
So, understanding something about its intermediate state and what it was trying to do, they can then intervene to help make new progress and unstick the AI algorithm that got stuck. So if a reasoning engine gets stuck in its logical inference, maybe evolutionary learning can come in to introduce some new creative ideas, or perception can introduce some sensory-level metaphors. You know, if a deep neural net gets stuck at recognizing what's in a video,
it can refer to reasoning to do some analogy inference, or it could refer to evolutionary learning to brainstorm some creative ideas. To make an OpenCog system, you need to do a lot of thinking about how these different AI algorithms can really cooperate and help each other acting concurrently on the same knowledge store. It's different than, like, a modular system where you have different modules embodying different AI
algorithms with a sort of clean API interface between the modules. We don't have a clean API interface between the OpenCog modules. The design is more that these different AI algorithms are cooperating in real time on the same dynamic knowledge graph, which is then stored in RAM. There is some resemblance to what was called a blackboard system in the 80s, a long time ago, but the blackboard is this dynamic, in-RAM, weighted, labeled hypergraph, and the AI algorithms are all uncertainty-savvy.
And, you know, they're interacting largely on the level of exchanging probabilities and probability distributions attached to different pieces of knowledge. And so, I mean, the focus on graphs and probabilities is different than the blackboard systems had way back decades ago.
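To make the atom space idea a bit more concrete, here is a minimal, purely illustrative sketch in Python. The names and structure are made up for this example and are not the actual OpenCog API: a tiny weighted, labeled hypergraph whose atoms carry truth values and attention values, and which more than one process can read and rewrite.

```python
# Minimal illustrative sketch (not the real OpenCog API): a weighted, labeled
# hypergraph ("atom space") whose atoms carry truth values and attention
# values, and which more than one AI process can read and rewrite.

from dataclasses import dataclass, field
from typing import List

@dataclass
class TruthValue:
    strength: float      # probability-like strength in [0, 1]
    confidence: float    # how much evidence backs that strength

@dataclass
class Atom:
    kind: str                        # e.g. "ConceptNode", "InheritanceLink"
    name: str = ""                   # label for nodes, empty for links
    outgoing: List["Atom"] = field(default_factory=list)  # link targets
    tv: TruthValue = field(default_factory=lambda: TruthValue(1.0, 0.0))
    sti: float = 0.0                 # short-term importance (attention value)

class AtomSpace:
    def __init__(self):
        self.atoms: List[Atom] = []

    def add(self, atom: Atom) -> Atom:
        self.atoms.append(atom)
        return atom

# Build a tiny knowledge fragment: "cat inherits from animal".
space = AtomSpace()
cat = space.add(Atom("ConceptNode", "cat"))
animal = space.add(Atom("ConceptNode", "animal"))
space.add(Atom("InheritanceLink", outgoing=[cat, animal],
               tv=TruthValue(0.9, 0.7)))

# An attention-allocation pass (ECAN-like only in spirit): boost the
# importance of atoms touched by recent processing so that other algorithms
# look at them first.
def boost_attention(atoms, amount=1.0):
    for a in atoms:
        a.sti += amount

boost_attention([cat, animal])
```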
Talking about cognitive synergy, you know, I've taken that idea from you guys and thought about it in a bit more real-time, dynamic fashion, where not only can one set of algorithms attempt to solve a problem another one is stuck on, but you can also think about bi-directional problem solving. For instance, taking from human or animal cognitive science, it certainly appears that high-level clues flow back from higher levels of the mind to the perceptual system, for instance, when the upper levels of the perceptual stack are trying to identify an object. For instance, the scene at a higher level makes more sense with certain objects than with others. And that's something I generally don't see in most current deep learning projects, but I see OpenCog being very well structured to do that kind of real-time, dynamic, bi-directional, multi-level processing.
Yeah, yeah, absolutely. I mean, the concept of cognitive synergy was always intended to be bidirectional and concurrent with multiple AI algorithms helping each other out at the same time in cycles and complex networks. And I tried to formalize the cognitive synergy notion using a mix of category theory and probability in the paper at one of the AGI conferences.
I mean, that brings you into a whole direction of mathematical abstraction. The example you mentioned certainly is an important one. And I'd say, in the neuroscience analogy, current deep learning networks for vision or audition probably model well what the human brain does in less than half a second or so. And when you take longer than that to perceive something,
it's often because you're using cognition in some form to disambiguate or interpret and bring some background knowledge to bear on interpreting the perceptual stimuli. That's something current deep neural nets don't really try to do. I mean, of course, you can attempt it in a neural net architecture by taking a neural net with a long-term declarative memory and long-term episodic memory and, like, co-training them in some way with a perceptual neural network. But that's not something being worked on
a lot, and we've been working toward doing that sort of thing in OpenCog, where you have a symbolic cognitive engine, the atom space and PLN and so forth, interacting with the deep neural net for doing perception. We've done some simple experiments in that regard, and there's been a lot of tweaking we've had to do to the OpenCog atom space framework, not so much the atom space as a representational tool, but the pattern matcher, which is a key operational tool on top of the atom space. So there are a lot of tweaks we've had to do to the pattern matcher to make it interact effectively in real time with deep neural networks. But we're mostly through that now and have been doing some interesting experiments in the direction you've suggested.
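As a toy illustration of that top-down, bottom-up interplay (this is not OpenCog's pattern matcher or any real perception network, just a hypothetical sketch), a stubbed neural classifier's bottom-up guess can be re-weighted by a symbolic scene-level prior:

```python
# Hypothetical sketch: combine a (stubbed) perception network's bottom-up
# class probabilities with a top-down, scene-level prior.

def neural_object_probs(image_patch):
    # Stand-in for a trained deep net; here it just returns fixed guesses.
    return {"dog": 0.40, "wolf": 0.35, "cat": 0.25}

# Top-down context, e.g. "this is an indoor living-room scene".
scene_prior = {"dog": 0.60, "cat": 0.35, "wolf": 0.05}

def disambiguate(image_patch, prior):
    likelihood = neural_object_probs(image_patch)
    posterior = {k: likelihood[k] * prior.get(k, 0.0) for k in likelihood}
    z = sum(posterior.values()) or 1.0
    return {k: v / z for k, v in posterior.items()}

print(disambiguate(None, scene_prior))
# The perception stub alone barely prefers "dog" over "wolf"; the scene
# context tips the interpretation decisively toward "dog".
```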
I think this can be valuable in natural language processing also, where, I mean, we have BERT, ERNIE, these various deep neural net architectures doing a great job at recognizing various complex statistical patterns in natural language texts, so that they can then, you know, emulate natural language in a way that looks realistic in the short run. Yet clearly they're not getting the overall meaning of a document or the deeper semantics of what's being said, which you can see in the meaningless aspects of their generated text over a medium scale of text length. So we've been working on combining deep neural nets for text analysis
with a more symbolic approach to extracting the meaning from text. So I think this sort of neural symbolic approach to AI is going to be very big three or four years from now because deep neural nets... in their current form are going to run out of steam. I mean, there's still more steam left.
Because real-time analysis of videos, and videos with audio, on your mobile phone's chip, that's not yet rolled out commercially, right? So there are more steps to follow kind of straightforwardly by just incrementally improving and scaling up current deep neural nets. But I think once enough sensory data processing has been milked using these deep neural nets, we're going to hit a bunch of problems that need more abstraction.
And I'm guessing that tweaking deep neural nets to do abstraction is not going to be the step forward. Now, there may be many paths forward. You could perhaps take a multiple neural net architecture where some of the neural nets have a totally different architecture than the current hierarchical deep neural networks. I mean, Google went a little bit in that direction with differentiable neural computers, and there are other papers like that.
But I'm guessing that interfacing and synergizing logic systems with neural nets is going to be a much less obscure thing in a few years. And we're already seeing more and more papers coming out on hybridizing knowledge graphs with deep neural nets. So once you've done that, I mean, a logic engine is a natural way to dynamically update a knowledge graph.
And I think you're going to see more and more powerful logic engines used on the knowledge graphs being hybridized with deep neural nets until people rediscover that predicate logic and term logic are interesting ways to manipulate knowledge graphs. And OpenCog may well get there first before others. I mean...
We're already there in terms of the design. We may well get there first in terms of having amazing results on cognitively enhanced perception before others get there by sort of incrementally adding more and more cognition to their deep neural net architectures. But whether it comes from OpenCog successfully integrating deep neural nets or not, it's going to come one way or the other, right? We're adding deep neural nets onto OpenCog and working on using them to feed knowledge into our logic engine.
But the deep neural net guys have started adding knowledge graphs onto their neural nets and I'm sure they're going to be adding more and more logic-ish operations onto their knowledge graphs. So we're going to see a convergence of these two approaches and it's going to... lead to maybe a convergence of those two into one approach, or maybe just into a whole family of parallel approaches, each of which is learning something from each other.
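For a flavor of what a logic engine dynamically updating a knowledge graph can look like, here is a toy, PLN-flavored deduction step. The combination formulas are simplified placeholders for illustration, not the actual Probabilistic Logic Networks rules:

```python
# Toy deduction over a probabilistic knowledge graph: from A->B and B->C,
# infer A->C. The strength/confidence formulas are illustrative only.

edges = {
    ("cat", "mammal"): (0.99, 0.9),    # (strength, confidence)
    ("mammal", "animal"): (0.99, 0.9),
}

def deduce(a, b, c, edges):
    s_ab, c_ab = edges[(a, b)]
    s_bc, c_bc = edges[(b, c)]
    # Naive independence-style combination, purely for illustration.
    return s_ab * s_bc, c_ab * c_bc

edges[("cat", "animal")] = deduce("cat", "mammal", "animal", edges)
print(edges[("cat", "animal")])        # roughly (0.98, 0.81)
```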
Seems to me the most likely. Let's hop back a little bit to something you mentioned in passing when talking about OpenCog's symbolic and sub-symbolic working together. You talked quite a bit about cognitively informed perception. But the other, to my mind, really big gateway problem that maybe your approaches are knocking on the door of is language understanding. When I look at
how we move from narrow AI to near-AGI, one of the biggest barriers seems to be real language understanding, whatever that actually means, right? I know you guys have done some work on that. Could you talk about your thoughts on language understanding and where your projects are, and maybe a little bit about where you think other people might be?
I think language understanding is quite critical to the AGI project. I would say probably the most critical thing that we're working on now toward AGI is meta reasoning, reasoning about reasoning.
We can come back to that. Language understanding is certainly easier to understand, and we have been working on language understanding in OpenCog for some time, and we're just now playing with hybridizing that with deep neural nets in various ways. So for syntax parsing, I mean, we're working on using a combination of symbolic pattern recognition and deep neural nets to guide the symbolic pattern recognition, for automatically learning a grammar from a large corpus of text.
And then to map grammatical parses of sentences into semantic representations of those sentences: in a framework like OpenCog, which has a native logic representation as part of its representational repertoire, this semantic interpretation task becomes a matter of learning mappings from syntax parses of sentences into logical expressions representing key aspects of the semantics of the sentences.
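As a toy sketch of that kind of mapping, with a parse format invented purely for illustration, a dependency-style parse of "the cat chased the mouse" can be turned into a predicate-logic-style expression:

```python
# Illustrative only: map a dependency-style syntax parse into a simple
# predicate-logic expression. The parse format here is invented for clarity.

parse = [("chased", "nsubj", "cat"), ("chased", "obj", "mouse")]

def parse_to_logic(parse):
    frames = {}
    for head, rel, dep in parse:       # group arguments under each verb
        frames.setdefault(head, {})[rel] = dep
    exprs = []
    for verb, args in frames.items():
        subj, obj = args.get("nsubj", "?"), args.get("obj", "?")
        exprs.append(f"{verb}({subj}, {obj})")
    return exprs

print(parse_to_logic(parse))           # ['chased(cat, mouse)']
```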
I mean, semantics is a big and rich thing. The semantics of a sentence may involve memories and episodic memory, images that are evoked. It can involve a lot of things. But I believe that a key and perhaps core aspect of the semantics of a sentence is, in essence, a logic expression or something homomorphic to a logic expression. So you can think of semantic interpretation as mapping a syntax parse into a logic expression,
plus a bunch of other things like images, episodic memories, sounds, and so on, that are sort of linked from or ornamented on this logic expression. So how do you learn the mapping of syntax parses into logic expressions? That's sort of part two of the language understanding problem, and part three then being pragmatics: how do you map the semantics into the overall broader context? Which I think is well treated as a problem of association learning and reasoning. In OpenCog, we really deal with the syntax learning task, the syntax-semantics mapping learning task, and then the pragmatics learning task in somewhat separate ways, although all using the same underlying repertoire of AI algorithms. And we've been focusing more on syntax learning recently, focusing on unsupervised language acquisition, where we're trying to make a system that automatically learns a dependency grammar,
which can then be fed into the link parser, which is a well-known grammar parser out of Carnegie Mellon University, to parse sentences. We're making decent progress there. I mean, it gets better and better each month. We're able to parse more and more sentences with greater and greater coverage. We're not yet at the level of supervised learning based grammar parsers that you feed.
a corpus of diagrammed sentences into. But one interesting thing we've been playing with is, if you start with just a little bit of supervised data, not even parsed sentences, but start with a little bit of chunk parsing information, like mapping of linked collections of words in a sentence into semantic relations, for a very small part of your corpus.
So if you start with just a little bit of partial parse information and then use that to seed your unsupervised learning, you can get much more accurate results. And I think this is interesting because, while I think unsupervised learning is a great paradigm, I also think we don't have to be total purists about it. I mean, starting with, like, whole complex parsed sentences, like in the Penn Treebank parsed sentence corpus or something, feels wrong to me. But if you can start with, like, simpler semantic or syntactic information about 10,000 sentences or something, a very small percent of the sentences you're looking at, maybe that's okay versus having a pure unsupervised approach. And that brings us into how you learn semantics, because one approach to learning semantics is to look at,
say, captioned images as a data source. Because if you have images with captions, I mean, then you can use a neural net or neural net connected to OpenCog, say, to recognize the relationships between what's in the images.
And then if you correlate that with the syntax parses of the sentences that are captions to the images, then you're connecting what the sentence is about, which is the image, with the syntax structure of the sentence, right? So suppose you have, you know, 10,000 or 100,000 captioned images and 10 million sentences without captions. Then, you know, if you take the
supervised data that comes from the captioned images, which is supervised in the sense that you have some semantics there in the images, which is correlated with those sentences, and then the unsupervised data of all the other sentences in your corpus, now, if you can make something work from that, that's also very interesting, perhaps just as interesting as pure unsupervised grammar learning, right? So we're playing with both of these approaches: pure unsupervised grammar induction, and then grammar induction where you have a small percent of your sentence corpus which has links into something non-grammatical. And that's, I mean, that's sort of how people do it, right? We have some sentences that we had a non-linguistic correlate to,
and then a lot of sentences that we don't have a non-linguistic correlate to, but we have to figure it out and do the semantic interpretation. I was going to say that we have an existence proof, the one existence proof we have for language learning, and it is a mixture of supervised and unsupervised, right? How a child learns. You know, they are babbling away at random initially, and then they get feedback. Mom smiles when you say ma, for instance, right? And as people have done more and more research, there's clearly a lot of feedback. It's an existence proof for cross-modal learning with very crude reinforcement-based supervision, which is different than either unsupervised corpus learning or supervised learning in the sense of computational linguistics, where you're fed parses of the sentences, right?
How kids learn is different than either of those. And, I mean, this really gets into the AGI preschool idea that I had proposed a long time ago. This gets into the idea that, you know, little kids are, A, just observing a huge multimodal world in an unstructured way, and, B, trying to achieve goals in particular contexts by choosing actions to achieve those goals. And they're learning linguistic along with non-linguistic action and perception patterns in the context of, you know, practical goal achievement in particular contexts, which is just different than unsupervised grammar induction or supervised grammar induction or studying captioned images or whatever, right? So I tend to take a sort of heterogeneous and opportunistic approach here. And there's a lot to be learned from unsupervised grammar induction. We've learned something from supervised grammar induction too, but I think we may have learned what there is to be learned from that.
There's a lot to be learned from studying captioned images. There's something to be learned from robots, even as crude as they are now, or game characters. All the available learning paradigms are a bit different than what, you know, little human kids are doing. And we may need to do something more like what a human baby is doing, or it may be that by piecing together these different...
more computing-friendly learning paradigms, we're going to solve the problem, right? I mean, we don't know, and there's a lot of interesting experiments to do. I think for semantics, in the end, you need to apply learning between grammar and sentences that are parsed using your grammar,
and a non-linguistic domain. And, you know, that non-linguistic domain could be logic expressions gotten from somewhere, like a parallel corpus of English and Lojban, which is a language speakable by people that has syntax that's directly parsed into predicate logic. Or it could come from correlation of English sentences and images, like in a captioned image corpus, or movies with dialogue or closed captions associated.
It could come from a robot that hears stuff in the context of the environment it's trying to act in. But, I mean, clearly, although much semantics that we use is very abstract, like if we're talking about quantum mechanics or continental philosophy, you know, the basis for the abstract semantics that we learn is earlier-stage semantics that comes from correlating linguistic productions with sensory, non-linguistic environments. So I think, you know, pattern mining across linguistic productions and non-linguistic sensory environments or situations, this is how you have to get to semantics. And then, of course, pragmatics is the same way. But I mean, there you're correlating linguistic productions with whole episodes, and then you're looking at perception and goal-based processing across the episodes. Yeah, Ben, let me jump off from that into something that just came across my desk recently, which was a paper by Jeff Clune.
He had three pillars of how to go after artificial general intelligence. The first two I thought were not that new, but the third I thought was quite interesting and perhaps relevant to this language understanding problem, which is to develop technologies for generating learning environments. Could we, for instance, somehow, miracle occurs, be able to develop language learning environments that had embedded within them pragmatics and semantics and were supported by natural language syntax, and were able to generate cases for these language learning systems to use, to use multilevel learning the way humans seem to? As you point out, neither classic unsupervised learning nor classic computational linguistics provides that multi-level problem-solving framework, which is what we know that children do. Does this seem to you a fruitful approach perhaps for language learning? I mean, it's interesting. I would say at a crude level, we were generating learning environments, I don't know, seven or eight years ago. And I'm sure other people were well before me, right? I mean, because anyone who has tried to do AGI-oriented or transfer-learning-oriented learning in the video game world, usually they don't have a lot of game world authors on their team. So they end up writing scripts to generate stuff in the game world from some probability distribution. So I think that that practice is not remotely new, but the paper that you're referencing sort of
highlighted it as a key portion of an AGI approach with greater oomph and rigor than had been done before, which is potentially valuable. I mean, I don't think we yet have the models that we would need. We don't yet have the discriminative models we would need to generate an interesting enough diversity of learning environments, but it would certainly be interesting to do. I mean, the world that we live in, you know, it has a weird and diverse collection of patterns in it that we're just not at in game worlds yet. Like, so tonight, you know, I baked cookies with my wife and my 16-month-old baby. We baked one batch of peanut butter cookies and one batch of cookies with another type of nut butter. One of them was much thicker than the other before we cooked it. And that results in a different consistency after we cooked it, right? And I mean, the baby is learning from that. And there's endless things like that in our everyday life, like different kinds of mud that you walk through, the way the beach near my house looks at high versus low tide, and how that affects the behavior of the animals and plants there.
You know, each example like this seems like a kind of irrelevant thing, but the point is we live in a world with an endless abundance of these weird little distinctions, which, however, are carefully patterned on many different levels, right?
We're just not getting that in game worlds now. These are multi-sensory. It'd be hard to get them from videos unless you had a heck of a lot of videos at many different scales in space and time. So I sort of wonder if the best attempts we had to do that in a game world now would still be too sort of rigid and limited in their variety to drive, you know, the abundance of examples that humans have in their minds, from which we draw analogies to do our abductive inference that drives our transfer learning. So as a concept, it's great. I'm not yet convinced that we have the models needed to generate diverse enough environments to really do AGI. Now, of course, generating game worlds with diverse environments is cool for prototyping and experimenting. And that's been done by a lot of people over a long period of time. But if we broaden the idea a bit, I mean, I think the idea of, you know, trying to use learning to learn more training examples to train your AI, and then the smarter AI can then use learning to create more training examples.
That's interesting. I mean, I've been thinking about that in a mathematical theorem proving context, where one issue you have there is that the number of theorems that human mathematicians have created in the history of math is not that large when compared to the requirements of deep learning algorithms and other machine learning algorithms. We don't have as many theorems as we do sentences or images, right? But yet the complexity of theorems is probably greater than that of sentences or images.
So on the other hand, just generating a bunch of random true theorems is not useful because they're boring, right? So the idea there is if you had a rule to identify what's an interesting theorem... then you can generate trillions of interesting theorems. Have your theorem prover try to prove them. It would fail a lot, but maybe you could prove billions of interesting theorems.
Then the proofs of those billions of interesting theorems are training data for machine learning to learn how to do proofs, right? So there's a similarity there to the game world idea, right? You're wanting to, you know, study what you have to generate new training data, use that new training data to train AIs, which then get smarter and which you can use, too,
to generate even better training data and lather, rinse, repeat. So I think that idea could be interesting in game worlds. It could be interesting in theorem proving and in a lot of places. It's a question of whether you have enough requisite variety in your available data to seed that initial stage of automated environment generation.
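A rough sketch of that generate, filter, prove, retrain loop follows; every component here (the candidate generator, the interestingness score, the prover) is a random placeholder standing in for what would be a real learned or engineered module:

```python
# Sketch of the "lather, rinse, repeat" loop: generate candidate theorems,
# keep the interesting ones, try to prove them, and treat successful proofs
# as new training data. All components are placeholders.

import random

def generate_candidate():
    return f"conjecture_{random.randint(0, 10**6)}"

def interestingness(stmt):
    return random.random()            # stand-in for a learned scoring model

def try_prove(stmt):
    return f"proof_of_{stmt}" if random.random() < 0.3 else None

def training_loop(rounds=3, batch=1000):
    corpus = []                       # accumulated (statement, proof) pairs
    for _ in range(rounds):
        candidates = [generate_candidate() for _ in range(batch)]
        interesting = [s for s in candidates if interestingness(s) > 0.9]
        proofs = [(s, p) for s in interesting if (p := try_prove(s))]
        corpus.extend(proofs)
        # retrain_prover(corpus)      # a smarter prover seeds the next round
    return corpus

print(len(training_loop()))
```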
Interesting. All right. That's, I think, an area well worth people studying. I've been passing Clune's paper around trying to get people interested in it. Sounds like you agree there's at least some merit to it. Let's move on to another item closely related, which, you know, at some level, dealing with robotics is a pain in the ass, right? Stuff breaks, you know, they fall over, they have to
deal with the holes in the ground, et cetera. However, when working with robotics, you get an amazingly detailed simulation for free, i.e. the universe. Could you talk a little bit about the intersection between AGI and robotics and what you think robotics brings or doesn't bring or the mixed bag of things it brings?
I would have to add that programming is also a pain in the ass. I mean, we're just choosing, pick your poison, right? None of these things are not annoying once you really dig into them. I mean, writing a simple Python script is one thing, but building OpenCog is not always that simple or entertaining either. There's a lot of kinds of pain in the ass, and it is true, the closer you get to the real world, maybe the more painful things become. But I mean, dealing with huge image corpuses and video corpuses is also highly painful; there's a lot of butthurt there too, right? So anyone who's going to work on AGI
in any aspect, he's got to have a very high pain tolerance. Otherwise, he'll do something that gives more immediate gratification and pays more money quicker, right? But robotics, I mean, the main issue isn't that tinkering with robots is a pain. The issue is that the current robots don't so easily do what you want them to do for AGI. Like, what you want is a robot that can move around the everyday human world freely, whose battery runs for at least a day on end before you have to recharge it, that is gathering, you know, multi-sensory input even if there's glare or dim light or background noise or something. I mean, basically that's doing what a little kid does: it rambles around your house, it looks and hears and sees, it grabs stuff and manipulates it and picks it up and puts it down. You may not let it run up and down the stairs, and it may not be able to pick up everything as heavy as you can, but it's manipulating and perceiving and moving a lot, right? And right now, we don't quite have that toddler robot yet. You know, if you put all the pieces together, if you put together Boston Dynamics' movement with Hanson Robotics' emotion expression and human perception, with the, you know, the arm-eye-hand coordination of iCub and the fingertip sensitivity of SynTouch, if you put together all the robot bits and pieces that exist now in various robotics projects around the world,
you would have that artificial toddler, but no one seems to be funded to do that. So we have all the pieces now, but no one has put together that toddler robot for AGI yet. To do so would be expensive, right? And to do it cheaply is also probably possible, but that involves a lot of engineering R&D, which is going to take years. To do it expensively is going to cost you at least hundreds of thousands of dollars for the first toddler. And so that's what's really a pain: you're dealing with one or another robot that's very limited in what it can do, either in, you know, manipulation or mobility or perception or battery life or something, right? And that's a bit ironic, because in theory the robot should be much more flexible than AI in a virtual world. In practice, given the limitations of each robot,
it's so limited what you can do with each robot. But I think, you know, this is like years, not decades, away from being resolved. It's mostly a matter of integrating components that exist now and bringing down the costs through scaling up manufacturing. So I think within three to eight years, let's say, robots are going to be a really useful tool for AGI, although that's only very weakly true right now. Of course, in principle, for AGI, you don't need a robot, right? I mean, in principle, you could get a superhuman supermind living on the internet. There's loads of sensors and actuators attached there. But if we want a human-like mind, for it to have a roughly human-like body is going to be pretty valuable, because so many aspects of the human mind, I mean, they're attuned to having a human-like body.
That's what we are. And that's simple things, relatively simple things, like how we do eye-hand coordination by combining movement and perception and lower-level cognition, but also, you know, little things like the narrative self and the quasi-illusion of free will and the relation between self and other. Like, all these things have to do with being agents controlling a body that feels pain and has an inside and an outside and so on. And I mean, you know, when you eat and then you shit, or you put something in your mouth, squish it and spit it out, all these things teach you something about the relationship between yourself and the world and the persistence of objects. You'd get all those lessons in a different way if you're a distributed supermind whose mind and body are the internet. But how much are you going to understand human values, culture, and psychology that way? I don't know, right? So there's one question, which is,
How important is embodiment for getting a really smart AGI? The other question is, how important is embodiment for getting an AGI that understands what humans are and empathizes with humans to a significant extent? That's interesting. I had not thought of that particular additional benefit from doing the embodied cognition. Even if it wasn't necessary to get to AGI, it might make an AGI that's much more relatable to us and us to them.
Let's move on to the next subject I want to talk about, which is your SingularityNet project. This is really interesting, the idea of building a very broad network that anybody can build AI components on. They can work with each other, et cetera. Why don't you give us a good detailed description of SingularityNet and what its status is currently and what you see going forward. Absolutely. So SingularityNet, it manifests some ideas I've had for a long time and that I was prototyping in the WebMind project in the late 90s, because WebMind was, you know, a distributed network of sort of autonomous agents cooperating together to manifest some emergent intelligence. OpenCog took some of those ideas and tried to make them sort of more orderly and structured, where you have a very carefully chosen set of AI algorithms acting in a common knowledge store,
much more carefully designed and configured to work closely and tightly together than anything in WebMind was. SingularityNet goes the other direction. It's like, let's take these different AI agents. Take a whole population of AI agents, each doing AI in their own way, and they don't necessarily need to know how each other are working internally. They can interact via APIs. If they want to share state, they can do that too, but it's optional.
And then this society of minds can have a payment system where the AIs in that society pay each other for work or get paid by external agents for work. So this society of minds is an economy of minds, and the economic aspect can be used to do, you know, assignment of credit and assessment of value within the network, which is an important aspect of cognition as well. And then, you know, this economy of minds is both: it's another approach to getting emergent AI where you have
a more loosely coupled network of AI agents than you have in OpenCog, but which can still manifest some emergent cognitive dynamics in its collective behaviors. But then also, you have potentially a viable commercial ecosystem wherein anyone can put an AI agent into this network, and the AI agent can then charge external software processes for services.
And the AIs in the network can charge each other for services. And then, you know, this becomes a marketplace. However, the infrastructure that we've chosen to implement this
was based on the idea that this agent system is a self-organizing system without a central controller, just as, I mean, the brain doesn't have a central cell, right? I mean, the brain has some aspects that are in more of a... controlling role than others, but it's a massively parallel system where each part of the brain is, you know, getting energy on its own and metabolizing and in some sense, guiding its own interactions.
We use blockchain as part of the plumbing for the SingularityNet infrastructure to enable a bunch of AI agents to interact in a way that has no central controller, but is, you know, heterogeneously controlled, and the AI agents in the network, sort of in a participatory, democratic-ish way, interact and guide the interaction of the overall network. And I think this is both a good way to architect a self-organizing agent system moving from narrow AI toward AGI, and it's a very interesting way to make a practical marketplace for AIs, which potentially can be more heterogeneous in what kinds of applications it serves, and in who gets to profit from the AI that's done, than the current mainstream AI ecosystem, which is highly centered on a few large corporations.
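Purely as a hypothetical illustration of that economy-of-minds idea (this is not the SingularityNet SDK or its actual API), here is a sketch of agents that expose priced services and get paid per call:

```python
# Hypothetical sketch of an "economy of minds": agents register priced
# services in a marketplace and are paid per call. All names are invented.

class Agent:
    def __init__(self, name, price, handler):
        self.name, self.price, self.handler = name, price, handler
        self.balance = 0.0            # tokens earned by serving requests

class Marketplace:
    def __init__(self):
        self.agents = {}

    def register(self, agent):
        self.agents[agent.name] = agent

    def call(self, caller_balance, service, payload):
        agent = self.agents[service]
        if caller_balance < agent.price:
            raise ValueError("insufficient tokens")
        agent.balance += agent.price  # pay the provider for the work
        return caller_balance - agent.price, agent.handler(payload)

market = Marketplace()
market.register(Agent("summarizer", price=2.0,
                      handler=lambda text: text[:40] + "..."))
balance, summary = market.call(10.0, "summarizer", "A long document " * 20)
print(balance, summary)
```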
Yeah, the vision's obviously just astounding on two fronts. One, this idea that anybody can play, pieces can be added, there can be explorations by AIs to find out what other pieces of AI might be synergistic to their own capabilities. Two, the last point you made, that this is an ecosystem not controlled by the big boys, is something I found very attractive when I first learned about SingularityNet, because we are in a world where it seems that deep progress in AI is being more and more concentrated into fewer and fewer hands. And SingularityNet looks like one of the relatively few countertrends to that concentration. Talk a little bit more about that and why you think it might be important. Yeah, I think the importance of a decentralized and open approach to AI is multifold, right? It's important right now because it can enable AI to do more good in the
world than is going to be possible with the centralized hegemonic approach we're seeing in the industry now. It's also important if you think that, you know, the current internetwork of narrow AIs is going to evolve into tomorrow's emergent AGI, because, I mean, if you look at what the current network of centralized AIs is doing, I'd like to summarize it as selling, spying, killing, and gambling, right? I mean, it's advertising, it's making surveillance systems, it's controlling weapons systems, and it's doing finance, financial prediction, risk management for large banks and so on. So, I mean, selling, spying, killing and gambling are part of the human condition. We're not going to stamp them out. I don't know if we want to stamp them out, but I don't want them to be as large a percent of the AI ecosystem as they are right now.
I'd rather see more, like, educating, curing diseases, doing science, helping old people, creating art, both because these are cool things to have around more on the planet now, and because if our narrow AIs are going to turn into AGIs, I'd rather have the AGIs doing these compassionate and aesthetically creative things than serving the goals of current corporations and large governments. So I think there's a near-term importance, and then there's a little more speculative importance in terms of the potential of current narrow AIs in giving rise to tomorrow's AGIs. And if you look at how the current hegemonic situation has come about, it's not really because of bad guys, you know, it's because of self-organizing socioeconomic dynamics. I mean,
The Google founders are pretty good people. They're trying to make money like all business people are. They're also genuinely trying to build AI that will do good for the world. But I mean, the way they're doing it is the way a public company has to do it as they're trying to do good for the world as a side effect of generating shareholder value. And the way they're generating shareholder value is kind of the way they have to do it. They're taking whatever worked.
And they're just doubling down on it. I mean, they're trying a lot of side projects, too. But in the end, ads are what's making the money. I mean, it's their fiduciary duty to be using their best AI for that as much as makes sense. And if you look at China, I mean, Xi Jinping is doing some things I don't agree with. On the other hand, you know, the Chinese government has lifted way more people out of poverty over the last 30 years than every other part of the world combined.
And I think the Chinese government is genuinely trying to advance the total utility of the Chinese population. And they see that AI is very helpful for this. And, you know, they think building a surveillance state is in the interest of the, you know, the utilitarian total good of the Chinese population.
We're not looking at psychopaths who are trying to develop AI to build the Terminator or something. We're looking at people developing AI in the interest of generally beneficial-minded goals. But there's these network effects involved. And that's both good and bad, right? I mean, the network effects allow a smart AI to accumulate more and more resources that can partly go toward building smarter and smarter AI.
But the network effects also mean that whoever succeeds first with narrow AIs, in particular tasks that lend themselves to rapid deployment of narrow AI for useful ends, whoever succeeds first, they continue to accumulate a hell of a lot of money, a hell of a lot of data to train the AIs, a hell of a lot of processing power to feed the AIs. And there's a powerful network effect there.
And this is what Google has been benefiting from. I mean, the Chinese government is benefiting from this. Facebook is, Microsoft is. IBM, for example, isn't so much, because they've been deploying AI in markets in ways that don't lead to these tremendous network effects. And that's caused them to fall behind in spite of having a lot of smart AI people and some AI technologies that are very good at certain things. I mean, one of the cool things with SingularityNet is
Intrinsically, at least, the logic of Singularity Net's economic model has a lot of network effects to it. It's a double-sided platform like Uber, Airbnb, or Amazon's neural net model marketplace. It's a double-sided platform in the sense that one side is the supply of AI that developers have put into the network.
Another side is the demand, which is product developers who want to use AIs in the network to fuel their products, and end users, right? And so that's something that can be very powerful if you can get it off the ground. With demand you can get more supply, and with the supply you can get more demand. If you have this going enough, it can take off really, really fast. So there's the possibility to use the same network effect trick that Google, Facebook, you know, and then Tencent, Baidu, Alibaba, and so on have used, to get this decentralized network of AIs off the ground. And, you know, it doesn't have to totally displace the big tech companies to have a huge impact. We can look at Linux as an example. I mean, Linux didn't obsolete Apple and Microsoft. They're still making money, you know, trillion-dollar companies. On the other hand, Linux is the number one mobile operating system. It's dominant in the server market.
It's big, and the open source ethos behind it has been huge. It's been hugely valuable for the developing world, for the maker and robotics community, right? So similarly, if we can get a decentralized AI network to have a major role in the way that AI is utilized in the world, I mean, then that's going to be tremendous, even if it doesn't obsolete the hegemony.
Growing that network with the double-sided network effect is certainly a practical challenge, right? And that's what we're working on with the SingularityNet platform now. One thing I'd like to pick up on in the SingularityNet model: you described how you have a potential network effect around a two-sided market. Those are indeed very, very powerful business models. Much of my business career was actually around trying to build two-sided markets, and sometimes we were successful and sometimes we weren't. Usually it meant investing in building one side of the market first. Could you talk a little bit about your go-to-market strategy for SingularityNet and how you expect to achieve a critical mass somewhere to get the two-sided market to start to cycle?
There's a couple different strategies we're applying here, but I would say if it's one side of the market first, we're essentially focusing on building the demand side of the market first and then creating the supply. internally, initially, because we have a bunch of AI developers on the Singularity Net Foundation team who are largely people I've worked with on previous projects. And so we're able to build some AI.
ourselves to put into the network. And then we are doing AI developer workshops, and we're putting some requests for AI on the platform and seeding people with tokens to put new AI into the network. So we're not totally neglecting the supply side, but the demand side is getting more of a big push, in a couple of forms. So one of those forms is
We're spinning off a for-profit company called Singularity Studio, which is a whole separate enterprise building commercial products aimed at the enterprise, on top of the SingularityNet platform. Initially, we're building a product suite aimed at fintech and the finance industry. So say you have a risk management product, which then
would be subscribed to by a large financial firm to solve a problem in, say, hedging or credit risk assessment or something. But then, on the back end, that product is getting its AI by making calls into the SingularityNet platform. So if this business succeeds in creating and selling, you know, successful products, initially to financial services and after that to other vertical markets, Internet of Things, health tech and so on, then, you know, for each product that's sold, a fraction of the licensing fees each year will be converted from fiat into AGI tokens and used to drive the AGI-token-based market in SingularityNet. So that's one thing, Singularity Studio, which we're currently pulling together, and we have some enterprise customers already for that. I mean,
a couple are publicly announced, but there are a bunch more big ones we're working with that are going to be announced in the next few months. We've also launched an accelerator called SingularityNet X-Lab, where we're recruiting projects from the community that will build software products aimed at certain niches, again using AI in the SingularityNet.
This again is focused on the demand side of the network, because they're building products on top of SingularityNet. But they also help with supply, because in most cases the AI that we've put in there is being augmented by AI that these teams put in there also. And then, for the projects in the X-Lab incubator, you know, we can give them some tokens to help them get AI services on the network, and we can help them with publicity using our PR engine, and help them with AI expertise where we can. So these are both efforts to kind of brute-force the demand side. We're also talking to some investors about scaling up this accelerator slash incubator effort, where we'd be able to put some more money into seeding projects leveraging the platform, using some investment money raised specifically for that purpose.
If we get Singularity Studio building enterprise products, and we get some large enterprises using SingularityNet through those products, and then we get some smaller entrepreneurial products from the X-Lab accelerator, then,
you know, we're getting some serious utilization of the token. And if through these efforts we get serious utilization of the AGI token and the AGI-token-based ecosystem, you know, all of a sudden we have a utility token with actual utility, which is almost unheard of in the blockchain space, right? This is going to attract a lot more attention in the blockchain community and hopefully in the AI community. And I think this will then incent more people to put their AI into the platform.
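As a rough, purely illustrative sketch of the token flow Ben describes, fiat licensing revenue converted into AGI tokens that then pay for back-end AI calls, here is some hypothetical code. None of the class names, rates, or prices come from SingularityNet's real SDK or token economics; they are placeholders invented for the example.

```python
# Hypothetical sketch of the demand-side token flow: an enterprise product
# collects licensing fees in fiat, converts a fixed fraction into AGI tokens,
# and spends those tokens on calls into the underlying AI platform.

from dataclasses import dataclass

@dataclass
class StudioProduct:
    token_price_usd: float      # assumed exchange rate, fiat per token
    conversion_rate: float      # assumed fraction of license revenue converted to tokens
    token_balance: float = 0.0

    def collect_license_fee(self, usd_amount: float) -> None:
        """Convert a fraction of a fiat license fee into platform tokens."""
        usd_for_tokens = usd_amount * self.conversion_rate
        self.token_balance += usd_for_tokens / self.token_price_usd

    def call_ai_service(self, service_name: str, price_in_tokens: float) -> bool:
        """Spend tokens on a back-end AI service call; fail if underfunded."""
        if self.token_balance < price_in_tokens:
            return False
        self.token_balance -= price_in_tokens
        print(f"paid {price_in_tokens} tokens for {service_name}")
        return True

product = StudioProduct(token_price_usd=0.05, conversion_rate=0.10)
product.collect_license_fee(100_000)                      # annual license fee from one client
product.call_ai_service("risk_model_inference", 250.0)    # hypothetical service and price
```

The point of the sketch is only the direction of the flow: demand-side products generate fiat revenue, a slice of which becomes on-platform token demand for the services supplied by AI developers.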
We'll get some usage from the developer workshops we're doing, but in the end, I mean, having demand on the platform is what
will give some extra appeal to developers, because then it's not only that they're putting their AI out there in a really cool, you know, decentralized, democratically governed platform; there's an actual market of customers who will pay them to use their AI. Certainly the financial incentive alone won't be enough for quite a while, until it's a really huge market, but
with a decent financial incentive that's meaningful, combined with the coolness value and the political appeal of what we're doing, I think we'll be able to juice up the supply side a lot. Well, this is a truly visionary project, which could change the world. There's a lot of "will it work?" Will you reach a critical mass? Does the token aspect get in the way, or does it add
value? All unknown, but it's certainly a very interesting experiment worth trying. Let's move on to another topic, which is thinking about AI and AGI in the context of complex self-organizing systems, emergence, chaos, strange attractors. I know those are things you've thought a lot about, and I've thought at least a little about. So take it away, Ben.
Yeah, I think my focus on complex nonlinear dynamics and emergence is something that sets my line of thinking apart from the mainstream of the AI world now. Like, I remember when Jeff Hawkins' book On Intelligence came out 15 years ago or something. Seems like forever, but I mean,
that book laid out the vision of AI in terms of hierarchical neural nets combining probabilistic reasoning with backpropagation. I mean, it was different from the deep neural nets that are most successful now, but conceptually it was
along very, very similar lines. And when I reviewed that book, what I said is, well, this is interesting, it's part of the story, but he's leaving out two key things. He's leaving out evolutionary learning, which we know is there in the brain in a sense, according to Edelman's neural Darwinism, modeling the brain as an evolving system.
And he's leaving out nonlinear dynamics and emergence and strange attractors, which, you know, are key in how the brain synchronizes and coordinates all its parts. Like, these hierarchical networks doing learning and probabilistic pattern recognition are there, but they're only part of the story. If you don't have evolution and autopoiesis, like self-reconstruction and self-construction based on nonlinear neurodynamical attractors, then you're really missing out on a lot.
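For readers who haven't met the term, here is a minimal, standard example of a strange attractor, the classic Lorenz system. It is not a model of the brain dynamics Ben is pointing at, just a concrete instance of the concept: two trajectories that start almost identically diverge from each other, yet both remain confined to the same bounded, structured set.

```python
# A standard strange attractor (the Lorenz system) with the classic
# parameter values, integrated with a simple Euler step.

def lorenz_step(x, y, z, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One Euler step of the Lorenz equations."""
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return x + dx * dt, y + dy * dt, z + dz * dt

# Two nearly identical starting points: their separation grows (sensitive
# dependence on initial conditions), yet both stay on the bounded attractor.
a = (1.0, 1.0, 1.0)
b = (1.0, 1.0, 1.000001)
for step in range(5000):
    a = lorenz_step(*a)
    b = lorenz_step(*b)
    if step % 1000 == 0:
        separation = sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
        print(f"step {step:5d}  separation {separation:.6f}")
```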
In the early 90s, when I was first thinking about how the mind works, I wrote a book called The Evolving Mind, where I argued there are sort of two key forces underlying intelligent systems. I mean, in philosophy terms, they come down to being and becoming. So we're back to Hegel, right?
In dynamic terms, you can think of them as evolution and autopoiesis. Autopoiesis being a term introduced by Maturana and Varela, meaning, like, self-creation, self-building, which is one particular kind of complex nonlinear dynamics that you see in biology, where a system is involved with rebuilding and reconstructing itself anew all the time.
You know, evolution creates the new from the old, and autopoiesis keeps, you know, an organism, a system, intact in a changing and mutating environment. And nonlinear dynamics are key to both of these. You could also think of this as evolution and ecology, which go hand in hand in natural systems and even in the body, where you have Edelman's neural Darwinism
explaining brain dynamics as evolution and, you know, Jerne's clonal selection theory explaining the immune system as an evolving system, along with, you know, the cell assembly theory, which Hebb came up with to explain how the brain works. I mean, that's basically an ecological theory, and a cell assembly is a sort of autopoietic system that keeps constructing and rebuilding itself. And in the immune system, the network theory of Jerne is the autopoietic, ecological aspect. So, I mean, if you leave out ecology slash autopoiesis and evolution, and you have only hierarchical pattern recognition, you're leaving out a whole lot of what makes the human mind interesting. Like, creativity is evolution, and, you know, the self and the will and all these, and
you know, the conscious focus of attention, which is binding together different parts of the mind into a perceived and practical unity. This is all about strange attractors emerging in the brain, building, you know, autopoietic systems of activity patterns. So if you're leaving out all this, like you do in modern deep learning systems, well, you're leaving out a fuck of a lot about what makes the human mind interesting, granted that you're also capturing some interesting parts. And not much study or thought is going into these aspects of the mind
right now, and this is partly because of the business models of the large companies and governments that are driving AI development. Because, you know, creativity and ecological self-reconstruction, these aren't that directly tied to easily measurable metrics that you can use to drive supervised or reinforcement learning. So for a company that's driven by their KPIs and whatnot,
that company is naturally driven toward AI algorithms that are focused on maximizing some simply formulated reward function. That's a little harder if you're talking about evolution creating new things, or an ecological system whose goal is to maintain and grow itself.
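To make the contrast concrete, here is a bare-bones sketch, with an arbitrary made-up target and parameters, of what "maximizing a simply formulated reward function" looks like in code: a tiny (1+1) evolutionary hill-climb on a fixed scalar fitness. The open-ended, ecological dynamics Ben is describing are exactly the things that don't reduce to a loop like this.

```python
# Minimal reward-maximization loop: everything in the system is organized
# around one hard-coded number (the fitness) to be pushed upward.

import random

TARGET = [1, 0, 1, 1, 0, 0, 1, 0]   # arbitrary goal string, for illustration only

def fitness(candidate):
    """Simply formulated reward: count of positions matching the target."""
    return sum(1 for c, t in zip(candidate, TARGET) if c == t)

def mutate(candidate, rate=0.1):
    """Flip each bit with a small probability."""
    return [1 - bit if random.random() < rate else bit for bit in candidate]

best = [random.randint(0, 1) for _ in TARGET]
for generation in range(200):
    child = mutate(best)
    if fitness(child) >= fitness(best):   # keep the child if it scores at least as well
        best = child
    if fitness(best) == len(TARGET):
        print(f"reached maximum fitness at generation {generation}")
        break

print("best fitness:", fitness(best), "candidate:", best)
```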
It's the money-on-money return monster. Yeah, yeah, yeah, exactly. So it ties into, like, the neo-Darwinist view of evolution as nature red in tooth and claw. I mean, this view of evolution thinks evolution is about, you know, maximizing some simplistically defined fitness function. When you look at evolution and ecology tied together in more of a nonlinear dynamical view,
you get beyond the simplistically defined fitness function, and it's a little more fuzzy to grapple with. This goes back to something I observed a long time ago. I pushed forward the idea of AGI, but actually it's a bad term from a fundamental philosophical view. Like, no one knows what intelligence means. Humans aren't really all that general.
I don't know what artificial is, really, because it's all part of nature. So I don't like the artificial, the general, or the intelligence. Really, what I'm after is self-organizing, complex, adaptive dynamical systems. I mean, that's just more of a mouthful. And it's kind of fuzzier to grapple with and conceptualize. This ties in with what my friend Weaver, David Weinbaum, called open-ended intelligence in his PhD thesis of that title from the Global Brain Institute in Brussels. I mean...
Fundamentally, if an AGI emerges out of the internet, out of the conglomeration of a bunch of AI systems, narrow AI systems, this may be an open-ended intelligence, which stretches our notion of what intelligence is, but is an incredibly complex self-organizing adaptive system, which in many ways has more generality than humans do, but isn't even about maximizing reward functions in any simplistic way, although maybe you could model some aspects of what it does,
like locally, in terms of certain reward functions. That's a very interesting question: what is that thing? Is it a mind? It may not be conscious in the sense we think of ourselves as being conscious, but it may have some attributes that are in some ways analogous to consciousness, or that give it, in part, a larger space of efficacy in the world,
which aren't necessarily congruent with consciousness, but nonetheless give it the ability to respond to feedback and find its way in the world. If you take a view like Chalmers takes in his philosophy of consciousness, what he says is everything in the universe has some spark of proto-consciousness. And I guess
he calls it proto-consciousness to try to make it more palatable to people with different perspectives on consciousness. So if everything in the universe has some spark of proto-consciousness in some form, then you would view, like, a human-like or mammal-like consciousness as something that emerges from these sparks of proto-consciousness, you know, when they're associated with a certain type of information processing system, like a system that achieves goals
in the context of controlling some localized, embodied organism, right? So if you look at it in that way, the sparks of proto-consciousness are, you know, involved in a global, distributed, complex self-organizing dynamical system across the internet, which doesn't have a central controller, even as much as our, like, hypothalamus or basal ganglia are our central controller.
In some broader sense, maybe there is a variety of consciousness associated with it. Like, there's some complex self-organizing pattern of proto-conscious sparks there, but it may not have the unity that characterizes human-like consciousness. And so what? In a way, this unity of our consciousness is there because our body is a unified system that has to control itself without dying, right?
And so that gives us some unified goals of, like, food, sex, and survival. If you have a different kind of complex self-organizing mind-ish system, which can replace its parts at will, and where the different parts are pursuing various overlapping goals in a very dynamic way, I mean, whatever self-organizing conglomeration of proto-conscious sparks exists there may be much less unified than what's associated with the human mind-brain.
And then, is that better? Is that worse? I mean, that comes down to your value system. And, yeah, the fact that our value system values this unified variety of consciousness-ish stuff more than this more diffuse but perhaps more complex variety of consciousness-ish being, is that any more profound than saying, like, we think human women look more beautiful than gorillas, right? Like, we like what we have. Exactly. I will say I'm finding it useful to not
commingle the concept of intelligence so strongly into this broader picture of mind. You know, the more I dig into consciousness, the more I appreciate John Searle. You know, I used to say, Searle and his damn Chinese room, what misleading stuff. But the more I thought about it, the more I appreciate it. Searle argues that
our consciousness, human consciousness, is very specific to the way we are organized in terms of our memories, how they couple on various time frames, how we have perhaps something like Bernard Baars' Global Workspace, and how all those things work together to produce something called consciousness. And as Searle likes to say, consciousness is a lot like digestion. You can't point to some part of the body and say, that's the digestion. Digestion is a process
that includes our teeth, our throat, our stomach, our colon, our liver, etc. And consciousness can be thought of the same way. And so, you know, Searle has been laughed at by some AI researchers when they interpret him as saying that machine consciousness is not possible. But as I've come to understand Searle's argument better, I think his argument is that something that is analogous to human consciousness in a machine
can't be the same, because the details of its design won't be the same. In the same way, in the food industry and the pharmaceutical industry, we have digesters, which are analogous to what our digestive system does, but don't do it the same way and are very, very different with respect to the details. So I'm starting to use consciousness in a narrower frame, as something that is more like what humans are. And so
I'm willing to buy that we'll at some point have things that are sort of like our consciousnesses in a machine. But that's only a small part of the much bigger mind space and the things that you were talking about. You know, what is it like to be a loosely coupled set of intelligences running across the internet, solving many problems both in serial and in parallel? It might be nothing at all like our intelligence.
We have a tendency to anthropomorphize from our consciousness to what these larger mind types, I should use the word mind rather than brain, might be. And I'm increasingly finding that attempt to expand the concept of consciousness unhelpful, actually. Yeah, I guess, you know, Weaver's concept of open-ended intelligence was created to, you know, deal with intelligence in these broader types of dynamical systems, beyond anything human-like or mammal-like.
Whether you want to extend the word intelligence to deal with these different types of dynamical systems or have a different word for it, yeah, that's the kind of issue that I never worried about much as a mathematician originally. My attitude is, like, you can define your word to mean whatever you want and then use it that way. So, I mean, whether that really is intelligence or not, or is some other broader thing, is a
terminology choice. Regarding consciousness, I think you and I don't entirely see it the same way, but vaguely close. I mean, when I wrote a paper on consciousness, I basically called it characterizing human-like consciousness, I think, because I considered that a sort of separate problem. I mean, one type of problem is understanding
the nature of consciousness in general, which is interesting. But another type of problem is the consciousness of human-like systems, where then you have to define what you mean by a human-like system. But I was thinking, you know, if you have a system whose goal is to control, you know, a mobile body in a much larger and more complex environment, and then, you know, the goals involve orchestrating actions over various timescales, I think, yeah, these requirements
sort of drive you to a narrow subset of the scope of possible cognitive architectures. And you could say then they lead you to some of the aspects of human-like consciousness, I think. Where we might differ is, I mean, I think Searle was arguing something stronger than what you're saying. And I think Searle was arguing in essence that the qualia are there.
in the human being, and the qualia wouldn't be there in the digital computer. And maybe since you don't take qualia seriously, maybe you're ignoring that aspect of Searle's argument. Whereas I do take qualia seriously, but I think that qualia are universal. I'm a panpsychist. So then the way I look at it is,
you know, the elementary qualia associated with every percept, concept, entity, Whiteheadian process, whatever, you know, these organize themselves into collective system-level qualia differently, depending on the kind of system. So the human-like species or variety of consciousness, that variety of experience, is associated with systems that are organized like a human. And it's not clear that...
At the level of description Searle was using of like what words come in and out of the box of the guy in the Chinese room, it's not clear you could distinguish what is the state of consciousness of the guy inside the room. So, I mean, his point there was sort of if you have some giant lookup table or deep neural net or whatever inside a box that's acting like a human, it doesn't have to have the same conscious state as a human. And I think that's...
That seems true to me, but that just means that, at that level of observation, the external data doesn't let you reverse-engineer the internal state, right? I mean, it's amusing that quantum mechanics would. Like, if you studied all the vibrations of the elementary particles in the universe, you could infer the state of mind of the guy inside the box, right? But from the verbal productions alone, I guess you couldn't.
But I mean, I don't know if that proves what Searle wanted it to prove, really. It just proves that the functional description at a crude level doesn't imply the internal state. But if the state of consciousness, if the qualia, are associated with the internal state and dynamics, and not just the crude functional description, then so what?
I'm not sure. But I think, as you know, we do disagree about this a little bit. And to some degree, I do expect that when we fully understand consciousness, we're going to say, so what? It's less amazing than we thought. However, I do believe it is Searlean in the sense that one would not expect consciousness from the Chinese room, because consciousness is the experience of processing
information in a specific architecture. And I strongly suspect it has to do with the couplings of our memories on various time frames. Yeah, human-like consciousness is that. But then, I mean, you're bypassing the question of whether a tree or a rock or the Internet has some form of qualia, some form of awareness or experience. A tree, I don't know about a rock, but a tree could have information flows with storage and sense of continuity, perhaps. You can't distinguish physics from information.
Every physical system can be equivalently viewed as doing information processing. So, I mean, that distinction sort of isn't there on the math level. Yeah, no, it's a much simpler kind of information processing, one could argue. Well, anyway, this has been a truly interesting conversation. Thank you very much, Ben, and I look forward to seeing the work you do going forward.
Production services and audio editing by Stanton Media Lab. Music by Tom Muller at ModernSpaceMusic.com.