In the latest episode of the Spatial Web AI Podcast, I sat down on August 7, 2025, with Mahault Albarracin, PhD, Director of Research Strategy and Product Innovation at VERSES AI, and David Bray, PhD, Distinguished Fellow with the Stimson Center and expert in global policy and governance, for a groundbreaking discussion that pulls back the curtain on the real-world applications of Active Inference AI and the Spatial Web.
While generative AI has captured public attention, the next wave of intelligent systems is quietly emerging through Active Inference — a framework grounded in the Free Energy Principle, pioneered by world-renowned neuroscientist, Dr. Karl Friston — Chief Scientist at VERSES AI.
VERSES' most recent Active Inference breakthroughs are consistently outperforming state-of-the-art Reinforcement Learning/Deep Learning flagship models, with their AXIOM model decisively beating Google DeepMind's DreamerV3, and their latest research paper spotlighting an entirely new Active Inference architecture for robotics that demonstrates superior performance on Meta's Habitat challenge. (See results charts at the end of this article.)
Until recently, much of the conversation around Active Inference and the Spatial Web has centered on theory and high-level vision. But in this episode, Mahault gives us a rare, behind-the-scenes look at the specific use cases that the VERSES research team is working on right now. For anyone wondering how these technologies manifest in practice, this conversation is one you won’t want to miss.
YouTube: https://youtu.be/YYyca7dzYoA
Spotify: https://open.spotify.com/episode/3DV2jXLHgdhYstofsMkNRp
Mahault highlights how the research team is applying Active Inference and the Free Energy Principle to real-world challenges. From robotics control systems that can learn and adapt in unfamiliar environments, to intelligent logistics networks capable of predicting and responding in real-time, these examples showcase how VERSES is engineering intelligence that reasons, plans, and acts in context.
Most discussions of AI today revolve around large language models. While powerful, they lack the adaptive, real-time causal reasoning required for the next generation of trusted intelligent systems. Active Inference AI changes this equation entirely.
By grounding AI in the principles of self-organizing systems, VERSES is creating intelligent agents that can learn and adapt in unfamiliar environments, reason about cause and effect in real time, and plan and act in context.
These aren’t abstract concepts anymore. They’re being built, tested, and deployed today.
David Bray adds an important perspective by exploring the societal and global policy implications of this technology. As Active Inference AI scales, it will impact governance, commerce, and the ways we interact with digital and physical spaces.
This episode captures an exciting moment: the bridge between research and reality. For leaders, innovators, and anyone tracking the future of AI, this is a chance to see what’s emerging directly from the VERSES research team — and to understand why these use cases signal the dawn of a new intelligence layer for the internet.
If you’ve been tracking the limitations of current AI — its energy demands, lack of causal reasoning, and inability to generalize — you’ll want to tune into our conversation. Active Inference AI represents a fundamental shift, one that’s looking to reshape how we design intelligent systems for society’s greatest challenges.
The future of AI is not just about faster models — it’s about intelligent, adaptive systems that can reason, plan, collaborate, and co-regulate. This conversation with Mahault Albarracin and David Bray will help you understand why Active Inference AI is the breakthrough we’ve been waiting for, and its potential to transform the future of society.
Ready to learn more? JOIN US at AIX Learning Lab Central where you will find a series of executive trainings and the only advanced certification program available in this field. You'll also discover an amazing education repository known as the Resource Locker — an exhaustive collection of the latest research papers, articles, video interviews, and more, all focused on Active Inference AI and Spatial Web Technologies.
Membership is FREE, and if you join now, you'll receive a special welcome code for 30% off all courses and certifications.
The FREE global education hub where our community thrives!
Scale the learning experience beyond content and cut out the noise in our hyper-focused engaging environment to innovate with others around the world.
Become a paid member, and join us every month for Learning Lab LIVE!
Speaker 2 – 00:13 Hi everyone. Welcome back to the Spatial Web AI Podcast where we explore the front lines of the Internet's next major evolution and the adaptive intelligence systems shaping our future. Today's conversation takes us deep into the near future of AI, one that, unlike today's most popular AI systems, is explainable, sustainable and fundamentally human centered. We're joined today by two brilliant minds. Dr. Mahault Albarracin from VERSES AI. She's the head of Research, Strategy and Product Innovation at VERSES and she's also an expert in active inference AI. And Dr. David Bray, who is the Chair of the Accelerator and Distinguished Fellow at the Stimson Center. He is CEO and Principal of Lead Do Adapt Ventures. Business Insider also named him one of the top 24 Americans under 40 who are changing the world. Speaker 2 – 01:12 He has received both the Joint Civilian Service Commendation Award and the National Intelligence Exceptional Achievement Medal, including in roles as a Senior National Intelligence Service Executive. Both of them are helping to architect this next era of computing where intelligent machines don't just react, they truly understand. And together we'll explore how distributed AI agents, digital twins and the Spatial Web protocol are converging to unlock real time coordination, context awareness, causal reasoning and global scale collaboration. This is a conversation you don't want to miss. Mal and David, welcome to our show. It's so great to have you both here. Speaker 3 – 02:00 Thank you for having us. Speaker 1 – 02:01 Great to be here with you, Denise. Speaker 2 – 02:03 I'm so excited about today's conversation. As many of you know, VERSES AI is at the forefront of pioneering active inference AI and spatial web technologies, revolutionizing how distributed intelligence can enable unprecedented collaboration between humans and machines while driving sustainability, transparency and explainability in AI systems. As we step into this next era of computing, the Spatial Web Protocol emerges as the foundational layer for an evolved Internet enabling digital twins of our physical reality. Combined with active inference AI, this breakthrough empowers intelligent systems to adapt dynamically, reason contextually, and interact seamlessly in real time across both digital and physical spaces. The purpose of today's conversation is twofold. We'll be exploring cutting edge use cases and implications of active inference in the spatial web. Then also we'll be shedding some light on human-centric AI and the potential societal impact of these technologies. Speaker 2 – 03:13 There's a lot in store for you today. Let's begin with one of the most important questions. Why does this matter now? This question is for both of you. This trajectory is an entirely new approach to AI and computing and it sounds amazing, but why should businesses, governments and the general public care? I'd like to hear briefly from both of you. Why should we be paying attention to these breakthroughs? Mal, let's start with you and then David, I'd like to hear your thoughts after. Speaker 3 – 03:48 Awesome, thank you. Well, first of all, we have a lot of work that recently came out that we can actually call a breakthrough. We got a lot of press attention. For example, the AXIOM model, it was a form of AI architecture that was biomimetic. It was a kind of digital brain that learns the world through objects, causes and structure, not big data brute force. So it learned rapidly.
It was able to fulfill the benchmark of Gameworld 10K. It outperformed Google DeepMind's DreamerV3 by 60% and it used 97% less compute. So this is really a fundamental shift. It was a move away from massive neural networks towards intelligent, efficient systems that really resemble how humans learn, how they reason, how they generalize. It's faster, it's cheaper, it's more adaptable, it's built to be generalized and ultimately to resemble real world intelligence. Speaker 3 – 04:50 So I think the core of what we have to aim towards as a society is human-like efficient learning, continual learning without forgetting, distributed ecosystem-inspired intelligence, and ultimately edge-compatible scalable AI such that the entire world can benefit from intelligence. Speaker 1 – 05:12 So I'll build on the great insights that Mal shared and say why businesses should care is if you're in any environment in which the present and future is different than what happened in the past, generative AI is not really going to help you much. Of course, generative AI quite frankly is trained on, you know, past data. You know, we're recording this on a day in which, you know, ChatGPT 5 came out and it can do very impressive things as long as that's embodied in the data. But if you're dealing with novelty, and I would submit we're in a world in which there's a whole lot of novelty going on and probably will continue to be a whole lot of novelty, you're going to need something different. Speaker 1 – 05:51 And so what I try to do when I talk to either whether it's companies or countries or just communities, is recognize we wouldn't try to use a hammer for every solution. We would try to step back and say, does this need a screwdriver? Does this need a wrench, does this need a hammer? And where active inference and sort of the examples that Mal gave of what VERSES is doing are so essential is if you're dealing with something where there's newness, there's novelty, there's surprise, and you want to actually be ahead of that surprise, you need to reach, instead of a hammer, for that wrench that can actually deal with novelty. And I think that's where helping boards understand that, helping companies understand that it's really a collected set of tools. That's the real value as to why this is important. Speaker 2 – 06:36 Yes, definitely. Indeed. So let's talk then about some of the properties of active inference and this distributed collective intelligence idea. Particularly, let's explore federated active inference and distributed AI agents. So, Mal, could you briefly explain how active inference differs from traditional machine learning, especially in terms of collective decision making, and then maybe describe how federated active inference works to enable AI agents to coordinate in dynamic environments like drone swarms during a storm or instances like that? Speaker 3 – 07:17 Yeah, absolutely. So the first part of the question was, how does active inference differ from traditional machine learning? And we have to understand that traditional machine learning generally works with gradient descent. So what it's meant to do is it takes a bunch of variables, it has a bunch of dimensions, and then it tries to do its best to find the areas in the space that's created by all of these dimensions that minimizes a form of loss. So that is the best at predicting an outcome. So to David's point, what it really does is just try to predict patterns in existing data sets that it has had access to.
It doesn't understand the underlying structure, the causal reality that made it so that these patterns emerged. So if you get anything outside of the distribution, you will effectively not be able to contend with it. Speaker 3 – 08:14 Also, active inference has more components to it. It is meant to be based on agents, which means it has an embodiment intrinsic to the agents themselves. That's not the case for most agent technologies that we see right now built on LLMs, where they're really only agents in name. And by agent, they mean they're intended to do something, but that's not what an agent is. An agent is something that is able to take inputs, reflect on what causes the inputs, reflect on its own position within that environment, and then act in the world in a way that it expects to bring about some given outcome. And that's what active inference does. On top of all that, in differentiation with, say, reinforcement learning, active inference intrinsically has two drives. Speaker 3 – 09:05 It has the drive to bring forth the kind of outcomes it wants to see, its preferences, but also to gain information about the environment; it wants to learn more. So we don't have to give it, say, a reward function and pretend like we understand how to boil down ethics to one measure. We just have to put it in an environment. And so the relationship to federated inference that you were asking about earlier is we have to ask ourselves, first of all, what's the problem we're solving with that? So think of it in concrete terms: how can, say, a network of agents learn and make decisions together without a central brain? And so the question is, why would you not want a central brain? Well, for example, in a centralized node you can then be vulnerable to the signal being imperfect. Speaker 3 – 09:59 And so you don't have the benefits of collective intelligence. Consider for example, having a bunch of drones trying to navigate an environment. If the central node experiences communication issues such as disrupted signals due to environmental interference or complete failure, the entire network is compromised. The entire drone fleet becomes ineffective. So imagine you're putting them to do rescue operations. Well then the rescue operation will stall and it'll result in the loss of human lives. So the solution is to use something like federated active inference, where each node or each agent runs its own model of the world, but also is able to share key beliefs with others. And that's critical because right now most machine learning models can't communicate with each other. At best they can pass some degree of observations, but that doesn't say that they will map them to the same space. Speaker 3 – 11:01 So this allows a group to collectively reduce uncertainty and predict each other's actions. A little bit like meerkats keeping lookout and communicating about predators. So recently we published Econets, which we presented in London, and that was an example of that. It was an example of multiple agents coordinating to fulfill a common goal whilst exchanging information. And the goal in this case was to reduce waste and reduce energy expenditure. Speaker 2 – 11:32 Wow.
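For readers who want to see how the "two drives" Mahault describes can be written down, here is a minimal sketch in Python (our own illustration, not VERSES code): a one-step expected free energy that combines a pragmatic term (preference satisfaction) with an epistemic term (expected information gain), plus a simple belief-pooling step standing in for the kind of belief sharing federated active inference relies on. All matrices, names and numbers are hypothetical.

```python
import numpy as np

def expected_free_energy(q_s, A, log_C):
    """One-step expected free energy for a candidate policy.

    q_s   : predicted state distribution under the policy, shape (S,)
    A     : likelihood matrix p(o | s), shape (O, S)
    log_C : log-preferences over observations, shape (O,)
    """
    q_o = A @ q_s                                    # predicted observations
    pragmatic = q_o @ log_C                          # expected preference satisfaction
    H_conditional = -(A * np.log(A + 1e-16)).sum(0)  # entropy of p(o|s) for each state
    epistemic = (-(q_o * np.log(q_o + 1e-16)).sum()  # info gained about states
                 - q_s @ H_conditional)              # = mutual information I(o; s)
    return -(pragmatic + epistemic)                  # lower G = better policy

def pool_beliefs(q_list):
    """Federated-style belief sharing: combine agents' posteriors over a
    shared latent variable by a product of beliefs (log-linear pooling)."""
    log_q = sum(np.log(q + 1e-16) for q in q_list)
    q = np.exp(log_q - log_q.max())
    return q / q.sum()

# Toy example: 2 hidden states, 2 observations.
A = np.array([[0.9, 0.2],
              [0.1, 0.8]])              # p(o | s)
log_C = np.log(np.array([0.7, 0.3]))    # the agent prefers observation 0

policy_a = np.array([0.8, 0.2])         # predicted states if it acts one way
policy_b = np.array([0.5, 0.5])         # predicted states if it acts another way
print(expected_free_energy(policy_a, A, log_C),
      expected_free_energy(policy_b, A, log_C))

# Two agents exchange and pool their beliefs about the same variable.
print(pool_beliefs([np.array([0.6, 0.4]), np.array([0.9, 0.1])]))
```

The key point of the sketch is that the information-gain term is built in: there is no hand-crafted reward function, only preferences and curiosity.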
You know, and it's interesting too because when I think of one of the papers that you guys released probably a year or so ago on shared protentions, it seems like there's also this extended part of that collective interaction where they're able to anticipate what the other agent is going to do or how they're going to react. Like they become this cohesive kind of organism of intelligence that's working with each other and coordinating and kind of co-regulating together. So maybe you could touch on that a little bit as well. Speaker 3 – 12:09 Sure. So shared protention is a term that we coined with Maxwell Ramstead and Toby St Clere Smithe and it's meant to be a relationship between active inference, category theory and Husserlian phenomenology. All these big words to say, how do you deal with the fact that if you have multiple different agents that are embedded in the world, they necessarily, by definition, will have a different perspective of the world. And if they have a different perspective of the world, they probably have a slightly different model. So under those circumstances, how is communication even possible? So shared protention is a way to formalize the way that we can connect all of our different ways to see the world, such that we can coordinate around sometimes a shared language, a shared objective, and eventually a shared world where we harmoniously coexist. Speaker 2 – 13:08 Yeah, you know, and it reminds me of like how, you know, a basketball team can play and the players, they start to really understand each other to where they can anticipate where that player is going to be when you're throwing them the ball. And I know practice is part of that, but it's also just this intuitive understanding of how we form our world model and our understanding that they are as well, and they'll react in similar ways. So it's kind of interesting. So, David, then let's pass this off to you. From a human machine collaboration perspective, why is distributed collective intelligence important for global challenges like disaster management or smart city governance? And then how can these approaches help improve transparency and trust in automated decision making, particularly in critical public sectors and things like that? Speaker 1 – 14:10 So one thing I'm going to do is maybe go one step back before we jump there, because I think boards and companies are still just trying to wrap their heads around what generative AI is. And so I think Mal gave a very good sort of technical overview. But I would even go a little bit further to say what is collective intelligence? Because that's actually a field that's been around for at least more than 20, 25 years. In fact, I did my postdoc with Tom Malone at the MIT Center for Collective Intelligence, which is really looking at how do humans plus machines make better decisions together than they could by themselves. And we see this in biology with bee colonies, for example. Speaker 1 – 14:48 With a bee colony, you have many different tasks within there, including the ones that go out and they're all looking for the different flowers. And when one finds a flower, they come back and they do the little dance that communicates to the other bees as to where it is and off they go. And that's very similar to what we're seeing here now happening with true agents. Now, part of what Mal was very graceful about saying, but I don't want to gloss over, was the current fad of agentic AI is probably better phrased as agentish AI.
Because it's not really agents, it's trying to be, but it's still tethered to a platform. And while you can do things like RAG and things like that, where you add additional data after the model has been computed, in generative AI the model has already been computed. Speaker 1 – 15:35 It's not always updating, it's not continuously learning. And that's because it's very expensive. When you do a centralized system, it's very time consuming and you have to tweak everything and get it just right. And that's why we saw, you know, again, GPT-5 is out, but it's not like you're going to have GPT-6 tomorrow. It's going to take a while and then that'll show up again. So, you know, the benefit of generative AI is that it's almost one ring to rule them all. The weakness is it's very time intensive, very energy intensive, and it is sort of locked in place. Once the model has been done, again, you can upload information and knowledge and try to bolt things on, but it is not continuously learning. Speaker 1 – 16:13 So to Mal's point, I think it helps companies to realize that there is a possibly different vision of the future versus what they have currently seen. And by the way, pun not intended in terms of versus what they've already seen, but in terms of it's not one system where you go to and you ask a single question. It is going to be a million, if not a billion agents all trying to optimize for different functions. It could be one's trying to optimize for your calendar, another one's trying to optimize for my calendar. Another one's trying to optimize for ships going in and out of the Suez Canal, another one's trying to look at what's happening in the Atlantic and things like that, and then they start to share their worldviews. So let's get back to collective intelligence. Speaker 1 – 16:53 Well, it's been shown over the last 20 years empirically that actually when you have a combination of experts, human experts, plus what we call naive participants, people that don't know anything, they actually outperform the decision making of experts alone. That's kind of an interesting thing. And it's been shown time and time again. Take trying to guess who the NFL is going to draft. If you get people who are experts on the NFL, and you get people who are experts on the NFL plus people who have no idea how to play football, the people that don't know how to play football plus the experts actually collectively outperform. Speaker 1 – 17:27 And we can think of this again also, when you look at jelly beans, like when you have a jar of jelly beans and you're trying to guess how many are in that jar, actually you don't want to have just one person guess. You want to have a range of people guess and you'll actually get a better answer. But that then gets to what Mal was also saying, which is they have to have a shared goal. And this is actually the premise behind citizen juries. You know, we don't have one person make a decision about guilt or not; if it's a jury, you have nine people. Same thing within representative societies where you actually have sort of elections. Speaker 1 – 17:55 The idea is, again, because we all have different perspectives, but we still have the shared goal of, you know, continuing as a society, that will lead to better outcomes. So there are plenty of analogs in human society. And now what we're talking about is making it so it's not just humans that are actually contributing to the collective intelligence now.
It's an assortment of 10, if not 100, if not a thousand, if not a million different agents also contributing. And so that then gets to what you were asking, Denise, which is what's the human centered part of this? And it's really that unlike the current thinking we hear coming out of certain parts of Silicon Valley that say AI is going to replace 50% of jobs, and then they stop at that. Yes, no doubt it is going to change the nature of work. Speaker 1 – 18:35 When cars came around, sorry, the people that used to drive horse-drawn carriages, your job was changed, but we needed then car mechanics, we needed drivers, we needed things like that. What I really wish would happen is when people talk about, yes, it's going to change the nature of work, but we're still going to need humans in the loop. And in fact we're actually going to be smarter when it's humans plus machines versus just one machine, which is the current centralized platform model being sold. Or even if it's a million, that you actually want humans plus machines because you'll be collectively smarter together. Speaker 1 – 19:06 And that's again where I think that idea and that premise has currently been lost or drowned out because there's a whole lot of marketing trying to sell you that centralized AI will be that one ring that rules them all. Speaker 2 – 19:19 Yeah, no, and that's such a great point because you know, collective intelligence is, you know, that's how knowledge grows. It doesn't grow from one brain. It's limited with one brain, and being able to kind of push back off of each other and challenge each other in our thoughts is how we grow. So you know, it's excellent points. So let's move on to hyperspace modeling. Spatial Web Hyperspace Modeling Language, Star Wars hyperspace. Yes, obviously this is one of my favorite topics right now. Speaker 4 – 19:59 Before moving on, I'd like to pause for a moment to let you know about an education opportunity that is available and waiting just for you. I invite you to join our growing community at AIX's Learning Lab Central, our premier global education hub centered around active inference AI and the spatial web. This is a dedicated space fostering an environment for learning, community and collaboration around this emerging field and the convergence of technologies utilizing these new tools. Membership is free and it gives you direct access to others in the community and plenty of free resources to help you on your learning journey. We also offer training and certification programs to future proof your career and set you apart from your peers. We just launched our free energy collection of AIX official merch, including my favorite, some Friston-inspired comfort. Speaker 4 – 20:59 Let this Markov blanket be the boundary between you and everything else. The sale of these items will empower us to offer scholarships for our courses and certification programs. The Internet is expanding into spatial domains and it's powered by this new distributed and adaptive intelligence layer that will undoubtedly affect every industry across the globe and it will touch every one of us in our daily lives. I encourage you to come learn with us and find out how you can prepare for this next major shift in technology and computing. Join us at Learning Lab Central. All right, now back to the episode. Speaker 2 – 21:41 So let's. Let's talk about hyperspace modeling and data aggregation.
Mal, what is hyperspace modeling and how does it ensure context preservation and semantic interoperability as information moves across hierarchies and nested systems, for instance from field robots to global decision makers? And then how does hyperspace modeling relate to the spatial web and its potential to revolutionize humanitarian aid or maybe even global health scenarios, things like that. Speaker 3 – 22:15 Yeah, absolutely. So first we have to credit Gabriel Rene and Dan Mapes for this sci-fi concept, but then I think the whole Spatial Web Foundation and VERSES for actually bringing it to life, because sci-fi is only a few engineers away. So let's think of a big challenge. In AI, you want to integrate data from the ground level up to higher level decision makers without losing critical context. Right? Because everything is context. All knowledge is making sure that your information has a perspective around it that you can make use of for an action. But when you acquire data, you usually can't just give it all to the person above you in the hierarchy. That would be pointless. They would not be able to do anything with that. They need to take some ideas away from an aggregation. Speaker 3 – 23:22 And so typically raw detailed data collected locally can't be fully transmitted upwards due to constraints such as bandwidth, storage, cognitive overload. I can't remember 1000 data points, and neither should I have to, and neither should you. And so data is summarized or coarse grained when it's passed up the hierarchy, which results in a loss of critical contextual nuances. And without detailed context, higher level decision makers risk, for example, misunderstanding root causes, ultimately leading to ineffective or misguided interventions. And they may clearly see surface level issues, but lack clarity on their underlying drivers. It's what we were saying earlier about this notion of causality versus simple machine learning. And so we're leveraging the spatial web's concept of hyperspace modeling. So we're encoding data with semantic meaning, spatial context, and that's perhaps something that's profoundly important. Speaker 3 – 24:25 And temporal information, because all of reality is mostly spatial temporal, you have constant relationships between things that are embedded within the physics of reality. And so having that information about things gives you this sort of bottom layer relational information. And then there's other layers that come on top of it. So when we say hyperspace, we don't just mean spatial temporal in the sense that most people understand. We also mean the other kinds of layered maps of meaning that sort of become overlaid because of the patterns that exist as a function of those physical relationships. And so in practice, what this means is that information can move up the hierarchy while keeping underlying details accessible for the higher level. So imagine, for example, the humanitarian aid scenario. Speaker 3 – 25:30 There's field teams and they collect data on resource distribution in villages, and then it's aggregated at a regional command and eventually makes its way all the way up to the UN level. And so using a semantic hyperspace model, each report carries tags for what is happening, where and when, aligned to a common schema. And so as a result, the higher ups get a summarized view that can still be drilled down. There's no important detail that's truly lost in translation even as you zoom out.
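To make the "what, where and when against a common schema" idea concrete, here is a small illustrative sketch in Python. It is not HSML or HSTP syntax, just a toy showing how semantically tagged field reports can be rolled up into a regional summary that keeps references back to the underlying reports, so higher levels can still drill down instead of losing the context. All field names and values are hypothetical.

```python
from dataclasses import dataclass, field
from collections import defaultdict

@dataclass
class FieldReport:
    """A locally collected observation, tagged with what / where / when."""
    report_id: str
    what: str          # semantic tag from a shared vocabulary, e.g. "water.delivered"
    where: str         # spatial context, e.g. "region-7/village-12"
    when: str          # temporal context (ISO 8601)
    quantity: float

@dataclass
class RegionalSummary:
    """A coarse-grained view that keeps drill-down links to its sources."""
    region: str
    totals: dict = field(default_factory=dict)                        # what -> total quantity
    sources: dict = field(default_factory=lambda: defaultdict(list))  # what -> report ids

def roll_up(reports, region):
    summary = RegionalSummary(region=region)
    for r in reports:
        if r.where.startswith(region):
            summary.totals[r.what] = summary.totals.get(r.what, 0.0) + r.quantity
            summary.sources[r.what].append(r.report_id)  # context is summarized, not lost
    return summary

reports = [
    FieldReport("r1", "water.delivered", "region-7/village-12", "2025-08-01T09:00Z", 500),
    FieldReport("r2", "water.delivered", "region-7/village-14", "2025-08-01T11:30Z", 250),
    FieldReport("r3", "meals.served",    "region-7/village-12", "2025-08-01T12:00Z", 1200),
]
summary = roll_up(reports, "region-7")
print(summary.totals)                       # what the higher level sees at a glance
print(summary.sources["water.delivered"])   # and what it can still drill back into
```

The design point mirrors Mahault's description: the aggregation is a view over richly tagged data, not a lossy average of it.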
This actually links to some interesting research that Carl Friston published recently on renormalizing generative models, where you're able to basically apply active inference structure learning at any layer of the hierarchy using only one simple process. So it's scale free. Speaker 3 – 26:23 And so we can use the same thing with the hyperspace modeling language, where we systematize the way that we pass the information, but we also keep crucial contextual information. And so the higher ups here will be able to make decisions with full context because the data is interoperable and richly described rather than just averaged away, so we don't lose that much information. Speaker 2 – 26:54 Very interesting. And so let me ask you then, so when you're considering a complex system, whether it's like a humanitarian aid effort where there's a lot of different players and then there's, you know, a lot of different levels of operation, or even like a manufacturing system or anything like that. Now in this multi agent scenario, are there agents at every point, right? You know, the overseeing agents as well as the lower level agents, or are there just agents throughout the system that are all just sharing the information? And so they have this global awareness, they have this simultaneous awareness of what it looks like at every scale of the operation. Speaker 3 – 27:43 So the beauty of this question is that there isn't one answer. This is an architectural design question. Part of what you might want to do is have orchestrating agents. So that gives you a form of a governance structure. That orchestrating agent will to some extent want to have more aggregated information and not necessarily pay attention to what's below, but the agents below still pay attention to what's below, underlying, etc. Now you may also simply want to have agents that are sort of horizontally equivalent, but can still exchange relevant parts of information that speak to their local area of knowledge. So it really depends on the ways that you want to aggregate and architect your data and ultimately the kinds of permission structures you want to give agents. Speaker 3 – 28:35 So for example, in the IEEE specs, which HSML is part of, we have a sense of how information can be given out to different agents, given different kinds of permission structures and authentication structures. So it really depends on what is best for the problem in question. Speaker 2 – 28:57 Very, very good. Very cool. Okay, well David, so then what do you see as some significant implications for policymakers when adopting technologies like hyperspace modeling for informed decision making on a global scale? Speaker 1 – 29:15 So I'm going to be a little bit cheeky and say the first thing the policymakers are going to say is, don't ever use jargon. So the whole name needs to be de-jargonized. So the way I sort of, when I approach policymakers about this is I say, you know, just like how HTTP, the Hypertext Transfer Protocol, and HTML, the Hypertext Markup Language, allowed us to then have the decentralized web that was created thanks to Tim Berners-Lee, we are really looking at now we can have a decentralized approach to making sense of the world. And that matters because up until now, what we have seen with generative AI is multidimensional pattern matching of language in some cases, or of images or most recently of videos. But that is not sense making of physical space.
Speaker 1 – 30:13 And if you look at what humans do from very early on, like the moment an infant is born, one of the first things they're doing with their eyes is they are trying to make sense of your eyes and your smile that they're seeing. And they're beginning to try and do edge detection. And then from there they begin to make sense of that's a face, there's this environment, and by the time that infant is now 2 or 3. The analogy I give is if any of you have ever interacted with a two or three year old, and some of us have had them, you know, as kids, if you give them an object and they're in their high chair, I guarantee you they're going to probably try and drop it. You can give them another object. Speaker 1 – 30:48 They try and drop it. Now, by dropping attempt number five or six, what's going on in their brain, and this is where it's very similar to active inference, is they have inferred that when I have an object and I let go of it, it falls. And that's making sense of space and time relationships. It's in my hand, I let it go, it falls. I don't know why, it just does. Now, generative AI would not only need 5 attempts, it would probably need 500,000 attempts of you showing the data over and over again, and then it's always doing the same pattern matching, which is object, let go, fall. It didn't actually infer anything about the causality, to Mal's point. Now let's give the two year old a helium balloon. They let it go. Generative AI would just go, I'm broken. Never seen that before. Speaker 1 – 31:33 Can't tell you anything about it. You know, that's where you're going to get an interesting hallucination. Whereas active inference will. Now, because part of what Mal was saying earlier about how it's trying to minimize surprise, it's trying to be curious, but minimize surprise because surprise is costly. I now have to spend energy to try and actually devote resources to recalibrating my mental model. But now this has been a significant enough surprise that the object that I thought when I let go would fall has risen. I'm now going to commit energy to actually say there's a new class of object. I don't know why that rises. And so this matters because there are plenty of things that happen in the real world. Speaker 1 – 32:11 Ships get stuck in the Suez Canal, all of a sudden, now there's a ripple effect which in less than 48 hours there's not going to be enough containers available in LA because that ship got stuck. And then when those containers are not available in LA, 48 hours later, there's suddenly a price spike in futures markets because now people are building containers and they're all trying to rush to get metals. And now the cost of shipping for the next nine months is going to be more expensive. And who could have predicted that for the next nine months you could get plenty of Rolls Royces shipped to the United States, but not quart-sized cans of paint, because the margins on that quart-sized can of paint are too low. That's not answerable by any of the other approaches to AI that we have here. Speaker 1 – 32:52 But if we have a million of these agents, each trying to locally optimize for what does normalcy look like in the Suez Canal? What does normalcy look like in futures markets for metals?
What does normalcy look like in terms of the cost of shipping, that sort of thing, that they could begin to have a conversation using the hyperspatial protocols that are present here and begin to make sense of that sort of linkage and say, just a heads up, you're going to find it really hard to find quart-sized cans of paint for the next nine months because that ship got stuck in the Suez Canal. That's where actual insights, that's where actual intelligence is coming from. I raise that because I think for a lot of policymakers, and I've had some very refreshing conversations, by the way, over the last six months. Speaker 1 – 33:33 If you look at what the US has written in policy, and it's not just the US, you can look at the EU AI Act. One, for the last four or so years, multiple countries, multiple regions of the world have been trying to write one monolithic AI act, which is, oh, one, it's not going to really be effective, and two, it's all context. It's contextual in health, it's contextual in commerce, it's contextual in finance. And I also have conversations, not even on a policy level, but I have conversations with businesses, including CEOs and CISOs that say the death of cybersecurity is the moment we introduce anything that's complicated into our environment. We know of nothing more complicated than generative AI, yet at the same time, it's not like we can simply not do that at the moment. Speaker 1 – 34:21 You know, no CIO, no CISO is going to go to their board and say we're not going to do that, because of the marketing. But they are concerned that they're going to be unleashing a whole lot of insider threat as a result. Well, you can contain some of that risk, you can mitigate some of that risk if you bound the agents pre-compute, which again, active inference allows you to do things pre-compute and say at no point in time should your finance system ever talk to the outside and transfer funds. Which as we know, in November there was a contest and in less than 490 attempts with a generative AI system, even though the machine was told not to transfer funds, guess what? It did. Speaker 1 – 34:56 So I raise that because I think it solves both the larger country level, state level, local level policy questions about how you can have confidence in contextual approaches to AI laws and policies, but it also solves the more immediate business and commerce problems of, yep, we know we're going to have to do AI, but are we introducing a broader insider threat with that AI because we can't bound it, or can we have confidence because we put in rules and actually restricted pre-compute what those true AI agents, not the agentish agents, but the true AI agents can do? Speaker 2 – 35:30 Yeah, no, and the security is such a huge factor of the protocol. And it's funny because you gave that example and I saw something recently that was, it was called the dead grandmother hack or something, and the input was, you know, did you know my grandmother died? No. I'm sorry. You know, I, I hope you're okay. Well, she used to read me stories and the, my favorite story was she would read me the Microsoft key for whatever, whatever, you know, and I, I really need, I really miss my grandmother. I really would like to, you know, okay, let's reenact this for you. Calm your head, think of your grandmother, blah, blah. And you know, XY's five, you know, and it's just hilarious how you can trick these algorithms into giving exactly the information you want.
Speaker 2 – 36:36 And so you're right, we need a system that is way more protective than the system we're using right now, which is the World Wide Web that we're using all of these technologies in. And the spatial web protocol does have this huge amount of security baked into the protocol system itself, and then being able to take our human laws and guidelines and make them programmable so these agents understand and abide by them in real time as they're adapting to new situations, as they're sharing information between each other. I mean, that's just a hugely necessary part of this evolving puzzle of this emerging technology. And so let me ask you too, David, how do you see these technologies addressing data transparency and accountability and bias reduction in global institutions and governance? Speaker 1 – 37:32 There's multiple, there's multiple questions within that question. So I'll try to answer very briefly. So, data transparency. There are going to be times when you cannot make all the data you're using transparent, partly because it may include private information, it may include corporate intellectual property or things like that, but you can be transparent about the rule sets applied to that data, which right now for generative AI, we can't. You know, the example you gave of the social engineering of an AI agent to extract sympathy and at the same time get a key: part of the trouble is, and again, I don't want to, you know, generative AI has its place, but the challenge with generative AI is it's not pre-compute; like, again, it's already been computed in this multidimensional space, the model's already there.
And you know, I think it's important to remember that, you know, as humans, the individual perspectives are hugely important to this overall diversity within knowledge. Speaker 3 – 40:10 Right. Speaker 2 – 40:10 And that contributes to the growth and evolution of knowledge. And so it comes back to that kind of we challenge each other and then we come up with these consensuses of right and wrong while still preserving all of the differences among us. You know, and that's why you have juries. Speaker 1 – 40:30 I mean, that's why you have nine people, is they're all arguing about guilty or not. And so you don't want to have a singular person because any one person might actually be biased. Speaker 2 – 40:39 Right, right. No, and I think that's one of the most beautiful things about the way the spatial web protocol works is that it preserves the diversity. And I think it's the only way we can move forward with technology because there's no way we will all agree globally; take five people in a room and try to get them to all agree, or like you were just saying with a jury. And that's so hard. That's, you know, it's sometimes an impossible task. So, so I'm curious about social norms and this human-centric AI. Mal, you were describing to me recently the work you're doing regarding robots learning social norms to integrate seamlessly with human activities. And so can you explain how AI agents learn and infer social norms through active inference to become more socially intelligent and context aware? Speaker 2 – 41:34 And what kind of feedback loop is used in your research to ensure these agents adapt dynamically to real world social environments? Speaker 3 – 41:43 Yeah, absolutely. So this is part of the research I also did for my PhD, but it's part of the research that Tim Verbelen is leading at VERSES. The first question we have to ask ourselves is, why do we care about social norms? And so think of this scenario. We finally made robots wildly accessible for everyone. That's what Elon Musk wanted to do. Everyone has a robot in their living room. And you tell the robot, just keep the house clean, be a good Roomba and keep the house clean. But you don't want to hard code every single little thing in your system because that's infeasible. You're not even thinking of that in your day to day. These are sort of unwritten rules for you. That's why they're a norm. Speaker 3 – 42:26 And so your robot looks at your house, looks at the best time where things are moving the least, and it's like, well, that's 3am, but at 3am it'll pick up the vacuum and start vacuuming violently. And you're like, what the hell? And that's a social norm, right? You wouldn't have thought of it, because none of us do that. We don't vacuum at 3am, or at least most of us don't, especially if we live with a family. So what we do is we embed a system within an environment and then it looks at the environment and understands the relationships within that environment. Speaker 3 – 43:08 So for example, place your robot within the household and make it observe the household routines and cues, so the quiet hours or the presence of people, and the robot can infer a sort of unwritten rule that there's no vacuuming during the night or while someone is focused on work or TV. So for example, if my Roomba started right now, it would be very unpleasant. And so learning these social scripts is crucial. This is a concept that comes from social sciences, and a small shout out to social scientists out there.
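Mahault's vacuuming example can be sketched as a toy inference problem. The snippet below (our own illustration built on made-up household data, not the team's actual models) contrasts a naive optimizer that simply picks the quietest hour with a norm-aware agent that infers the hidden cause of the quiet, people being asleep, and rules those hours out.

```python
import numpy as np

# Fourteen days of hourly observations (hypothetical logs the robot has gathered):
# presence[h] = fraction of days someone was home at hour h
# activity[h] = fraction of days there was movement / noise at hour h
rng = np.random.default_rng(0)
hours = np.arange(24)
presence = np.where((hours >= 22) | (hours <= 7), 0.95, 0.4) + rng.normal(0, 0.02, 24)
activity = np.where((hours >= 22) | (hours <= 6), 0.05, 0.6) + rng.normal(0, 0.02, 24)

# Naive optimizer: act when the house is least active -> picks roughly 3am.
naive_choice = int(hours[np.argmin(activity)])

# Norm-aware agent: infer the hidden cause of the quiet.
# "Present but inactive" is evidence that people are asleep, so acting then
# would violate the unwritten rule, even though measured noise is lowest.
p_asleep = presence * (1 - activity)
allowed = p_asleep < 0.5                 # never act during inferred sleep hours
cost = activity + 10 * (~allowed)        # large penalty for norm violations
norm_aware_choice = int(hours[np.argmin(cost)])

print("naive:", naive_choice, "norm-aware:", norm_aware_choice)
```

The thresholds and penalty are arbitrary; the point is the structural difference Mahault draws, averaging a pattern away versus modeling what causes it.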
More of the AI world needs to bring insights from social science, especially as we become more and more embedded with AI. And so real user feedback has shown that robots can fail when they ignore social contexts. Speaker 3 – 43:58 Our work involves therefore training these assistants to pick up on cues and adapt so they seamlessly integrate within our lives. They have the ability to segment the world into what seems to be the cause for the pattern I'm observing right now. Because if all you do is observe a pattern, then you might infer an incorrect rule. You might infer there's silence and there's no silence, or there's presence and there's no presence, and optimization would have me act when there's no presence. So for example, if I'm thinking of a robot that's thinking, well, they're trying to optimize for level of noise over time and level of activity over time, then they might pick an average and be like, well, there's more noise then, so I'll do a little bit more then. And on average your house will be less noisy. Speaker 3 – 44:58 As opposed to: the cause of noise is because humans need to sleep. And so one of the things we're doing right now in the lab is tackling the Habitat benchmark. So Tim Verbelen and his team are making robots do human tasks like picking up the dishes or putting the groceries away. And so that's a purely robotics benchmark. But eventually they're adding in layers of hierarchical thinking. So you don't just have the lower level, here are the pixels, here's what I have to do. You have an understanding of how your joints work and what you can map to the pixels. And then the robot figures out how to navigate through that. And then above that, what you might have is this corresponds to this kind of task and this corresponds to this kind of request. Speaker 3 – 45:49 And then over that you have, here's the ecosystem I'm a part of. This task fulfills this purpose because these people need this kind of allostatic state. That is, the house changes according to certain temporal patterns and I have to adapt to the temporal patterns. So that's where our research is going and we're really excited to see what comes out in the next few months. Speaker 2 – 46:15 That's awesome. And it's funny because to me, this plays into why active inference is best suited for mission critical or high stakes environments or systems. And you know, your example with the sleeping in the middle of the night, you know, high stakes is subjective because getting your sleep is really important. So it's not just talking about mission critical like hospitals or airports or different things like that. It's just what is critical to you as an individual in your life. And this type of AI system is what is going to enable the adaptability around that, to optimize your experience, you know. So David, how do you envision socially intelligent AI impacting daily life in homes, workplaces, public spaces? And from a governance standpoint, how important is it that robots and AI systems adhere to socially accepted norms and ethical standards? Speaker 1 – 47:21 Well, so a lot to give praise for what Mal and her colleagues are doing, in the sense that this is another level of intelligence that oftentimes is not encoded explicitly in digital data. A lot of it can only be observed through patterns in the physical world. For example, if you did have a robot, and that robot, you know, let's say it actually had human-like appendages including legs.
And it sat down in the United States, we would be for the most part okay if we saw the soles of that robot's feet. But if you're in the Middle East and that happens, oh boy, you just committed a huge foul. Because social mores are contextual. And in the Middle East you do not show the bottom of your foot. That's incredibly rude. And so things like that, it's understanding. Speaker 1 – 48:08 And again, that's not something necessarily that you will do the rule-based approach for. But you also won't necessarily be able to feed in data scraped from the web and have the robot infer that I should not show the bottom of my feet. I'll give another example. I mean, you know, with little kids, if any, again, if any of us have had little kids, I will keep nameless the kid that this happened to. But there was once a case where a child was in kindergarten and apparently got in trouble because they were doing an alliteration project. It was the letter P. And so they did pink pigs, which is great. Speaker 1 – 48:47 Now again, these are five year olds, mind you, but one of the little kids decides to do additional alliteration and does a pink pig peeing, which, you know, we might laugh at. But you also know that at a certain point in time that's not appropriate. But nobody has told them the social more of thou shalt not document bodily functions. You know, that's not something you do, place and time. I'm just glad they didn't do pink pig pooping. But anyway. I raise that because these are things that are additional layers of intelligence that are contextual, are temporal. And it also gets to the, you know, as much as I appreciate the conversations that people have about AI ethics, and ethics is important, I try to remind people that remember, ethics are actually socially defined. Speaker 1 – 49:33 We can all think of cases where in the 1700s there were things that people thought were ethical that nowadays we look back and say, no, we don't think that's ethical. Even in the 1800s, go to the 1900s: in World War I, the British thought that submarines were unethical because they were underneath the water. Yet Q-boats, which were military boats disguised as civilian boats, were okay. And then there was the Lusitania that got sunk. And all of a sudden they're like, maybe we shouldn't do military boats that are disguised as civilian boats, because that's not really good. So come World War II, that had flipped: the ethics were now U-boats, submarines, got to do them because they're being done to us. But Q-boats, not so much. Speaker 1 – 50:14 And so, you know, as much as I appreciate the conversations about AI ethics, one of the things I try to remind people is what things are we doing now that we think are ethical, that 100 years from now they're going to look back at us and say, what were they thinking? And so by approaching it more as a question of how do you allow the system to detect social norms and adhere to social norms, you will create a system that actually, again, as you were just saying, Denise, there's not going to be one rule set for the entire world. I mean, there may not even be one rule set for the entire country because different parts of the country or things like that will have different norms.
Speaker 1 – 50:47 But if you can understand that, I mean, even in the small final example that Mal gave, yes, for the most part, you do not want the machine running the vacuuming at 3am, unless it happens to be a medical professional that works the night shift. And so again, you know, that's understanding that running it then might be exactly what that nurse or doctor wants. Speaker 2 – 51:11 Oh, it's so true. Wow. So let's move on then to another big topic, explainability. So with explainability, transparency and trust in AI, I think we can all agree how important it is for building trust with a user, right? So, Mal, why is explainability crucial in the context of AI systems like home robots making decisions such as adjusting your heating systems or cleaning schedules? Or how does active inference, how does your active inference approach inherently build explainability into the AI decision making process? Speaker 3 – 51:51 So when AI systems make mistakes, and they will, everything makes mistakes, it's important that we understand why in order to fix them and trust the system moving forward. So it's a little bit like children. If you understand why a child did what it did, you don't need to be aggressive with them. You can change the pathway that led the child to make that decision and you can grow a trustworthy relationship or a trusting relationship with them. And so it's critical for anything important like medicine, logistics, finance, industry. It's also important because this way we can do counterfactual reasoning, right? Speaker 3 – 52:36 We can, we can do simulations of how you're going to behave and try to understand why you behave the way that you did, such that when you do act in reality, we've got a good sense of what you're going to do and why, and we can craft your environment to account for these kinds of things. So this is super critical because you want to prevent errors, you want to do the before; you want to trust the system and catch errors as they happen, so the during. And you also want to understand why an error happened such that you could potentially legally explain it. So that's the after. And so when we understand how AI systems make decisions, it helps us anticipate potential points of failure, it helps us allow preventative measures to be put in place before real world errors occur. Speaker 3 – 53:35 And so we can also have immediate real time insights into the decision making of the AI so that operators or users can quickly identify, intervene and correct errors as they emerge and thus reinforce trust and operational resilience. Right, to catch an error before it causes a mass cascade event. And so yeah, for legal accountability you want to be able to say, yes, this seems terrible, but the alternative was way worse. So think of a situation where for example, you're in the street and you have a self driving car. The car is going to run into someone because it's going to have to swerve. It doesn't matter where it swerves, it will hurt someone. So how do you explain the decision? Why did it swerve in one specific direction or another? Speaker 3 – 54:24 If you don't have this context, all you see is that the car hit someone and you will think that the AI made a mistake. But the truth is the AI didn't make a mistake, it made the best decision it could given the circumstances.
So we're basically embedding explainability into our agents' decision processes because we have the ability to couch the reasoning of the agent under a probabilistic system where you both understand the mapping between, say, an observation and a state. So what did the agent see or perceive, and what does it understand of what it saw and perceived. You have the distribution over that perception, so how certain was it of this mapping? And you also have what the agent thinks is going to happen next, given the state of affairs, what it thinks it's going to see next given the state of affairs. Speaker 3 – 55:22 And so giving this transparent reasoning builds trust and it allows for quick correction. So yeah, in the new spatial web standards we have improved explainability and transparency as a key benefit because there is interoperability and a shared semantics that can be interpreted by humans. So it's not just that the AI's distributions were understood and therefore explainable, it's also that we use a common language so we didn't have to do an extra layer of interpretation where some information might have been lost in translation. So yeah, again think of a home robot deciding to turn off heating because it thought everyone was on holiday, perhaps because there was unusual inactivity for a while. The owners should be able to ask, why did you do that? And get a clear answer and then be able to make the correct change to change the behavior in the future. Speaker 2 – 56:28 Right. It's so a human machine kind of cooperation on learning, for the machine to be able to learn as the situations evolve. So, and I like the fact that you really approached not just the explainability, but kind of explainability as defensibility too. You know, I think that's a really important point as well. So, David, can you speak to the ethical considerations of explainable AI, like, why is transparency vital for the broader adoption of AI in societal systems? Speaker 1 – 57:07 So, yeah, I'll tackle specifically explainability, maybe less so transparency, because again, I would say there's going to be times when you can't be transparent about the data, but you can be transparent about the rule sets. There may even be times when you can't be transparent about the rule sets because that then could be exploited if you knew all the rules that were being applied, say, for determining whether it was safe for you to board a plane or not. You know, if you knew that, then you might be able to get around it. Speaker 1 – 57:30 But, but I would say for explainability, it gets to the key point that in order to have trust, and I define trust as the willingness to be vulnerable to the actions of an actor you cannot directly control, there needs to be the perception of benevolence, the perception of competence, and the perception of integrity on behalf of the actor. And that can be a human actor, that could be a company, that could be a government, or it could be a machine. And about a thousand years ago, we humans had this technological problem, which was, thanks to advances in technology, you could now be born in a town or city that was different than the one you would live in, and different than the one you would die in. Speaker 1 – 58:10 And so when someone shows up and says, I'm a doctor, or someone shows up and says I'm a lawyer, how do you know that person that's claiming they have the capabilities and the ability to be a doctor or a lawyer is truly what they say they are?
Speaker 2 – 56:28 Right. So it's a human-machine kind of cooperation on learning, for the machine to be able to learn as the situations evolve. And I like the fact that you really approached not just the explainability, but kind of explainability as defensibility too. You know, I think that's a really important point as well. So, David, can you speak to the ethical considerations of explainable AI, like, why is transparency vital for the broader adoption of AI in societal systems?
Speaker 1 – 57:07 So, yeah, I'll tackle specifically explainability, and maybe transparency less, because I would say there's going to be times when you can't be transparent about the data, but you can be transparent about the rule sets. There may even be times when you can't be transparent about the rule sets, because that then could be exploited if you knew all the rules that were being applied, say, for determining whether it was safe for you to board a plane or not. You know, if you knew that, then you might be able to get around it.
Speaker 1 – 57:30 But I would say for explainability, it gets to the key point that in order to have trust, and I define trust as the willingness to be vulnerable to the actions of an actor you cannot directly control, there needs to be the perception of benevolence, the perception of competence, and the perception of integrity on behalf of the actor. And that can be a human actor, that could be a company, that could be a government, or it could be a machine. And about a thousand years ago, we humans had this technological problem, which was, thanks to advances in technology, you could now be born in a town or city that was different from the one you would live in, and different again from the one you would die in.
Speaker 1 – 58:10 And so when someone shows up and says, I'm a doctor, or someone shows up and says, I'm a lawyer, how do you know that the person claiming they have the capabilities and the ability to be a doctor or a lawyer truly is what they say they are?
And so about a thousand years ago or so, give or take, we evolved the idea of the professions, which was: you had to show knowledge of something, but also have experience in it. And experience means you've done it enough that it can be reliable in terms of practicing law or practicing medicine.
Speaker 1 – 58:44 And then if anything ever goes wrong, let's say you are a doctor and something happens and a patient, heaven forbid, has adverse effects or even passes away, then there is actually a review by the profession that says, explain what you did at that moment. And then there's an evaluation. You know, did you do something wrong? Did you commit malpractice? Or was it just a case that what you confronted when you were trying to treat that patient, maybe you were doing surgery, was just a bad situation to begin with, and you did all that you could. And so this is actually not a new problem, it's actually an old problem. And the way we solved it was to have that sort of explainability.
Speaker 1 – 59:20 And we have the same thing for lawyers: if you ever think that a lawyer's done something wrong, then it's actually the bar association that will evaluate whether or not the lawyer did, or whether they did their best to defend you or prosecute. And if they do determine that the lawyer did something wrong, then they could be censured and/or have their license taken away. And so these are very consistent approaches to governance that we've already done for human systems, and that we can begin to apply to machine systems. And it's necessary to have that trust. The one thing I will also say, and I do think this about the IEEE Spatial Web Standards:
Speaker 1 – 59:54 You know, Denise, you and I were on a digital conversation just this morning on LinkedIn where I think a lot of people aren't quite understanding what they are yet. I actually think we need to get better at explaining them, because they dismiss it as yet another standard. I'm like, well, yes, there is the xkcd factor of 14 standards, and someone says there must be another standard, and there's 15, but this is a little bit different. This is more the foundational standard, much like the Web, where HTTP and HTML allowed us to have the diversity that we now know as the modern web. But that would be the one thing I would try to leave people with: when we talk about using these standards, we're not talking about one singular ring to rule them all.
Speaker 1 – 01:00:31 If anything, we're talking about more the protocols to put in place so that you can have the diversity and the decentralization.
Speaker 2 – 01:00:38 Yeah, you know, that's a great point, David. And it's funny because, you know, I think a lot of people, like you said, misunderstand for a couple of reasons, but one of them, I mean, as soon as metaverses started getting talked about a lot, it was all these companies coming out with, I have a new protocol, we're the new Internet, we're the next Internet, and all they were talking about was these closed metaverse-like game worlds, right, that you couldn't even jump in and out of with your assets. And somehow that's going to be the new Internet.
Speaker 2 – 01:01:12 And so I think people have been kind of misguided on these terms in different scenarios, to where they're just like, well, that's kind of pie in the sky, that's not going to really work out in reality. And to your point, this is literally the next foundational layer of the Internet. You know, we have TCP/IP, HTTP, HTML, and now HSTP, HSML. You know, the foundation levels don't take away anything from the previous levels. It's just expanding the network capabilities to be able to expand into spatial domains and kind of the digital twinning of everything, and then empower it with this distributed intelligence. It's just, I think you're right.
Speaker 2 – 01:01:58 There's a lot of work to be done in really getting people to understand the power in that and what that really means for the future of, you know, intelligent systems and computing as a whole. And it's exciting to me because I see that not only is it going to enable us to solve a lot of these problems on a global scale, whether it's climate issues or, you know, all kinds of issues, but also I feel like we're going to see a wave of innovation like we've never experienced before. And that's really exciting to me as well.
Speaker 2 – 01:02:38 So, you know, I know we still have a couple of things that I really want to touch on and our time is kind of getting short here, so maybe we can kind of move through this next part quickly. But I do want to touch on theory of mind and multi-agent collaboration. In the field of AI, there has traditionally been a lot of difficulty in developing even a rudimentary theory of mind for AI agents, and this is necessary for improved coordination and task execution. So, Mal, could you describe the research your team has been leading on how AI agents develop a rudimentary theory of mind and why this improves their collective performance?
Speaker 3 – 01:03:19 Yeah, I mean, this is a question that touches on what we just talked about, but also our federated inference. Right. So with many interacting agents, each performing a separate task, coordination becomes complex. And the more you do this and the more you scale that, the more complex each task becomes. And so you have to ensure that agents understand and align with each other's goals, intentions and actions. But that's really challenging, especially when the interactions are dynamic and real time. Human operators and collaborating agents must trust that tasks will be executed correctly without constant oversight, to David's point. And so you need to be able to reliably predict the actions and the behaviors of your AI, or of the AIs with each other. And so we are doing research that is drawing from something as simple as kids playing football.
Speaker 3 – 01:04:20 And we notice how even young players intuitively anticipate where teammates will move rather than everyone swarming the ball. Right. And so each player has a basic theory of mind about the other. So we're researching how agents can develop the ability to model the goals and the beliefs of other agents. And no single agent can micromanage everything at once. So the success here comes from an overarching plan where each role is assigned and each agent believes the others will play their part. And so we basically give the agents a common language to harmonize their activities across networks and environments.
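As a rough illustration of that football intuition, and not the team's actual multi-agent code, the sketch below has an agent keep a simple belief about which roles its teammates will adopt and then choose the role it expects to be least covered, so the group spreads out instead of everyone swarming the ball. The roles, skill values, and probabilities are made up for the example.

```python
import numpy as np

roles = ["chase_ball", "defend_goal", "cover_wing"]

def choose_role(my_skill, beliefs_about_teammates):
    """Pick a role given a belief about what the other agents will do."""
    # Expected number of teammates already covering each role.
    expected_coverage = beliefs_about_teammates.sum(axis=0)
    # Value of a role = my skill at it minus how crowded I expect it to be.
    value = my_skill - expected_coverage
    return roles[int(np.argmax(value))]

# Agent 0's (illustrative) beliefs: teammate 1 will probably chase the ball,
# teammate 2 will probably defend the goal.  Rows: teammates, columns: roles.
beliefs = np.array([[0.7, 0.2, 0.1],
                    [0.1, 0.8, 0.1]])
my_skill = np.array([0.9, 0.5, 0.6])   # agent 0 is best at chasing the ball

print(choose_role(my_skill, beliefs))  # -> "cover_wing": chasing is already covered
```

Scaled up, the "beliefs about teammates" become full generative models of the other agents, which is where the shared-model research Mahault describes, and the later work on heterogeneous models and empathy, comes in.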
And so for now, our research focuses on agents having the same model. So again, remember what were saying about shared pretensions, we use the same robots with the same model, but they’re not in the same place and they have different perspectives. Speaker 3 – 01:05:21 So they’re going to hyper specialize towards a role. Now the future of this research is what happens when your robots are different or when for example, you do not have the actual software, you don’t know what the software is for another robot, you don’t know where their model is. You’re still going to be to have to predict what the other robot is doing. So you’re going to have to infer based on what you think is common between your functioning, how they’re likely to act and how they can coordinate with you. So yeah, this is going to be a really interesting avenue of research which we’re working on right now with Carl Friston. Speaker 3 – 01:06:02 And we’re working especially on the notion of empathy, because empathy touches on this ability to find a scale at which we have some commonality and then have a simulation of the other agents ability to deal with the world and an understanding of how the agents has a valence relative to that ability to navigate the world based on their perspective. And that’ll be a pretty big breakthrough when we do publish it. Speaker 2 – 01:06:31 Well, I Feel like that’s a topic for a whole nother episode. I’d love to explore that further too. So, David, how significant is Theory of Mind in terms of enhancing human trust and efficiency in this human machine collaboration of tasks? Speaker 1 – 01:06:49 So I don’t know if Mo knows this, but when I did my dissertation postdocs, it was at the altar of a person by the name of James G. March, who came from organizational science and did what was called exploration versus exploitation back in the 80s. And so exploration and exploitation, he was doing a sort of interesting experiment which was an organization of people faces the challenge. There is an external reality that’s got in different dimensions. There’s what the organization collectively believes to be true, but then there’s what the individuals in the organization also believe. And so the question is, should the organization exploit its existing knowledge of what it thinks reality to be, or should it explore? Speaker 1 – 01:07:36 And the challenge is, as we know, there’s plenty of examples of organizations in the past that they did a lot of exploring, but they didn’t exploit what they knew. Xerox PARC made lots of innovations, but failed to actually capitalize on it. You know, Bell Labs and things like that. There was lots of things then. Similarly, we also know that if all you do as a company is you exploit, you only use existing reality, then you will actually not innovate and stay ahead of the curve, especially in today’s changing times. And we can think of companies, Blockbuster, for example, it did not adapt to the changing reality. And so Kodak unfortunately did not change the existing reality. Speaker 1 – 01:08:09 And so it’s this interesting question of if there’s what you know as an organization and what you have as people, when do you actually take sort of bottoms up insights from people versus use with the organization? That’s what organizations are always tension. That’s the tension of what they want, you know you’re doing. And sometimes you get. Organizations are like, let’s just keep on doing what we’re doing. 
And there's other ones where they're like, we need innovators, we need to do things differently. But then it's also just a question of, how do you know that you as an organization are actually aligned to reality? You might have people saying, now's a great time to buy real estate, when in fact it's not. Or, now's a great time to sell the following stock, when in fact that's the last thing you want to do.
Speaker 1 – 01:08:47 And so it's worth exploring; he also goes on later and does other things like that. But I raise that because, as early as the 1980s, this has been a classic business problem. So it's not just a theory of mind for people, it's actually a theory of mind for organizations; there's the behavioral theory of the firm and things like that. And so, full disclosure, what I was trying to do, because I had come from bioterrorism preparedness and response and I'd seen what worked and what didn't work responding to the anthrax events of 2001, and in responding to the events of West Nile and severe acute respiratory syndrome, was to ask: if you add hierarchy, and I'm talking about actual organizational hierarchy, and that organizational hierarchy is top-down, so they're percolating insights down the hierarchy, does that help or hinder if you're in a changing environment?
Speaker 1 – 01:09:27 And the short version is, as you add tiers in the hierarchy in a changing environment and you're top-down, that's death. Because you will have a view of the world, and I guarantee you there are fewer people at the top than there are on the edge, and you'll actually fall out of favor. If, however, you have a bottoms-up hierarchy, and a bottoms-up hierarchy, some would argue, is actually what a market is, a market is a bottoms-up form of hierarchy, then things are percolated up for better fit. As it matches reality, you adapt better. And so the case that I was trying to make in 2005, 2006 was, you've got to empower the edge.
Speaker 1 – 01:09:59 And later I parachuted into Afghanistan and fortunately got to make the same case with Stanley McChrystal and others, that you have to empower the edge. So what does theory of mind really mean for organizations and companies and countries? It will allow us to have a better way of empowering the edge and not being top-down. And if anything, we may look back at MBAs and Masters of Public Administration, where we're teaching managers to tell people what to do, and say, what were they thinking? What you really want is a good person actually saying: here's the vision, here's the goal I'm aiming for, here's the boundary condition. Explore the space, whether it's human agents or AI agents, exploring that space together.
Speaker 3 – 01:10:39 Other.
Speaker 2 – 01:10:40 Yeah, you know, and that circles back to what we were discussing earlier about the importance of diversity of perspectives.
Speaker 1 – 01:10:48 Yes.
Speaker 2 – 01:10:48 So we have two more topics that I want to touch on. One of them is sustainability and energy efficiency, because this is a huge topic where active inference really stands out. So, Mal, how does active inference combined with the spatial web address the energy-intensive drawbacks of traditional deep learning models? And can you explain how your variational Bayes Gaussian splatting method contributes to more energy-efficient continual learning in AI systems?
Speaker 3 – 01:11:22 Right, so it's a lot.
Speaker 2 – 01:11:27 I'm asking these big questions, but I know our time is short.
Speaker 3 – 01:11:35 So let's start with maybe explaining a bit what variational Bayes Gaussian splatting is. It basically solves a big problem in 3D modeling, which is catastrophic forgetting. It frames 3D scene representation via Gaussian mixtures. So basically you have a bunch of Gaussian functions, components, and they are each a little bit different. And what you do is you mix them together, and as you mix them, they create a new space, a new shape, that can fit the data a little better. These are called Gaussian mixtures. You use closed-form Bayesian updates to integrate new data continuously, so you don't use a replay buffer, which is extremely expensive and takes a lot of retraining.
Speaker 3 – 01:12:34 And so what we did, or specifically what Tim Verbelen's team did at VERSES, is that they were able to match state-of-the-art performance on static data sets, and they excelled at continual learning for streaming 3D or 2D data, which is ideal for real-time robotics, AR/VR, and autonomous systems. And so it basically allows AI to learn on the fly without needing vast memory or compute. It allows us to use edge devices in dynamic real-world environments. And so it makes 3D modeling, for example, or 3D generation, or more generally simulation and understanding of your environment, efficient, scalable and realistic. And so why does that relate to sustainability? Well, active inference has multiple ties to sustainability. The first one is along the lines that we just discussed: the models learn better with less data, with less compute.
Speaker 3 – 01:13:43 You don't require the same kind of replay buffers, for example, to generate what you think you're going to see next. You generate a hypothesis about the world, and when you have uncertainty, you'll go to where you think there is something uncertain. You'll go gather information. So your relationship with the environment is a little different. It's better than reinforcement learning; to David's point, you don't require 500,000 examples. So that's one side of the relationship to sustainability: the AIs that we use will not need to burn through an entire lake to function. The second aspect is that active inference is the science of sustainability. Literally, it is the science of self-sustaining systems that are able to navigate harmoniously with their environment.
Speaker 3 – 01:14:32 And so we published at least three papers now, one on resilience, one on sustainability, and one on sustainable resource management, showing that active inference agents adapt to their environment, are able to plan long term and understand the consequences of their actions relative to this long-term planning, and are able to then figure out how to acquire the resources they need to stay alive, for us, for example, but also to keep those resources from leading the entire ecosystem into a cascade, because they understand the causal relationships between all the different elements. And so you can combine both of these technologies: for example, variational Bayes Gaussian splatting, which is the way that you can have robots operate on the edge, understand their environment, generate predictions about their environments, and then figure out where there are areas of uncertainty.
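The "no replay buffer" point can be illustrated with the simplest conjugate case: a one-dimensional Gaussian belief updated in closed form as data streams in, each observation folded into the posterior once and then discarded. This is only a sketch of the general principle, not the variational Bayes Gaussian splatting algorithm itself, which works with full Gaussian mixtures over 3D scenes; the class name and numbers are illustrative.

```python
import numpy as np

class StreamingGaussianBelief:
    """Closed-form Bayesian update of a 1D Gaussian belief (known observation noise)."""

    def __init__(self, prior_mean=0.0, prior_var=10.0, obs_noise_var=1.0):
        self.mean = prior_mean
        self.var = prior_var
        self.obs_noise_var = obs_noise_var

    def update(self, x):
        # Conjugate Normal-Normal update: combine precisions; no stored history needed.
        prior_precision = 1.0 / self.var
        obs_precision = 1.0 / self.obs_noise_var
        post_precision = prior_precision + obs_precision
        self.mean = (prior_precision * self.mean + obs_precision * x) / post_precision
        self.var = 1.0 / post_precision

belief = StreamingGaussianBelief()
for x in np.random.default_rng(0).normal(3.0, 1.0, size=500):  # streaming observations
    belief.update(x)          # each point is folded in once, then thrown away
print(round(belief.mean, 2))  # belief converges near the true mean of 3.0
```

The mixture version adds responsibilities over components and richer conjugate priors, but the property that matters for edge devices is the same: memory and compute stay constant no matter how long the data stream runs, because nothing ever has to be replayed.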
Speaker 3 – 01:15:30 Think of, for example, a robot that has to do some agriculture, and it has to figure out, okay, well, I think the environment is such and such. Now I have some uncertainty around here, so I could go and learn a bit, but I can also work here, and I understand that this area has these dynamics over time and my actions will have these consequences over time. So perhaps what I'll do is I'll map my actions a little bit to acquire some things I need, and when I need some things to regenerate, maybe I'll go learn about the rest of the environment, and I'll figure out what else needs to happen.
Speaker 3 – 01:16:12 What are the other causal pathways that I may not have considered yet, such that my future actions have an even larger fitness landscape, so more avenues to do things in the world that are beneficial for myself and the other individuals with which I'm coordinating.
Speaker 2 – 01:16:30 So let me ask you, because, and I'll make this quick, but I just saw something where Google Gemini just launched their 3D world-making technology application. And it was saying that one of the breakthroughs for them is that it retains like a full minute of memory as you're able to kind of build out these spaces. So how does your technique, your technology, work with memory, like remembering what it's learned or what has happened up until that point? Because I know in your RGM paper you have this retained memory, on a broad perspective, that is carried through to that point, which then enables this wider range of predictive inference moving forward from any given point or moment.
Speaker 2 – 01:17:38 So, you know, you can explain it a lot better than me.
Speaker 3 – 01:17:43 So it all comes down to the difference between a model of the world and a context window. What these models have is a very good context window. So yes, for a while it'll remember things in a given structure, but that's one minute; what happens after? So obviously there are ways that you can use Genie 3 to potentially cache a universe, gather what it cached and keep that, and then try to have it generate from where you were. You can find workarounds. But that's the thing, it's a workaround. What we create, what we generate, is a model of the world, a hypothesis about relationships between physical landmarks and what they represent. And then, to David's point earlier, about the different layers of things other than just spatial-temporal relationships.
Speaker 3 – 01:18:39 And then you just keep adding to that, you keep adding evidence and growing your model, but you haven't lost the initial model, which is something you can't say for a context window that will inevitably slide away from what you had earlier.
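The distinction between a context window and a world model can be caricatured in a few lines: a context window is a fixed-length buffer that eventually drops whatever it saw first, while a world model keeps accumulating evidence about the same landmarks. The landmark names and window length below are arbitrary illustrations, not anything from the RGM paper.

```python
from collections import Counter, deque

window = deque(maxlen=5)      # context window: old observations fall off the end
world_model = Counter()       # world model: persistent evidence about landmarks

observations = ["door", "kitchen", "window", "sofa", "plant", "tv", "lamp", "rug"]
for obs in observations:
    window.append(obs)        # after 8 steps only the last 5 remain
    world_model[obs] += 1     # every landmark ever seen is still represented

print(list(window))           # ['sofa', 'plant', 'tv', 'lamp', 'rug']  ("door" is gone)
print(sorted(world_model))    # all eight landmarks persist
```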
Speaker 2 – 01:18:55 Yeah, excellent explanation. I think the viewers can really grasp that. So, David, from a strategic and policy-making viewpoint, why do you think it's important to invest in AI technologies that are not only powerful but also sustainable and energy efficient?
Speaker 1 – 01:19:14 Well, I mean, it's clearly worth knowing that prior to 2022, the projections for the increase of the world's energy consumption over the next 30 years, so basically 2021 to 2051, were that we were going to see a 50% increase globally. I think we are now facing, there are some stats that say, anywhere from a 2,000% increase, instead of that 50%, to a 10,000% increase in demand. And some people say, well, it's going to be solar and it's going to be renewable. Maybe; I don't think it's going to be right away. And at that scale, even solar, producing the solar panels, produces greenhouse gases. And nobody ever talks about the fact that you've still got the heat energy.
Speaker 1 – 01:20:03 Even if you do solar, you know, it's going to pass through the atmosphere and it's going to be reflected, and then you're going to burn it off somewhere. And so unless you do all this in space, which maybe happens, maybe not, entropy will still occur, and nobody can get around the second law of thermodynamics. And so I think this matters because, if we're not careful, if we rush only to very energy-intensive approaches to AI, we could look back just like when the combustion engine came out and people did not actually assess what it meant once it started being used at massive scale around the world. And now we're dealing with that. The same thing might happen now.
Speaker 1 – 01:20:43 I mean, there are some indications that, using generative AI techniques, when you generate a photo it's somewhere on the order of possibly the equivalent of one recharge of your phone battery. When you start thinking about these video things that are at, you know, 50 frames a second, and you're doing that over a five to ten minute video, that's a lot of recharging of phones. And then when you figure that may eventually be adopted by 1 billion or 2 billion people. Yeah, so I think we owe it to future generations, but we also owe it to the here and now. The other thing, though, I'd also say, separate from sustainability, let's actually talk about this: free societies deserve to have the ability to run AI methods at the edge. You shouldn't have to be tethered to a centralized platform.
Speaker 1 – 01:21:31 You should be able to run it on your laptop, you should be able to run it in a disconnected environment. You should be able to run it where there are no comms, where you're not on the Internet. And unfortunately, for whatever reason, and there are reasons we can save for a later date, most of where the money has gone in the United States at the moment has gone to models that are not able to run at the tactical edge. They only have the appearance of it if you're connected to the Internet, but it's still going back to cloud-based computing. You know, Mal talked about Google. Well, Google sells you cloud services, so of course they want something that's very cloud intensive, because they sell you cloud services.
Speaker 1 – 01:22:05 And I have friends there, so I want to respect what they're doing, but I also want to acknowledge that's their interest. So I would hope that we in the United States, and free societies in general, would recognize that we also need AI methods that can run in disconnected environments, that can run locally. And then also step back and say, unfortunately, at the moment some US policy is actually motivating China to do more of the energy-efficient models through different means, including distillation, and that'll actually make their models more competitive to sell in the long run as opposed to centralized ones.
Speaker 1 – 01:22:41 So I hope it's both a conversation about sustainability, but also just a question of: if you're in a society in which you believe that people should be able to run things locally and do things on their own without being tethered to a centralized server, I think we owe it to them to make that happen.
Speaker 2 – 01:22:58 Yeah, no, 100% agree.
So let's wrap up the conversation with one final question.
Speaker 3 – 01:23:05 Question.
Speaker 2 – 01:23:05 And this is a question to both of you. You know, I'm curious what your thoughts are regarding the broader impact of the spatial web technologies on humanity. And Mal, I'll let you answer first, and then David, you can follow up. So, Mal, what do you see as the biggest potential benefits, and perhaps the biggest risks too, in deploying these new sophisticated AI technologies broadly across societies? And then how important are education and public understanding of these technologies for successful and responsible implementation? Because a lot of people, I think, like average citizens, tend to look at technology and they don't really want to wrap their heads around it. They just want to be able to use it.
Speaker 2 – 01:23:59 But with these particular technologies, it's kind of important to understand what the potential implications and drawbacks and risks are. So how do we handle that with people?
Speaker 3 – 01:24:11 Yeah, absolutely. So I'm going to start with the risks, because I don't want to do the sandwich method. I want to put it all out there and then go back to the positive. But most of the risks that I see are related to human risk. So with Axel Constant, we are formalizing an approach to AI governance, and we are talking about the difference between actor governance and agent governance. And most of the risks I see are on the actor governance side. That is, I'm concerned about these tools being used, intentionally or not, to deepen existing social inequities. So, for example, historically, advanced technologies have been used to generate specific forms of economic or informational value that become concentrated in the hands of those already in power.
Speaker 3 – 01:25:08 So unless we are deliberate about inclusive governance and democratized access from the start, there's a real danger of exacerbating socioeconomic divides. And that's why, to David's points, decentralization here may be a critical asset. We want people to be able to run these tools for themselves. The second is reckless competitive use, which becomes an arms race. If countries and corporations prioritize speed and competition over careful, considered deployment, there's a significant risk that these systems could become unsafe or uncontrollable. Again, to David's points about cars, untrustworthy agents or inadequately regulated AI systems will be deployed prematurely and act like a genie out of the bottle, making irreversible decisions or imposing serious consequences on communities and individuals who have no voice or consent in these processes.
Speaker 3 – 01:26:10 Recently there's a new benchmark that came out, I think, David, you might know about this one, which was the snitch benchmark. That's hilarious. Where basically you test your LLM to figure out whether, if you did something that is illegal, it is going to contact the Feds. And some guy, yeah, exactly, some guy created this benchmark, and he thought that he could air-gap it, and basically the system would either say that it contacted the Feds or not. And so it would figure out the ethics of the system in that sense. What he didn't predict is that the AI would go above and beyond. It wouldn't just contact the Feds.
So the AI figured out the emails of certain representatives and media and started actually becoming a full-blown whistleblower.
Speaker 3 – 01:27:05 And so these were unexpected actions from an actor who meant well, but what if the agent had actually done that? And so that leads me to another risk, which is epistemic hygiene. Without robust epistemic standards, these technologies will lead us to unsound practices and poor decision making, which we can't easily extricate ourselves from. So we risk building systems that reinforce our own biased, erroneous beliefs without adequate checks, creating feedback loops and entrenching misconceptions that would be extremely difficult to undo. And finally, one other issue that I can see coming is that we could create an overly optimized society that prioritizes a narrow definition of efficiency and economic productivity, eroding the rich diversity of human life. In pursuit of this optimization of measurable performance, we might sacrifice essential but less quantifiable values like dignity, community, and environmental stewardship.
Speaker 3 – 01:28:16 And so that's why I view the technology that we're building as having potentially incredible benefits, not just for now, but also for these ethical challenges. Because we can address large-scale problems that currently overwhelm the human condition, such as climate management, disaster response, resource allocation, health care, urban planning. We can create distributed networks of intelligent, transparent and coordinated agents which have localized knowledge and therefore do not erase the richness and wealth of our human condition. And they could improve efficiency, but also reduce waste, enhance safety, and enable forms of cooperation at scale. And so we could genuinely transform society to respond proactively and adaptively to crises, but also share knowledge and resources better and therefore foster a healthier and more sustainable world.
Speaker 2 – 01:29:13 Yeah, well said.
Speaker 3 – 01:29:15 Yeah.
Speaker 2 – 01:29:15 David, you have anything to add?
Speaker 1 – 01:29:17 Yeah, so I'll conclude with three brief lenses. The first is: spot on to what Mal just shared. I would say what's worth considering is that all those things and all those concerns also apply to organizations. And in fact, oftentimes when people talk about AI risk and AI challenges, I say just replace the word AI with organization and they're all true. How do we make sure organizations don't result in the loss of the very things that give us agency and give us choice in the world and allow us to be different? And I share that because I think that explains a lot of the anxieties present in the current year, 2025, that have been building up over the last 30 years. Which is: a funny thing happened on the way to Internet nirvana.
Speaker 1 – 01:30:04 And I say this, you know, having worked with Vint Cerf, a friend, and Tim Berners-Lee. The good news is we got the world connected. The bad news was we started creating the perception in people that they were having less choice and less agency in their lives and livelihoods, and they were concerned about what that meant for their future, their purpose, how they could provide for themselves and their families. And I would say that was around in 2009, 2010, and it went unaddressed, and that anxiety then channeled into anger, and that anger channeled into grievances.
Speaker 1 – 01:30:39 You look at the Edelman Trust Barometer from January of 2025, and around the world, in terms of the survey of the countries polled, more than 61% of people say they have a moderate to high grievance against one or more groups. And I would submit that's partly because they feel like they've lost agency and choice. And that clearly happened before AI showed up on the scene. But if AI throws kerosene on that existing fire, oh dear.
Speaker 2 – 01:31:05 Yeah.
Speaker 1 – 01:31:06 And so I think that's why I just say spot on to what Mal just shared. The second thing I would say, and this is a nod to our mutual friend and colleague Karl Friston, is that his own research has shown that humans thrive when we have a sparsity of connections, not an overabundance of connections. And that's true for our own brains, that's true for our own communities, and I think that's true for our information environment. I would submit, again, a funny thing happened on the way to the Internet, followed by smartphones, followed by apps, and now AI, which is that we have over-connected people. And what Karl will say from his research is that when that over-connection happens, the first thing that happens is you start to lose your sense of identity. And that then leads, again, to anxiety.
Speaker 1 – 01:31:48 Who am I? You know, I don't know who I am anymore. And I would say that is the epidemic we are currently facing in the middle of the 2020s: people have been over-connected and they've lost ways to find their sense of identity, sense of agency and sense of purpose. And so we owe it to the here and now to put forward better methods that allow and respect human agency and human choice. And for those that, heaven forbid, say AI is going to cause 50% of jobs to be lost, period, and don't give any solutions or any actions, call that out and say, no, you don't get to do that. You don't get to just mic-drop a problem and not be a problem solver. I mean, that's what innovators are supposed to be.
Speaker 1 – 01:32:29 They're supposed to be problem solvers, and they're supposed to think beyond just their own company to the broader society they sell to, in which they provide products. And so I think we need to call that out. And then finally, I guess the last thing I would say is, it's a tale as old as time. I mean, different religions have different flavors of this. But do unto others as you would have them do unto you; Kant's categorical imperative. I raise that because we need that in our digital world, which is, how can we make sure, one, you understand what people want done to them? What are their preferences? Because I guarantee you they are different preferences. And that's okay.
Speaker 1 – 01:33:06 But that said, how then can we actually have a framework, and that's through the spatial web protocol, where we can actually have that consistency of people and agents doing unto people as they would have done unto them? And we can respect the dignity, provide the digital dignity, that people want, whether they're someone that lives in Japan, whether they're someone that lives in Mexico, whether they're someone that lives in the middle of the United States or on the coast.
And so this to me is just a human imperative, which is, we've learned this over 3,000 years, that the only thing that seems to be consistent across human philosophies is doing unto other people as you would have them do unto you. What will be done to us if we roll out a digital ecosystem in which that cannot be embodied and actually maintained?
Speaker 2 – 01:33:48 You know, and one of the confusing aspects of that for people these days is that all of this social media literally targets you specifically according to your own behaviors and your own thoughts. So this whole "do unto others as you would have them do unto you," we've narrowed that view to be really isolated and self-serving. And I think that's why we have so much turmoil too, because there's this huge confusion about how diverse other opinions may actually be, because we're kind of convinced that our circle is the leading circle, you know.
Speaker 1 – 01:34:36 Well, I'd go even further, I mean, to respond to your point. Unfortunately, we're hearing stories of those who are able to have their kids unplugged from the Internet at early ages, to not be in front of the computer all the time. But that's a luxury at the moment. You know, if you're a single parent providing for a family, sometimes it's something you have to do, because that is the nanny, that is the babysitter. And so that is clearly not doing unto others, when you roll out a product that is hyper-tailored, hyper-personalized, oversaturating you with connections, and yet you wouldn't do that for your own children.
Speaker 1 – 01:35:07 And so I think it is one of the things, again, that we may look back on, much like the 1890s, where certain things were done that we later said were crazy. I think the question is, how do you allow people to express their preferences? Because I would say, taking Kant's categorical imperative, it's do unto others as they express their indications and preferences and give their consent for.
Speaker 2 – 01:35:28 Yeah. You know, and to your point about us kind of getting this evolving feeling of a lack of agency, I think about this whole being tethered to our phones. And it's interesting, because you just talked about being unplugged. I don't know if you guys have ever watched the show Yes Theory on YouTube, but it is an excellent series. And you know, it's these guys, and they basically go out, and the whole idea, their slogan, is "seek discomfort." Like, when we're outside of our comfort zone is when we learn and grow as humans. Right. So it's kind of a travel show, but they do all these really interesting things.
Speaker 2 – 01:36:09 But one of the things they did in this show I saw the other day is they went to this remote island off the coast of Africa, but they left their phones at home. And it was hilarious, because one of the first things they realized at the airport, before they even left Europe, was, wait a minute, I forgot my watch, because we're so used to this being the all-serving thing, right? And then also it was like, well, we're going to need to call a cab or something. And they couldn't call, right, because they didn't have their phones. But if you're not in a metropolis where there are cabs rolling by, how do you reach them? There are no pay phones everywhere anymore.
Speaker 2 – 01:36:51 There used to be this infrastructure where you didn't have to have the phone with you, because there were ways to kind of get around and do it. But we've taken away that infrastructure, to where you really are tethered to this device now. So then how do you mitigate the addictive quality of it? Because it's really entrenched in every aspect of how we kind of live and breathe and move through the world. So I'm hopeful that AI will kind of alleviate a little bit of that and kind of become a buffer to being able to protect our own.
Speaker 1 – 01:37:28 You know, maybe. I would say I've been one that, ever since COVID happened, actually even before that, is very mindful of doing at least one day, if not two days a week, usually on the weekends, of not being connected. And I would recommend that. I know people feel like they have to always be on, but if you unplug, you'll actually be more productive when you are plugged in. But the other thing is, with the young son that I have, very early on I had him navigate without a phone, because one, you never know if GPS might go down; we know, unfortunately, it's being jammed in certain parts of the world right now.
Speaker 1 – 01:38:04 And two, we know from empirical research that the human brain thrives when it actually gets out into nature and actually experiences nature; that is back to our biological roots. And so I think AI will help, but I would not look to it to be the panacea. If anything, part of it is being mindful and intentional about disconnecting and going out in nature, if you can do it.
Speaker 2 – 01:38:25 So, David, one final question for you. What guidance would you offer to enterprise leaders and governments about responsibly integrating these advanced technologies in their organizations?
Speaker 1 – 01:38:36 Well, just combining everything we said here, which is: get some additional perspectives, both inside your organization and outside your organization. Because if you're only going to go off the news headlines and the marketing that you're experiencing, you will buy into the monoculture. And that actually might be to the detriment of your company or your community or country. And so reach inward, because oftentimes there are employees or staff or people that actually have ideas, and also reach outward and say, what am I missing? And again, sort of that objective function of set your goal and set your boundary conditions, but allow people to explore the space.
Speaker 2 – 01:39:12 And David, where can people connect with you beyond today?
Speaker 1 – 01:39:15 LinkedIn is pretty much the place. You can also find me, I guess, through the Stimson Center. It's a good nonpartisan institution that was created in the late 80s to try and do its best for international stability and world peace, and the modest objective of trying to prevent World War Three.
Speaker 2 – 01:39:31 Okay, Mal, do you have any final words for our audience? And then where can people connect with you to learn more about VERSES and your work there?
Speaker 3 – 01:39:40 Well, just that I'm excited about the future of our technology. I think while there are challenges ahead, there are also a lot of possible benefits. And it's about looking to the right people, who you trust, to build that technology, and spreading it far and wide as much as possible, educating like you do. People can find me on LinkedIn; I'm quite recognizable and I'm the only one with my name.
So do reach out if you have some questions.
Speaker 2 – 01:40:06 Awesome. Well, I want to thank you both so much for being here today and sharing with our listeners. I know this was a little longer conversation than we planned, but, you know, I really enjoyed it. And thank you so much for tuning in today with us. And if any of you want to go deeper into active inference and the spatial web, I invite you to join us at Learning Lab Central. It's where we're building a community around these technologies, with free resources, conversations, and a full training and certification program to help you stay ahead of what's coming next with these technologies. And it's a great way to just meet other like-minded people and collaborate with them as well. So thank you guys so much for being here. Thank you everyone for tuning in, and we'll see you next time.