The convergence of two groundbreaking technologies is reshaping how we think about AI, automation, and intelligent systems, affecting every industry across the globe. Here’s what you need to know now:
While everyone has been focused on ChatGPT and GenAI, a parallel revolution has been quietly developing. Active Inference AI is an entirely new paradigm for autonomous intelligence, and when coupled with the newly ratified IEEE 2874-2025 Spatial Web Protocol (a new global standard for the internet), it enables multi-agent distributed intelligence across networks, for a safer, more explainable, sustainable, and scalable approach to AI. These two technologies converge to create adaptive, decentralized, real-time intelligent systems that far surpass the limitations of today's generative AI.
This technology shift is being led by VERSES AI, a cognitive computing company developing a fundamentally new approach to autonomous intelligence, with world-renowned neuroscientist Dr. Karl Friston at the research helm as Chief Scientist. VERSES has also been instrumental in the development of the Spatial Web Protocol through the Spatial Web Foundation, a non-profit organization formed around the public IEEE development of the Spatial Web Protocol, its architecture, and its governance.
Unlike current AI, which relies on massive databases and pre-trained models, Active Inference mimics how the human brain actually works. It's based on the Free Energy Principle, formulated by Dr. Karl Friston, which explains how neurons learn in our brains.
Here’s what makes it revolutionary:
The IEEE has ratified the Spatial Web Protocol as a global public standard. (In the same way that Bluetooth or Wi-Fi are global public standards.)
It is the third foundational internet protocol layering on top of HTTP and HTML. This expands our internet beyond web domains/web pages into spatial domains — a digital representation of physical entities, spaces, and complex systems; it’s the foundation for the Internet of Everything.
What it enables:
Familiar with Metcalfe's Law? It states that the value of a network is proportional to the square of the number of nodes in the network. We saw this play out through massive network scaling: first with email (TCP/IP), then with websites (HTTP/HTML), and then with mobile apps. Each jump in capabilities (and in the number of nodes and data points) within the internet introduced a massive new wave of innovation that eclipsed the one before it.
Now we're entering the age of spatial domains, where every person, place, and thing, along with spaces and nested spaces, can become a domain (a node) within the network. The number of network nodes is about to increase exponentially, ushering in a new era of unprecedented innovation.
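To make the math concrete, here is a tiny back-of-the-envelope sketch in Python. The node counts are illustrative placeholders, not measurements; the point is only how quadratic value growth compounds as each era multiplies the number of nodes.

```python
# Metcalfe's Law: network value grows with the square of the number of nodes.
# The node counts below are hypothetical placeholders, not real measurements.

def network_value(nodes: int, k: float = 1.0) -> float:
    """Value proportional to n^2 (Metcalfe's Law); k is an arbitrary constant."""
    return k * nodes ** 2

eras = {
    "email era": 1_000_000,                 # hypothetical node count
    "web/mobile era": 1_000_000_000,        # hypothetical node count
    "spatial web era": 1_000_000_000_000,   # people, places, and things as domains
}

for era, n in eras.items():
    print(f"{era:>16}: n = {n:.0e}, relative value ~ {network_value(n):.0e}")
```

Even if the exact exponent is debated, the takeaway holds: adding orders of magnitude of nodes multiplies potential network value far faster than linearly.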
When you combine Active Inference agents with the Spatial Web Protocol, you get:
Organizations that understand and adopt these frameworks first will lead the future market. This isn’t about incremental improvement — it’s about fundamental transformation of how intelligent systems work.
The question isn’t whether this will happen. The question is: Will you be ready?
This blog post barely scratches the surface. For the complete technical breakdown, real-world examples, and implementation strategies, watch my full Learning Lab LIVE presentation.
You’ll discover:
Don't get left behind. While everyone else is still focused on mitigating the limitations of LLMs, you can lead the future of trusted, efficient, and adaptable intelligent systems.
Join Learning Lab Central for my advanced executive certification programs, scientific research papers, and a global community focused on these transformative technologies. (Every member of Learning Lab Central receives an invite to all future Learning Lab LIVE sessions.) Cut through the GenAI noise and focus on what’s actually coming next.
💡 Learn more about the only available Executive Certification for this technology stack: Certified Specialist: Active Inference AI & Spatial Web Technologies
🔴 Executive leaders who achieve this certification gain a distinct advantage, positioning themselves as confirmed leaders in shaping the next evolution of intelligent, decentralized systems. In a rapidly changing technological landscape, this certification signifies your mastery and readiness to lead in the era of Active Inference AI and Spatial Web technologies.
Contact me directly for personalized consultation, enterprise workshops, and introductions to the VERSES AI team to begin your pilot today. — Reach me on LinkedIn: Denise Holt
The FREE global education hub where our community thrives!
Scale the learning experience beyond content and cut out the noise in our hyper-focused, engaging environment to innovate with others around the world.
Become a paid member, and join us every month for Learning Lab LIVE!
Speaker 1 – 00:00 We're going to do a review: an intro to Active Inference and Spatial Web technologies. There are two foundational technologies on the immediate horizon that work together to form trusted, intelligent automation for mission-critical operations. One of them is Active Inference AI. It's ready for building intelligent, adaptive agents, and this is actionable right now. And then there's the Spatial Web Protocol, which is beginning to come online later this year to enable secure, contextual, cross-system coordination. This forms a new paradigm for intelligent real-time operations at scale. And together they will provide the necessary infrastructure to transform industries across the globe. So, introduction to Active Inference and the Spatial Web: the next era of computing is going to be focused on decentralized AI. As a lot of you know, our Internet is evolving. Speaker 1 – 01:04 HSTP and HSML are new protocol layers that are layering on top of HTTP and HTML, and they will enable programmable spaces: spatial domains. So the number of nodes in the network is about to increase exponentially. And then context will be baked into everything, because HSML is a programming language that will program context about all things in all spaces and nested spaces and how they interrelate with each other. Active inference agents will be distributed across networks, and this is the next wave of innovation. Through active inference, these agents will be aware of each other and of the network. They can sense the world in real time and learn from each other. And this is going to introduce unprecedented levels of interoperability. What we're going to see is the equivalent of Metcalfe's Law on steroids, with unprecedented scaling and growth potential. Speaker 1 – 02:11 So if you look at the evolution of our Internet so far, we started with TCP/IP, where the killer app was email: you could send messages from computer to computer. Then came HTTP and HTML, and all of a sudden you could build websites and attract numbers of people to your domain. And now we are expanding into spatial domains, where everything in every space can be locatable, identifiable, and programmable. So the number of nodes in the network is increasing. Now, Metcalfe's Law says that the value of a network is proportional to the square of the number of nodes. And that's why, when the World Wide Web hit, all of a sudden businesses could scale exponentially compared to brick and mortar. And then when mobile was introduced, that scaling increased even more, and you saw a lot of innovation coming in mobile apps. Speaker 1 – 03:10 That was not possible before with just websites. And now I think we're going to see an explosion of innovation like we've never seen before. If you look at the last two years of Gartner's reports on artificial intelligence, the hype cycle in 2023 showed that generative AI was just peaking at that time. And if you look down on the left of the chart, multi-agent systems and first-principles AI were just starting to rise. That's where active inference falls in. Now if you look at last summer, when they put out their new hype cycle, it showed that generative AI had already peaked and was on the downside. But if you look at what's on the left of that chart, all of those technologies in artificial intelligence will be touched by active inference. Speaker 1 – 04:08 We're talking about embodied AI, causal AI, simulations, AGI, sovereign AI, responsible AI, multi-agent systems, and so on.
So everything about active inference is about to peak. It's rising and it's about to peak. And what we're talking about here is first-principles AI: biologically inspired, adaptive, self-organizing, with dynamic continuous learning. Very different from classical AI, which is what we see in the spotlight today. So the future of AI is shared, distributed, and multiscale: an entirely new kind of autonomous intelligence that's able to overcome the limitations of machine learning AI. It's AI that is knowable, explainable, and capable of human governance. It operates in a naturally efficient way, there's no big-data requirement, and it's based on the same mechanics as biological intelligence. It learns in the same way humans do. Speaker 1 – 05:16 And the underlying principle, the free energy principle, has been shown to explain the way neurons learn in our brains. And this is already underway. Active inference is considered a physics of intelligence. Back in 2022, VERSES put out a white paper. VERSES' chief scientist is Dr. Karl Friston, widely regarded as the most-cited neuroscientist in the world. And this white paper lays out a new framework for autonomous intelligence that mimics the self-organized systems of nested intelligence found in nature. It's based on active inference, the free energy principle, and the Spatial Web Protocol, to create cyber-physical nested ecosystems of distributed intelligence that join humans, machines, and AI agents on a common network. And back in 2018, Wired magazine put Karl Friston on the cover, and the big title was "The Genius Neuroscientist Who Might Hold the Key to True AI." Speaker 1 – 06:26 Karl Friston's free energy principle might be the most all-encompassing idea since Charles Darwin's theory of natural selection. So this has all been rising for many years now. And I love this diagram, because it really shows that all the current AI we see right now stems from Geoffrey Hinton and DeepMind. Google bought DeepMind, and all of these other AI companies are stemming from former employees of Google and DeepMind. Now what's really interesting is that at University College London you had Geoffrey Hinton, but you also had Karl Friston, because Karl Friston is at University College London too. But rather than DeepMind, which is an engineering approach to AI, he was focused on something called the Nested Minds Network, which is a natural, science-based approach to autonomous intelligence. And so he's the chief scientist for VERSES. Speaker 1 – 07:35 So he's working with VERSES. VERSES created the Spatial Web Protocol, but they donated it to the public, donated the IP to the IEEE, and created the Spatial Web Foundation as a non-profit organization around the development of the protocol as a global standard. So all of that is happening on one side, whereas everything else you're seeing currently comes from a completely different methodology. So this is something that we haven't even seen come to light yet. Now, the Spatial Web Protocol is a global standard for the Internet. The IEEE has now voted on it and ratified it completely, as of a few weeks ago. Back in 2020, they deemed the Spatial Web Protocol a public imperative, which is their highest designation. Speaker 1 – 08:32 And what this does, as I said, is layer on top of HTTP and HTML, but it also enables the convergence of all emerging tech across the network: smart technologies, Web3, AI, AR, VR, IoT, distributed ledger technologies, and edge technologies.
And this is the next evolution of the Internet. The Internet is about to expand both in size and in capabilities. And as I said, the protocols are the Hyperspace Transaction Protocol and the Hyperspace Modeling Language. So, the Spatial Web architecture: HSTP manages data flows between interconnected systems and facilitates interoperability between different autonomous systems and agents. And HSML provides a common language for agent systems, enabling computable context and decision making, and it provides a shared framework for context-aware decision making. Speaker 1 – 09:39 So the protocol aims to create a standardized way for different technologies and intelligent systems to interact and share information across programmable spaces, across global networks. And if you look at the way the Internet works right now, with World Wide Web domains, you have a website domain, which is a library of web pages, right? And what do they do? They process data and they serve up data and content. And if you look at current AI, that's exactly what it deals with. It's generating content, right? Whether it's text or images or audio, it all has to do with the content being served up on our current Internet. Now what's coming with HSTP are spatial web domains. This is going to enable the Internet of Everything, digital twins of everything. Speaker 1 – 10:43 So a spatial web domain is an entity with persistent identity through time, with rights and credentials defining people, places, things, and rules in machine-readable context and spatial grounding. And this enables a knowledge graph and digital twins of everything. So all of these nodes in the network become a universal domain graph, which enables a digital twin of our planet and all the nested systems and entities within it. So you have nested ecosystems, and intelligent agents, both human and synthetic, sensing and perceiving continuously evolving environments, making sense of the changes, updating their internal model and what they know to be true, and acting on the information they receive. HSML is a cipher for context, so it enables programmable spaces. Anything inside of any space is uniquely identifiable and programmable within a digital twin of Earth, producing a model for data normalization. Speaker 1 – 11:54 You combine this with active inference and it enables agents with causal reasoning about their environments that is contextually encoded: semantically, spatially, and temporally, over time. And this gives us adaptive intelligence and automation, security through geo-encoded governance, and multi-network interoperability, enabling all smart technologies to function together in a unified system. The spatial web facilitates a distributed ecosystem of intelligence. Current state-of-the-art AIs are siloed applications. They're built for optimizing specific outcomes, whether that's producing text, producing images, producing sound, or video editing. Each system is built for whatever the goal is within that output, right? So they're siloed by functionality, and they have difficulty communicating their knowledge frictionlessly and collaborating with each other. And perception and reasoning are separate functions using separate systems.
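To make the spatial web domain described above a bit more concrete, here is a purely hypothetical Python sketch of the kind of record such a domain might carry: persistent identity, rights and credentials, machine-readable context, spatial grounding, and nesting. This is an illustration only; it is not HSML, whose actual representation is defined by the IEEE 2874 standard.

```python
from dataclasses import dataclass, field
from typing import Dict, List

# Hypothetical illustration only -- not the actual HSML schema from IEEE 2874.

@dataclass
class SpatialDomain:
    domain_id: str                       # persistent identity through time
    entity_type: str                     # person, place, thing, or system
    credentials: List[str]               # rights and permissions attached to the entity
    location: Dict[str, float]           # spatial grounding (e.g., lat/lon/altitude)
    context: Dict[str, str]              # machine-readable context about the entity
    nested_domains: List["SpatialDomain"] = field(default_factory=list)  # spaces within spaces

# A warehouse domain containing a forklift domain, as a toy example.
forklift = SpatialDomain(
    domain_id="did:example:forklift-42",
    entity_type="thing",
    credentials=["operate:zone-a"],
    location={"lat": 34.05, "lon": -118.25, "alt": 0.0},
    context={"status": "idle", "battery": "78%"},
)
warehouse = SpatialDomain(
    domain_id="did:example:warehouse-7",
    entity_type="place",
    credentials=["governed-by:site-policy-3"],
    location={"lat": 34.05, "lon": -118.25, "alt": 0.0},
    context={"use": "logistics"},
    nested_domains=[forklift],
)
print(warehouse.domain_id, "contains", [d.domain_id for d in warehouse.nested_domains])
```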
Speaker 1 – 13:08 Now with active inference and the spatial web, you have shared intelligence that's at the edge of everything. The processing is at the edge. It's not tied to a giant database where all of the queries have to be processed under the umbrella of that database holding the static data. So processing is done across the network, at the edge. And you have a self-evolving system, which includes perception and reasoning in the same framework, learning moment to moment and continuously updating its world model. This mimics biology while also enabling general intelligence. Ethical and regulatory compliance comes through active inference with the spatial web. Active inference supports ethical compliance by providing explicit decision-making explanations. So it's explainable AI, and this aligns with global transparency and ethical standards. Speaker 1 – 14:10 And with the Spatial Web Protocol baked into it, you have spatial web governance, you have an AIS rating system (an autonomous intelligent systems rating framework), and it models human knowledge. So with these standards coupled with this explainable AI, you have ethical compliance. Active inference explicitly encodes each decision's logic. It supports not only efficiency but compliance, and regulators increasingly demand transparency; active inference can supply transparent cause-and-effect rationales. Active inference is the brain's problem-solving strategy. So how does active inference work? Agents are continually predicting and adapting to their environment. They continuously learn from their environment, enabling real-time adaptation, unlike traditional AI, which relies on pre-programmed rules and static models. Active inference continuously updates its model based on new data, allowing it to adapt in real time. This gives us seamless real-time communication. Speaker 1 – 15:33 Agents share their knowledge and perspectives, learning from each other. And active inference agents excel at handling complex, uncertain environments by continuously minimizing uncertainty. This makes them suitable for applications where conditions are constantly changing. The free energy principle is how we learn. Neurons are constantly generating predictions to rationalize sensory input, and this theory, the free energy principle, neatly explains this remarkable feat of perception and learning that is ongoing in the brain. When something doesn't make sense, these mismatches drive learning by updating the brain's internal model of the world to minimize surprise, uncertainty, and free energy. You can think of free energy as the gap between what we know and expect to happen versus what we may find out happens instead. Free energy can be minimized by improving perception and by acting on the environment. Speaker 1 – 16:50 And we do this by gathering evidence to resolve that uncertainty, to close that gap, and to rationalize observations through dynamic beliefs about their underlying causes. In this way, we update the model over and over to improve our understanding of the world. Now, there are two different sides to inference. One of them is perceptual inference. Perception is Bayesian inference: a way to update our beliefs about something based on new evidence. So you have an initial belief about the likelihood of something being true, and then you receive new information that affects that belief. Instead of completely changing your belief, Bayesian inference helps you adjust it by considering both your initial belief and the strength of the new evidence. It's like updating your guess as you gather more clues rather than starting from scratch each time.
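As a toy numerical sketch of this kind of belief updating (it anticipates the diagnosis example in the next passage), here is a single Bayesian update in Python with made-up numbers: a prior belief, new evidence with an assumed likelihood, and the revised posterior.

```python
# Perceptual inference as a single Bayesian update, with invented numbers.
# prior: initial belief that a condition is present
# likelihoods: how probable the observed test result is under each hypothesis

prior_present = 0.10                  # initial belief: 10% chance the condition is present
p_positive_if_present = 0.90          # test sensitivity (assumed)
p_positive_if_absent = 0.05           # false positive rate (assumed)

# Observe a positive test result and apply Bayes' rule.
evidence = (p_positive_if_present * prior_present
            + p_positive_if_absent * (1 - prior_present))
posterior_present = p_positive_if_present * prior_present / evidence

print(f"belief before evidence: {prior_present:.2f}")
print(f"belief after a positive test: {posterior_present:.2f}")  # ~0.67
```

Note that the positive test shifts the belief from 10% to about 67%, not to certainty: the belief is adjusted, not replaced, exactly as described above.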
Speaker 1 – 17:54 And we can see an example of this with a medical diagnosis. A patient may have some symptoms and a medical history, but a doctor may say, let's get some X-rays, let's run some blood tests. That new evidence comes into play alongside what they know about the symptoms and the history, to help come to an understanding of what might be going on with that patient. Now, the other side of inference is called active inference. This involves planning and taking action. To gather this new evidence, we are constantly pinging our environment for more sensory data so that we can update our understanding of the world. This involves real-time learning: we're adjusting our beliefs based on this new sensory input. Speaker 1 – 18:52 And then we make decisions on the fly; both our learning and our decision making happen in real time. And if you look at how we learn from the time we're infants as we grow into adults, infants are constantly learning about their environment by gathering sensory information. It's why babies are always putting things in their mouths. Toddlers are always putting things in their mouths because they're hungry for that sensory data. It's a brand new world, and they're doing everything they can to learn about it. Now as we grow, we stop doing that, because we have already established an understanding of the world. So now we're looking for the anomalies, right? What doesn't fit into that model. We measure it against what we know to be true and seek to gather more information to make us more certain about what we know. Speaker 1 – 19:44 But it's this inherent drive within us. It's why the joke about pushing the button to find out what it does is hard to resist: we're constantly seeking to find out and to close the uncertainty around us. So there are a couple of aspects of this. One is active learning, which involves choosing what to learn next to get the most useful information, and the other is active selection, which is choosing the best option among many by comparing their predicted outcomes. You have this action-perception feedback loop: perceptual (Bayesian) inference and active inference unfold continuously and simultaneously, underlying a deep continuity between perception and action. They're two sides of the same coin, performing the same free-energy-minimization algorithm. So you're constantly acting on your environment to gather more sensory data and update your understanding. Speaker 1 – 20:52 And it's this constant feedback loop that just happens, and we're not even really aware it's happening. It's just how we're wired to learn. And that's how these agents will learn as well. What's interesting is that their sensory data is going to come through IoT and cameras, and their world model, their understanding of truth and reality, is going to come from the data they learned from as well as the programmed context within the spatial web and this universal domain graph of all things. And all of that is continually updating as well. So there's this constant action-perception feedback loop performed within these agents that will keep them on this continuous learning cycle within their world.
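Here is a minimal sketch of that action-perception loop under invented assumptions (a binary hidden state and a fixed sensor accuracy). It is not a full active inference implementation, just the perceive-update-act cycle in miniature.

```python
import random

# Minimal action-perception loop under invented assumptions: the agent
# repeatedly acts to gather sensory data and updates its belief about a
# hidden binary state it cannot observe directly.

TRUE_STATE = "overheating"   # hidden ground truth (e.g., a machine's condition)
SENSOR_ACCURACY = 0.8        # probability a sensor reading reports the true state

def act_and_sense() -> str:
    """Act on the environment (take a reading) and return a noisy observation."""
    if random.random() < SENSOR_ACCURACY:
        return TRUE_STATE
    return "normal" if TRUE_STATE == "overheating" else "overheating"

belief_hot = 0.5             # start maximally uncertain
for step in range(5):
    observation = act_and_sense()
    # Bayesian update of the belief given the noisy observation.
    like_hot = SENSOR_ACCURACY if observation == "overheating" else 1 - SENSOR_ACCURACY
    like_ok = SENSOR_ACCURACY if observation == "normal" else 1 - SENSOR_ACCURACY
    belief_hot = (like_hot * belief_hot) / (
        like_hot * belief_hot + like_ok * (1 - belief_hot)
    )
    print(f"step {step}: observed {observation!r}, belief(overheating) = {belief_hot:.2f}")
```

Each pass through the loop is one act-sense-update cycle; over a few cycles the belief converges toward the hidden state, which is the uncertainty-closing behaviour described above.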
Speaker 1 – 21:50 So a good example of active inference: take a wooden table. You might look at it and wonder, is that smooth or is that rough? You have an understanding of what a smooth table might look like. But the only way to really resolve that uncertainty is to act on the environment. You can act by touching the table to find out: you've gathered the sensory data and you understand that yes, indeed, it is smooth, or no, it's actually rough. Then you update your understanding of that table and add it to your understanding of the tables you've encountered so far. Another example of active inference, a predictive mind in action, would be crossing the street. Speaker 1 – 22:42 I'm sure we all remember being told in grade school that safety in crossing the street means stop, look, and listen. And why are you stopping, looking, and listening? Because you're gathering sensory data so that you can make proper decisions. Before you cross, your brain is predicting vehicle movement based on sound and visual cues. You continuously update your predictions based on real-time sensory information. You predict a safe gap in traffic to determine when to cross. Vehicle and pedestrian movement might cause you to adjust your pace and direction. And you can react quickly to unexpected changes, minimizing surprise for a safe crossing. You didn't need to be trained on thousands of street crossings; in fact, that wouldn't have helped you here. Speaker 1 – 23:40 You needed to be able to react in the moment to the data available to you in that moment, and you inferred in real time based on what was happening right then. Active inference also produces generative models. Generative models within active inference proactively simulate future scenarios to inform decisions. The significance of this is that it provides predictive capability based on logical reasoning and causation. These generative models function like an internal simulator, projecting potential future outcomes. If the real outcome deviates, the model updates to minimize the discrepancy, called free energy. This is an autonomous, self-correcting mechanism, and it's good for dynamic real-time tasks. An example would be an airline system predicting disruptions by modeling various scenarios, proactively adjusting logistics, and then refining its model based on success or failure. The spatial web for secure, distributed intelligence: Speaker 1 – 24:52 Decentralized AI takes processing to the edge. You have a distribution of processing and power, so super-powered GPUs are not required. The processing takes place on local devices throughout the network, and because of this you have faster response times and reduced network congestion. It uses real, live data: real-time data from IoT sensors, machines, and the ever-changing context embedded in the network. It eliminates the need to tether to a giant database. These agents don't have to check in with the mothership to process a query; they can do it out in the wild. So active inference agents throughout the network learn and adapt from their own frame of reference within the network, and this enables enhanced data privacy by minimizing centralized storage, and it minimizes complexity. It uses the right data, in the moment, for the task at hand. Speaker 1 – 25:57 And HSTP has natural guardrails baked in. The guardrails provided by HSTP in the spatial web will create a secure, ethical, and user-centric environment for AI applications, fostering trust and reliability in digital interactions.
This enables data privacy and sovereignty, transparency and explainability, security and authentication, bias mitigation, and ethical AI guidelines. HSML is a common language for everything. All of these emerging technologies that have been rising at the same time are about to be able to converge across networks through HSML, because it becomes the bridge language so that everything can become interoperable. This will provide unprecedented levels of interoperability between all current, emerging, and legacy technologies. It's a programming language bridging communication between people, places, and things, laying the foundation for a real-time world model for autonomous systems. What it results in is ever-evolving collective intelligence. Speaker 1 – 27:14 Intelligence at scale: real-time, continuous communication between agents, sharing beliefs, coordinating and collaborating with each other. Intelligent agents will represent people from all over the world, sharing their individual and specialized knowledge, all coming together on a common network and speaking a common language, HSML: efficient and powerful cross-communication to perform tasks, regulate systems, and address problems in real time. And it scales up and grows in tandem with humans. Now, I'm going to go through these next slides pretty quickly, but I wanted to touch on some of these things a little further before we end this presentation, just to give you a deeper understanding. So, current AI limitations. We've talked about this a lot. Static learning: deep learning models require retraining for new data, and they struggle to adapt to new or unexpected situations. Speaker 1 – 28:17 They have high data demands, depending on massive data sets for training, and they're typically called black boxes: they struggle with explainability, leading to trust and transparency issues. Deep learning limitations: they recognize patterns, but they struggle with conceptual understanding. They recognize correlations without understanding relationships, and they recognize patterns without understanding the underlying meaning. Now, active inference solves these problems. Adaptive learning: agents continuously update their models in real time, eliminating the need for retraining. Efficiency: they require less data for accurate predictions, reducing resource demands. Explainability: more transparent decision making, with built-in confidence measures and contextual awareness. They're capable of causal reasoning and react effectively to new and unforeseen scenarios. And with active inference, you also get energy efficiency and sustainability. Speaker 1 – 29:33 With LLMs, we've heard all of these calls for restarting nuclear power plants, because they require near-nuclear-scale computing resources, while active inference operates with a natural efficiency similar to the human brain. Active inference is ideal for sustainable and scalable AI deployment across industries. There are lower data requirements: it learns effectively from smaller data samples, on the order of 90% less data, reducing data collection costs. It optimizes energy consumption by aligning processes with real-time needs, and it reduces energy-intensive operations without compromising the output. So you have cost savings and a smaller environmental footprint. Active inference is explainable. So what is explainable AI, and why does it matter?
Explainable AI, sometimes referred to as XAI, refers to autonomous intelligence systems designed to clearly articulate the reasoning behind their decisions, clarifying why they make specific choices and allowing humans to understand and trust the outcomes. Speaker 1 – 30:52 Active inference is fundamentally explainable. Active inference's structured generative models ensure internal reasoning processes are explicit and understandable, specifying each variable and its causal impact. They have the capacity for introspection, revealing how their decisions are formed. This contrasts with black-box systems in the deep learning sphere, which hide their internal logic. And it aligns with ethical standards that require traceable reasoning. Karl Friston's team at VERSES put out a paper back in 2023 called "Designing Explainable Artificial Intelligence with Active Inference: A Framework for Transparent Introspection and Decision-Making," so you can explore this further by researching that document. Active inference and RGMs. What are RGMs? Renormalizing generative models. They're a unified, scale-free framework for AI capable of reasoning, learning, and decision making in one framework. RGMs are a new class of AI models based on active inference. Speaker 1 – 32:11 They use renormalization to simplify complex data into hierarchical structures, and they operate on the free energy principle to minimize uncertainty and adapt in real time. This represents a leap forward from traditional deep learning methods: it reduces data requirements, improves efficiency, and uses one singular model for both perception and planning. So what is the science behind RGMs? Renormalization and multi-scale learning: renormalization is a technique from physics used to simplify complex systems while maintaining consistency at different scales. RGMs break down data into multiscale representations. They generalize well across multiple tasks, handling small- and large-scale data efficiently. So an RGM can understand a complex system on a local level as well as a global level, and at all scales in between, and it operates on a hierarchical structure. Speaker 1 – 33:21 It's an actively learning, scalable architecture over space and time, similar to human thinking, from small details to larger patterns. I'll give you an example of this. When you're driving a car, you're not constantly thinking about all the processes going on in that car's operation. You're just calling it a car. If you're driving the car, you just need to understand it's a car. You're not sitting there thinking about the engine and the brake system and the injection system and all these different systems happening at the same time. You're just driving it. That's the concept of renormalization: being able to simplify complex systems into chunks of understanding for whatever the application is at the time. Speaker 1 – 34:15 But at the same time, you're aware of the local, smaller-scale and the large-scale aspects of that system. You're just chunking it into whatever concept you need for the moment, to be able to process whatever understanding about it you need in the moment.
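To give a feel for the coarse-graining intuition behind renormalization, here is a generic Python sketch that represents the same made-up signal at several nested scales by averaging neighbouring values. This is only an illustration of the "chunking" idea, not the actual RGM algorithm described in the VERSES papers.

```python
from typing import List

# Generic coarse-graining illustration: the same data viewed at nested scales.
# Not the RGM algorithm itself, only the intuition of chunking detail into
# larger units while keeping every level available.

def coarse_grain(values: List[float], block: int = 2) -> List[float]:
    """Average non-overlapping blocks of values to form the next (coarser) scale."""
    return [
        sum(values[i:i + block]) / len(values[i:i + block])
        for i in range(0, len(values), block)
    ]

# Fine-scale observations (made-up sensor readings).
fine = [1.0, 1.2, 0.9, 1.1, 4.8, 5.2, 5.0, 4.9]

levels = [fine]
while len(levels[-1]) > 1:
    levels.append(coarse_grain(levels[-1]))

for depth, level in enumerate(levels):
    print(f"scale {depth}: {[round(v, 2) for v in level]}")
# Scale 0 keeps the raw detail; higher scales summarize it into coarser chunks,
# the way a driver thinks "car" without tracking every subsystem.
```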
So, agents and agency: what is agency? This is really important to understand, because right now we hear a lot about agents and agentic systems, and it's all about deep learning. But these deep learning agents, these LLMs, really don't have agency. Agency is the ability to perceive, decide, and act autonomously in an environment. The key characteristics are causal understanding (being able to understand the why or the how, the cause behind something), real-time adaptability, and goal-directed behavior. So LLMs do not possess true agency. Speaker 1 – 35:20 They generate outputs based on correlations, not causal reasoning. And agency is very important for decision making. Consider a simple statement like "the door is open." What does this mean? A true intelligent agent must be able to infer why the door is open before acting on that knowledge. The reason might be that someone is waiting to enter, or the wind blew it open, or it was accidentally left open, or maybe there's an intruder. Each of these scenarios requires a different response, and if you don't understand the why behind it, you can't make an informed decision. So an agent without causal reasoning cannot make informed decisions, and this is the fundamental limitation of LLMs: they can recognize the phrase, but they don't grasp the underlying implications. Speaker 1 – 36:24 Active inference agents can infer causes and act accordingly. Risk mitigation and causal reasoning: only causal reasoning provides reliability in novel or complex real-world scenarios. This matters because it enables proactive rather than reactive decision making. Agents learn from interactions, they develop structured world models, and they can simulate and predict consequences before acting. This enables intelligent, context-aware decision making that's not possible with LLMs. So how do active inference agents handle uncertainty? Uncertainty is one of the biggest challenges for AI. Active inference manages it using something called partially observable Markov decision processes, and there are four components to this. One is a situational belief based on prior states: what you know up to this point, based on all of your previous interactions. Then a current observation of the likelihoods of those states. Then a state-transition belief. Speaker 1 – 37:47 That is, expected outcomes post-action. And then you decide what action to take to achieve the desired results. So if you look at these two images: say you're about to take out a boat, or you have a shipping container that needs to move from one place to another, or a cruise ship like the one in the lower picture, but you see that this is the weather outside. Your situational belief based on prior states tells you it could be dangerous out there. If there's a storm at sea, there are likely to be rough seas, and that could pose a lot of dangers. So your current observation of the likelihood of those states says this could be very dangerous. And then the expected outcome post-action: well, you have a couple of options. Speaker 1 – 38:41 If you go out there, it could spell trouble; or you could take a different route and go around the storm; or you could wait until the storm passes. So then you decide what action to take to achieve the desired result. These partially observable Markov decision processes allow AI agents to make optimal decisions even when they have incomplete information. Exploration versus exploitation: agents must balance learning new information with making optimal immediate decisions. An example of this would be an AI-powered logistics system adjusting shipping routes dynamically in response to real-time weather and traffic data.
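Here is a hedged, toy sketch of those four ingredients (prior belief, observation likelihood, expected outcomes per action, and action selection) applied to the storm-at-sea example, using invented numbers; a real POMDP or active inference formulation would be considerably richer.

```python
# Toy decision-under-uncertainty sketch with invented numbers; a real POMDP /
# active inference formulation is far richer than this.

# 1. Situational belief based on prior states: how likely is severe weather?
prior_severe = 0.4

# 2. Current observation likelihood: dark clouds are seen; how probable is that
#    observation under each weather state?
p_clouds_if_severe = 0.9
p_clouds_if_mild = 0.3

# Update the belief with the observation (Bayes' rule).
evidence = p_clouds_if_severe * prior_severe + p_clouds_if_mild * (1 - prior_severe)
belief_severe = p_clouds_if_severe * prior_severe / evidence

# 3. Transition/outcome beliefs: expected cost of each action under each state
#    (arbitrary illustrative units; lower is better).
expected_cost = {
    "sail now":      belief_severe * 100 + (1 - belief_severe) * 0,
    "reroute":       belief_severe * 20  + (1 - belief_severe) * 15,
    "wait for calm": belief_severe * 10  + (1 - belief_severe) * 25,
}

# 4. Decide what action to take to achieve the desired result.
best_action = min(expected_cost, key=expected_cost.get)
print(f"belief(severe weather) = {belief_severe:.2f}")
for action, cost in expected_cost.items():
    print(f"  {action:>13}: expected cost {cost:.1f}")
print("chosen action:", best_action)
```

Balancing exploration and exploitation would add a term that also rewards actions expected to reduce uncertainty (for example, waiting for an updated forecast), which is the role expected free energy plays in active inference.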
So this is the last section, and we're going to talk about the needs and challenges of mission-critical operations for a minute. Speaker 1 – 39:37 The top five mission-critical operation challenges: delayed responsiveness, the inability of traditional systems to adapt in real time to rapid operational changes; system downtime and equipment failures, a high incidence of unforeseen failures increasing costs and disrupting operations; poor decision making under uncertainty, the difficulty of making accurate, context-driven decisions when facing complexity and incomplete information; limited coordination at scale, where centralized systems struggle to manage distributed autonomous processes across complex operational environments; and high energy and computational costs, excessive resource consumption by traditional centralized or data-intensive processing approaches, impacting sustainability and cost efficiency. Now, decision making through Active Inference AI offers you something different. You have data-driven insights: it provides predictive analytics that guide strategic choices. Proactive risk management: you can anticipate and mitigate potential disruptions in operations. Scenario planning: you can simulate different outcomes to inform better decision making. Enhanced global collaboration: you have interconnected operations. Speaker 1 – 41:08 It promotes data sharing, intelligence, and collaboration across business units and systems. Synchronized AI agents ensure consistent and effective responses through a shared understanding of global operations. Scalability supports scaling operations globally without significant retraining. The outcome is better global coordination and problem solving. So why does all of this matter? The world is moving towards adaptive and self-organizing intelligent systems. Agile operations need AI that adapts in real time. Resilient systems require causal modeling and situational awareness. Competitive advantage will depend on trusted intelligence at scale. Intelligent, adaptive systems are becoming crucial for operational resilience, and adopting these technologies now positions any company ahead of industry shifts, offering both immediate and future strategic advantages. Now, preparing your enterprise for the transition: in the future, agents will not merely communicate. Speaker 1 – 42:22 They will model, reason, and act in real time across networks, grounded in the same shared context as the rest of the intelligent infrastructure. Organizations that adopt these frameworks first will lead the future market. Enterprises must assess their current agent architectures, and transitioning to active inference and spatial web infrastructures is essential for a competitive advantage. And you can begin now. You can develop phased adoption plans for Active Inference AI and, ultimately, spatial web technologies as they start to roll out throughout the rest of this year, and you can begin pilot projects leveraging active inference and spatial web concepts. I'm here to help your organization successfully navigate and implement these transformative technologies. You can contact me directly for personalized consultation and guidance. I'm happy to facilitate direct introductions with VERSES AI, and I'm also available to conduct custom in-person workshops for your enterprise organization.
Speaker 1 – 43:34 Feel free to reach out to me directly to start the conversation. I do have courses available on Learning Lab Central, so if you're new to all of this and you want to take a real deep dive into the various aspects of these technologies, I have an advanced certification program wrapped around seven courses, but you can take them individually to your liking. It's online and at your own pace, so feel free to check it out. And if you haven't already, join Learning Lab Central. It's a global education hub, hyper-focused on Active Inference and Spatial Web technologies and the convergence of all of these technologies in the spatial web. Speaker 1 – 44:15 I created it so that we would have a dedicated space to cut out all of the noise of generative AI and really be able to focus and collaborate around these technologies. There's also a menu section called Resource Locker, where you'll find a ton of scientific research papers, videos, interviews, articles, and all kinds of education resources, all free on the platform as well.