VERSES AI's New Genius™ Platform Delivers Far More Performance than OpenAI's Most Advanced Model at a Fraction of…
The Spatial Web AI podcast, hosted by Denise Holt, recently featured Gabriel Rene, CEO of VERSES AI. They discussed the evolution of the internet into the Spatial Web, essentially Web 3.0, integrating various technologies like AI, AR, VR, digital twins, distributed ledger technology, and robotics. This new era is marked by the development of HSTP (Hyperspace Transaction Protocol) and HSML (Hyperspace Modeling Language), aiming to add context to every element in both real and virtual spaces.
A significant portion of the discussion centered around Active Inference AI, based on the free energy principle. Dr. Karl Friston, a leading neuroscientist, heads the VERSES AI research team. This new AI approach is expected to revolutionize how artificial intelligence operates, learning and adapting in real-time, a significant shift from traditional AI models.
HSTP and HSML play crucial roles in the Spatial Web. HSML, akin to HTML in purpose, allows for the contextual and multidimensional representation of data. It moves beyond the limitations of the current web, enabling objects and spaces to carry a wealth of interconnected, dynamic information. HSTP, on the other hand, focuses on stateful interactions within this environment, facilitating complex transactions and updates in the hyperspace.
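As a loose mental model of that division of labor, the sketch below pictures HSML-style records as contextual entities and HSTP as stateful queries and transactions over them. This is purely illustrative: HSML's real grammar is being standardized through the IEEE Spatial Web working group, and every name here is invented.

```python
from dataclasses import dataclass, field
from typing import Any

@dataclass
class Entity:
    """A contextual record for a person, place, or thing (HSML-like in spirit)."""
    entity_id: str
    properties: dict[str, Any] = field(default_factory=dict)  # e.g. location, owner
    links: list[str] = field(default_factory=list)            # references to other entities

class Space:
    """A stateful registry of entities, in the spirit of HSTP transactions."""

    def __init__(self) -> None:
        self.entities: dict[str, Entity] = {}

    def register(self, entity: Entity) -> None:
        self.entities[entity.entity_id] = entity

    def query(self, **criteria: Any) -> list[Entity]:
        # Multidimensional query: match on any declared property.
        return [e for e in self.entities.values()
                if all(e.properties.get(k) == v for k, v in criteria.items())]

    def transact(self, entity_id: str, **updates: Any) -> None:
        # A transaction changes shared state, unlike a stateless HTTP transfer.
        self.entities[entity_id].properties.update(updates)

space = Space()
space.register(Entity("plant-1", {"kind": "monstera", "room": "office"}))
space.transact("plant-1", room="living-room")        # state update
print(space.query(room="living-room")[0].entity_id)  # -> plant-1
```

The key contrast with the classic web stack is the `transact` call: an HTTP GET leaves the world unchanged, while a hyperspace transaction is expected to update shared state that every other participant can then query.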
Gabriel Rene discussed the societal and technological impacts of these innovations. The Spatial Web, powered by HSTP and HSML, promises a more interconnected and context-aware digital environment. It could lead to more personalized and efficient interactions with technology, reshaping everything from personal computing to industrial processes.
The conversation also touched upon challenges, including data privacy and the ethical use of AI. Rene emphasized the importance of user-centered design and control over personal data. The discussion highlighted the potential of these technologies to create a more personalized, efficient, and ethically responsible digital future.
This podcast episode offers a deep dive into the future of AI and the internet, exploring the cutting-edge developments at VERSES AI and the broader implications for society and technology. The conversation with Gabriel Rene illuminated the exciting possibilities and challenges of this rapidly evolving field.
Chapters:
00:00:00 – Introduction to the Podcast
00:02:30 – Evolution of the Internet to Spatial Web
00:06:15 – Overview of VERSES AI
00:10:45 – Active Inference AI Explained
00:15:30 – Karl Friston’s Contribution
00:19:50 – Introduction to HSTP and HSML
00:24:20 – The Role of HSML in Spatial Web
00:28:00 – Significance of HSTP
00:32:15 – Societal Impact of Spatial Web Innovations
00:37:00 – Addressing Challenges and Ethical Considerations
00:41:45 – The Future of AI and the Spatial Web
00:45:30 – Conclusion and Final Thoughts
Special thanks to Gabriel Rene for being on our show!
And, of course, if you'd like to know more about The Spatial Web, I highly recommend an awesome introductory best-selling book written by Gabriel and his VERSES co-founder, Dan Mapes, titled "The Spatial Web," with a dedication "to all future generations."
Visit the Spatial Web Foundation and VERSES AI to learn more.
All content for Spatial Web AI is independently created by me, Denise Holt.
Empower me to continue producing the content you love, as we expand our shared knowledge together. Become part of this movement, and join my Substack Community for early and behind-the-scenes access to the most cutting-edge AI news and information.
00:12
Speaker 1
Hey guys. I'd like to welcome you to a brand new show. This is the Spatial Web AI podcast. And here you're going to learn everything you need to know about the next evolution of the Internet, called the Spatial Web. It's basically Web 3.0, bringing all of the Web3 technologies together with smart technologies and extended reality technologies, like artificial intelligence, augmented reality, virtual reality, digital twins, distributed ledger technology and robotics, all of it combined to become interoperable on this new network of everything. There's a company called VERSES AI. They have created the protocol for this next-level Internet. So instead of HTTP and HTML, this is HSTP, the Hyperspace Transaction Protocol. And the programming language is HSML, the Hyperspace Modeling Language. And what that programming language does is it bakes context into every single person, place or thing in any space, both real and virtual.
01:35
Speaker 1
And it tracks all of the changing conditions and parameters around all of them, situationally, even, in any dimension over time. So it's pretty powerful. It's going to usher in this entirely new method of artificial intelligence called Active Inference AI, based on the free energy principle. Dr. Karl Friston, who is the number one neuroscientist in the world, is leading the VERSES AI research team. And it's pretty exciting, the future that's coming. Today I'm so excited. I had the opportunity to interview Gabriel Rene. He is the CEO and co-founder of VERSES AI. And we had a pretty interesting chat. So I'd like to just turn it over to that interview right now. And thank you so much for being here. I'm excited for you to come on this journey with me.
02:44
Speaker 1
I’d like to welcome to our show today Gabriel Rene, CEO of VERSES AI and The Executive Director of The Spatial Web Foundation and The Chairman of The IEEE Spatial Web Protocol Working Group. Gabriel, welcome to our show.
03:02
Speaker 2
Thank you, Denise. It’s great to be here.
03:04
Speaker 1
Yeah, it’s so nice to have you. So before we jump into today, we’re going to be talking about the Spatial Web 3.0, the Spatial Web Protocol and next level artificial intelligence, all of it wrapped into one. But before we start, I’d like to hear a little bit more about you, maybe tell our viewers a little bit about your background and kind of what led you into the position and the space you’re in right now.
03:36
Speaker 2
Okay, great. I live here in California. I've grown up here my whole life. I'm 49 years old. I started working in tech when I was 19. I started working at an advanced R&D lab called CyberLab in the early '90s. And CyberLab is kind of an advanced, outsourced sort of research and development lab run by Dan Mapes, who's my current co-founder and partner in VERSES today. So, a 30-plus-year relationship. So Dan started CyberLab. I was like the young kid who kind of showed up there. This is really during the dawn of the cyberpunk era. And the cyberpunk era, for those who don't know, is kind of like, if you imagine about five or ten years before The Matrix emerges, you have this culture that is really reading these sci-fi novels about technology and society.
04:35
Speaker 2
And so everything from Blade Runner to Neuromancer to Johnny Mnemonic, some of these popular books that have sort of posed these notions of, like, a metaverse, where there's AIs and holograms and flying cars and robots and teleportation, all these sort of superpowers in these horrible dystopic nightmare futures. So, like, the coolest tech ever in the worst possible situation. So great for science fiction. What was cool about that time is we could tell we were building all the precursors of that technology. So the cyberpunk, cyber-hacker era was really about planting these early seeds. And so I'm a very young, impressionable young man. I'm working in this environment with PhDs and dreadlocked skateboarder hackers and this entire sort of range of people playing with these technologies. My exposure to virtual reality is, like, mid-'90s with Dr.
05:39
Speaker 2
Tom Furness and the work that they were doing out of the Air Force. So for anyone here who's seen Iron Man, the reason that there's that tie to the Air Force, and it's essentially he's sort of piloting this personal suit instead of a jet, is because the heads-up displays and augmented reality were invented by the work done by the Air Force and the University of Washington, and virtual reality was flight simulation. So both AR and VR come out of that. And Dr. Tom Furness, who's an advisor to VERSES today, was someone I met as a young man, and I got to experience some of these early technologies. Dan Mapes had a background in AI. He'd gone to Berkeley. As I evolved my career, I got involved in telecom and network optimization, and that led to the Internet of Things.
06:30
Speaker 2
And around 2016, 2017, I started noticing what was, you could argue, the convergence of all those technologies that seemed to be the precursors for this next generation, this immersive, intelligent version of the web. Right? And so I called up Dan Mapes. I hadn't talked to him in years, and I said, hey, are you seeing this? You know, deep learning had emerged about four or five years earlier with neural nets. Oculus had been purchased by Facebook. There were companies called Layar and stuff that were doing some pretty cool stuff with augmented reality in cities, and Internet of Things capabilities were starting to really emerge. 5G was going to be the baseline for this. So I was in telecom at the time, and all the telcos understood that 5G was actually not about humans as the main users.
07:24
Speaker 2
It was going to be about autonomous vehicles and drones and holographic data sets. So that $200 billion infrastructure investment in 5G was based on the research that every telco in the world made that said, oh, this is the beginning of, really, the sci-fi era. I quit my job after Dan and I talked, and we said, we're going to build something at the intersection of that convergence of these exponential emerging technologies. And VERSES was spawned out of that intention. So, sort of a 30-year dream around what would happen when all these really powerful, cool technologies were able to come together, along with the desire to have an outcome where the world didn't end up in a dystopic nightmare.
08:07
Speaker 1
Right?
08:08
Speaker 2
Yeah. We want the cool tech; we don't want that horrible future. So those were the two drives, really, that then started VERSES, when we first began researching around it in 2017.
08:22
Speaker 1
That's really cool. It's funny, because I knew Dan back in 2018, and I remember him telling me when you guys had started VERSES. And it was so interesting to think that everything everybody wanted to build was there, right? But the basic infrastructure was missing. So that's really interesting to me. And I think we're seeing that ever since then: you've got all these Web3 technologies, the XR technologies, but everything is siloed because they don't connect or communicate together. Why don't we talk a little bit about the Spatial Web Protocol and how that is going to bridge all of these technologies together?
09:16
Speaker 2
Yes, I guess, for starters, what is the Spatial Web? So as we were trying to figure out that intersection between those technologies: how would AI and the Internet of Things, like many different devices, from cars and robots to cameras to smart home sensors and augmented reality headsets and virtual reality projected content, work together? And how would you port between virtual worlds, and how would you put three-dimensional objects into environments? How would you permission who had access to that content, that information, whether they were human or algorithmic sort of actors? So the problem space was very large and almost impossible to describe to anyone.
10:03
Speaker 2
But because we had this long history of working with all these technologies over decades, we understood, also coming from an R&D background, the promise and, really, the limitations. Like, R&D is all about basically identifying the limitations relative to the promise and then trying to overcome those.
10:20
Speaker 1
Yeah.
10:21
Speaker 2
So what emerged was this notion that this third version of the web was coming. There had been an earlier attempt, what was kind of classically called Web 3.0, from Tim Berners-Lee, who is the godfather and creator of the original World Wide Web protocols, HTML and HTTP, and is the director of the World Wide Web Foundation today. A knight, no less, knighted by the Queen. Yes, Sir Tim Berners-Lee had this idea in the kind of mid-2000s called the Semantic Web. And the idea was that the words on the pages that we have today, the ones that have links, implied that there's some additional information behind them, and you can go see what that link is referencing, but the rest of the words are just text.
11:16
Speaker 2
So what if there was a way to make all of the words on every web page have their meaning, their semantics? Meaning, like, if it was referencing Gabriel, and this was in the context of an article about me, then you would be able to know who that was referencing. Or if it was talking about a greenhouse, it would define whether it was a house that was green or whether it was a glass house for plants. And so embedding the semantics, the meaning, into the web was the concept. And a bunch of cool technologies were emerging to be able to build these kinds of graphs that would embed meaning into the databases. And it failed.
11:55
Speaker 2
It just didn't catch on, in part because all we wanted to do was go to Google and let Google just take us to a website, and we didn't care what the meaning was. It just wasn't important enough. I was quite a fan of that idea at the time and had played around with those technologies. Fast forward to the late 2010s now, and the idea was, well, like that sort of metaverse and virtual world narrative, this sort of synthesis of gaming technologies and networking technologies: instead of a web of pages, we'd have a web of spaces. And so these spaces would then be inhabited by robots and artificial intelligence and holographic data and information and networks of sensors that could all share and understand context and meaning.
12:44
Speaker 2
And so those semantics needed to be embedded in the world in a way that machines could understand, not just embedded on pages, which was sort of an evolution of that concept. So we started using the term Web 3.0, and we started using the term the Spatial Web, to define this next era of the web, where all those technologies would start to work in kind of a single network, as a sort of evolution of the World Wide Web into the world itself.
13:12
Speaker 1
Yeah, very interesting. So that then kind of brings us to the protocol: HSTP, the Hyperspace Transaction Protocol, and HSML, the Hyperspace Modeling Language. So what you're talking about, basically, is the HSML and how it informs the HSTP to provide all of those contextual cues for everything in any space over time. Correct?
13:42
Speaker 2
That’s a great way of describing it. I love that.
13:45
Speaker 1
So why don't you maybe talk a little bit about that: what kind of context the HSML provides, and how that will work with what you were talking about, the permissioned spaces, really kind of baking security into the structure of the Spatial Web. Maybe just give the viewers a little bit of insight into that.
14:14
Speaker 2
Yes. So I guess HTML more or less enabled the ability to define, in a standardized way, the layout of a page: header, body, footer. The way that we structured websites was based on a standardized way of expressing how the representation on those pages should go. And then links, which could connect to other pages, would use HTTP to kind of connect those addresses between different pages. So we created addresses for pages, we created ways of more or less marking up those pages, hence Hypertext Markup Language, and then a way of linking those and being able to sort of manage the changes of shifting between one page and another, which you can almost kind of think of as teleporting, even though the page just comes to you.
15:16
Speaker 2
So the idea was, as a nod to those early web protocols, to take the concept of hypertext, which was the ability to put a link into text that referenced some other location. And this is kind of an evolution. If you think of the Dewey Decimal System in a library, there's an addressing system for the library. And you can go to the librarian, who is Google, and say, hey, I'm looking for a book on psychology. And then they can direct you to that. And you can kind of go in and find it by section, by topic, by genre, and then alphabetically find the book that you might be looking for. That addressing system is very cool for trying to find a book.
16:05
Speaker 2
What the World Wide Web did is take that library and stick it in the cloud, in cyberspace. And instead of having an addressing system for locations, for sections, for books, it's like you could go to the individual page, go to one word, and then link that to another page in another book across the library. So this library metaphor, a way of locating information in a library, is extended to the concept of the World Wide Web. But instead of just trying to find the location of a book, you're locating a word inside that book on a specific page, and that's linking to another page somewhere else. And so HTTP was a way of essentially doing that.
16:53
Speaker 2
And then the domains that had existed before the World Wide Web, which had actually emerged because we'd been using them for email addresses and other things, allowed us to essentially have this sort of addressing system. So: a replacement of the Dewey Decimal System, you replace the librarian with Google, you get super granular, so you can get to a single word, and you can link that to another reference somewhere else. That's kind of the World Wide Web in a nutshell. HTML let you format those pages in a very particular way. Now, when you take that and you extend that concept, which we did, to Hyperspace Modeling Language instead of Hypertext Markup Language, then you make every space linkable. You make every object in the world, every object around you, able to have a reference to more information about it. Like the plant.
17:44
Speaker 2
You’ve got a planter behind you. You’ve got a plant, right? What if I want to know information about that plant? Well, today I actually can do something pretty cool. I can take Google Lens and I can scan that, and it’ll tell me what kind of plant it is. Do you know what kind of plant that is?
17:58
Speaker 1
Oh, behind me?
18:00
Speaker 2
Yes.
18:01
Speaker 1
It’s not the plant I thought it was going to be. I don’t know the actual plant name for it, but they’re like the Swiss cheese ones with the big leaves with the holes in them.
18:14
Speaker 2
With the holes, yeah.
18:15
Speaker 1
What I thought I was buying, none of these leaves have holes in them.
18:19
Speaker 2
You got duped. I did. Well, Google, with Google Lens, can give you sort of a visual form of search, which now, we're starting to see, ChatGPT-4 is going to take to the next level here. But a visual form of search can tell you something about, generally, what is this plant? What it's not going to be able to do is tell me anything about that plant. When was the last time it was watered? When did you buy it? How much is it? Who owns it? How hot is the room right now? How much moisture does it have right now? All of the other dimensions, which we call hyperspatial dimensions, are all of the attributes relative to any object.
19:01
Speaker 2
So, if you can kind of think of this: every object in the Spatial Web becomes an object that can have an infinite number of references and information linked to it. And then if you were to look at it with some augmented reality glasses, or if I'm a camera of an AI looking at them, maybe I do have that information. Or I can combine it with other sensors in the environment, or sensors you have in the soil, and start to get a more holistic understanding of the plant. So how would a robot know when to water that plant if you left the house and were gone for two weeks? It would have to keep track and monitor. You have three options. You could put it on a rule and say, well, just go do it every four days.
19:45
Speaker 2
You could use artificial intelligence, in the sort of form of a neural net, to be able to predict based on a set of criteria, and then it could do that. Or you could use this sort of approach and let it figure it out based on the feedback that the plant is giving it, based on things it can see, things it can test in the air, in the water, in the soil and nutrients, or any other sort of factors. Right? If the power goes out in the house, if there's no water available in the house, you want an AI that can ultimately adapt based on context.
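The three strategies Rene sketches for the plant-watering agent, a fixed rule, a trained predictor, and live adaptation to feedback, can be contrasted in a toy sketch. Everything below is invented for illustration and is not VERSES code; the predictor is a plain weighted score standing in for a trained model.

```python
def rule_based(day: int) -> bool:
    # Option 1: a fixed rule that ignores context entirely.
    return day % 4 == 0

def predictive(features: list[float], weights: list[float]) -> bool:
    # Option 2: a trained predictor scores fixed criteria; it cannot
    # adapt if conditions change after training.
    score = sum(f * w for f, w in zip(features, weights))
    return score > 0.5

def adaptive(soil_moisture: float, water_available: bool) -> bool:
    # Option 3: act on live feedback from the environment, so the agent
    # still behaves sensibly when water is unavailable.
    return water_available and soil_moisture < 0.3

print(rule_based(8))                         # -> True (every 4th day, regardless)
print(adaptive(0.1, water_available=True))   # -> True (dry soil, water on hand)
print(adaptive(0.1, water_available=False))  # -> False (adapts to the outage)
```

The point of the contrast is the third signature: only the adaptive agent takes the current state of the world as input, which is the property the conversation attributes to an active-inference style of AI.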
20:22
Speaker 2
So if you can start to embed more information into the objects around us, these sort of digital twins, a sort of wiki record for everything, then different participants can essentially read and write to the objects in the world around us in the same way we read and write to web pages now. Instead of the pages and the content on the pages, it becomes our spaces and the context and information of all of the objects and activities and people, and even virtual sort of objects, in those spaces. And that's what HSML essentially lets you do: model that in a particular kind of language, where you can model any sort of object or user or policy or activity, or the state or properties of anything. What color, what shape, what size, what weight, what texture.
21:10
Speaker 2
It seems crazy until you think of what game engines can do now, essentially rendering hyperreal things with physics and water flowing. So all of that can be expressed in computers. We created a language to do that. That's HSML. And HSTP essentially is a communication protocol, but it's what we call a stateful protocol. That's why it's called the Hyperspace Transaction Protocol. Now, HTTP is the Hypertext Transfer Protocol; it's really just sort of transferring information between two points. Hyperspace Transaction Protocol suggests that there's a change of state. HSTP basically says, what's all the information in this environment, let's say in this particular space? It can query multidimensionally. So it can query not just, like, weight and size and temperature, but it can query ownership or mood or any sort of dimension.
22:10
Speaker 2
One can imagine that if there's data relative to that, coming from sensors, it can capture it, embedded in a link, just as a coordinate position relative to that plant or relative to you. And just like you could pull that information off a Wikipedia page, now you could do it off any point in space or any volume of space. And then you could say, okay, well, relative to the rights or permissions I have, can I change anything in this environment? Its color, texture, shape, price, volume, size, weight, location. So then you have to kind of say, what am I allowed to do? And then when you do it, it updates that state. And so now, say you move the plant, or the robot decided to move the plant.
22:48
Speaker 2
It could update that, and you could actually ask your house, hey, where’s that plant located? It’s like, oh, it moved it to the living room so it could get more light. Because we figured out that it wasn’t getting enough light, and that’s why it’s not showing those little holes in it, because that’s what’s required. I don’t know if that’s true.
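The flow just described, check an actor's rights on a given dimension, apply the change, and let anyone query the updated state, can be pictured with a toy sketch. The data layout and function names below are invented for illustration and bear no relation to any actual HSTP wire format.

```python
# Shared state: each entity carries contextual properties.
state = {"plant-1": {"location": "office", "owner": "denise"}}

# Per-actor permissions, scoped per entity and per dimension:
# the robot may change the plant's location, and nothing else.
permissions = {"robot-1": {"plant-1": {"location"}}}

def update(actor: str, entity: str, prop: str, value: str) -> bool:
    """Apply a state change only if the actor holds rights on that dimension."""
    if prop in permissions.get(actor, {}).get(entity, set()):
        state[entity][prop] = value  # the transaction updates shared state
        return True
    return False

update("robot-1", "plant-1", "location", "living-room")  # allowed
update("robot-1", "plant-1", "owner", "robot-1")         # denied
print(state["plant-1"]["location"])  # -> living-room
print(state["plant-1"]["owner"])     # -> denise
```

After the robot moves the plant, asking the house "where's that plant?" is just a read of the updated record, while the denied second call shows permissions scoped down to individual dimensions rather than whole objects.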
23:09
Speaker 1
It could be, though, right? Yeah, that's really interesting. So, knowing that trillions of sensors are coming online in the next ten years, this is pretty powerful. I've heard the Spatial Web referred to as the Internet of Everything, the network of everything. So really, when you're talking about spaces, and you're talking about trillions of sensors in everything, and everything having context baked into it, and the permissionable structure, then we're moving into this space that's going to be like the augmented reality space that we all envision. That's not really possible right now. Everybody wants it and imagines it, but this is really putting the infrastructure and the tools in place so that our lives will be an augmented reality.
24:08
Speaker 1
So in my understanding of it, then, it's not just that there will be metaverses that will be like gaming worlds and things like that, but our entire existence will become kind of this gamified world existence, then. Is that a correct assumption?
24:30
Speaker 2
I mean, you said a lot there.
24:34
Speaker 1
Sorry.
24:36
Speaker 2
No, it's brilliant. Our lives are already gamified. Are you a good citizen? Are you a good mother? Are you a good daughter? Are you a good friend? There's all kinds of carrots and sticks built into the world and society that gamify the whole thing. Your ability to personally gamify things is a bit more constrained. But if you want to reach a certain weight goal, or a certain fitness goal, or a certain intellectual goal, you certainly can read a book or train for a marathon or adjust your diet. And so, you know, we give ourselves a little reward. I'll eat a little bit of chocolate. I'll go on that vacation. I'll learn that new language and be able to finally date Portuguese people and feel cool when I'm in Brazil, or whatever your personal fantasy is. I'm not saying that's mine.
25:34
Speaker 2
I'm just saying maybe it's somebody's here that's listening, thinking, I would like to use AI to learn Portuguese. I'm just pointing that out. It's a very lovely language. So I think there's this idea that when you immerse yourself in a book or a Netflix show at night, you're going into that world. And so you'll go into virtual worlds that'll be architected by humans, where there'll be lots of human interactions and sort of gamified, character-driven situations as fantastic as the ones that we've been writing about since the dawn of our earliest stories and cave paintings, all the way up to whatever version Harry Potter's on or whatever the kids are into these days. Then there's things where you're interacting with the real world. Well, what you're looking for is an enhancement to that world. I need some information. I need some context.
26:25
Speaker 2
I need to understand what this thing is, and I need to understand where it goes. I need to understand what they're saying. I need to understand where it is. So augmented reality becomes a tool for extending your capabilities into the world by using digital technologies that are context-aware. And then virtual reality becomes a sort of environmental, immersive experience that is not that different from reading a book, in the sense that your mind is now experiencing as if you are there. And so now what you want to be able to do is go back and forth. Like, perhaps you want to water the plant or pull the leaves through the robot.
26:57
Speaker 2
When you're a thousand miles away, or you're in, I don't know, Brazil, you want to be able to do what's called telepresence, where you're going to go and use the robot to clean out the dead leaves in the plant. Now you're going into virtual reality, into an actual robot, and seeing through its eyes a virtual version of your plant, which you're pulling from thousands of miles away. So these sorts of hybrid scenarios, of which there are many, mean that these concepts of augmented or virtual reality are just sort of industry narratives that have different demands from a technological perspective. But ultimately there'll be these many different synthesized versions of that, which is why the Spatial Web tries to say, we don't care about industry-specific narratives, we don't care about technology-specific narratives.
27:47
Speaker 2
What we're interested in is: what are the ultimate set of uses and value that can come from this for humans and for humanity? So we take kind of a more first-principles-based approach to this, and it allows us to project into the future scenarios that aren't just about kind of a single, siloed approach or narrative. That is the Internet of virtual reality, aka the metaverse; or the Internet of intelligence, aka AI; or the Internet of Things, which is all the physical things; or the Internet of non-things, which is all the holographic and virtual stuff. Like, even in the book, we talk about this notion of the Internet of Everything.
28:28
Speaker 2
And that's a very different design principle to start from, but it does enable these very interesting kinds of use cases, and millions more that we could never imagine, just like Tim Berners-Lee couldn't imagine what's happening today with the World Wide Web.
28:45
Speaker 1
Yeah, okay. So we've touched on AI, and I'd like to kind of dive into that a little bit, because all of this is leading to, basically, the Spatial Web Protocol, and then VERSES' operating system, KOSM, and this methodology for AI called Active Inference AI. And all three of those together are literally laying the foundation for a type of AI that is very different from a lot of the celebrated AI tools that are being created and kind of implemented into the public right now, that are really powerful tools. But from my understanding, it's nothing compared to what is in store with these three things coming together within VERSES, within the Spatial Web. So why don't you tell us a little bit about that? We can lead into that. I know HSML is a huge factor informing this AI organism. So maybe tell our viewers.
30:00
Speaker 2
So one way to think about it is, sort of, what is the point of technology in general? And let's just say computing, a so-called class of computing, of which AI is a very good representation in the current hot topic. So if we go back to, say, the abacus: what you're trying to do is some math, and your brain is not as good as you would like it to be, and your memory is not as good as you'd like it to be, and you can't hold that many numbers in your head.
30:34
Speaker 2
And so you use this little tool, a machine, to basically project and hold your states, the memory, the numbers, and then be able to kind of compress that in ways that let you get to certain mathematical outcomes that were easier than just doing it in your head. So then you fast forward to the personal computer. Steve Jobs classically called it a bicycle for the mind, but most people don't know what that reference was. And it turns out that reference came from some research he'd read that measured the amount of energy used to travel the most distance for any animal, given their mobility, and it put humans pretty low on the list. I think the cheetah or the falcon, maybe, was the one who used the least amount of energy to get the greatest distance.
31:38
Speaker 2
But he said the researchers decided to do one additional test, and they tested how far a human could go on a bicycle relative to the amount of energy that was used. And it turned out that was more efficient than every other animal. So the human mind invented a tool that basically evolved it beyond the rest of the animal kingdom. And so he said the computer was the bicycle for the mind. So what he really meant was that the computational efficiencies of not having to do it in your head could be put into a computer. And that kind of tool meant that the kind of output you could get exceeded what the limitations were. You kind of fast forward from that. You connect all those computers into a network. Now we're all sharing information. So now you have collective benefits.
32:25
Speaker 2
And you've seen that, I think, at planetary scale, with the benefits we've seen from the World Wide Web, and a handful of deficits as well. Fast forward to this sort of AI breakthrough moment of the era. I suppose you can kind of go through the smartphone age there in the middle, around the mid-2000s, where suddenly you're taking that computing with you, right? Yeah. And that's extending your capabilities into the world around you in ways that it certainly couldn't before. And the things that you could do with that, and the type of companies and businesses that emerge, completely change. Now I can order a car. I can order lunch. They'll bring it to me. They'll pick me up. They'll take me wherever I want.
33:03
Speaker 2
There's capabilities that didn't exist when you were just on the World Wide Web, nor just with the computer, certainly not with the abacus. Fast forward to 2022, and you get this breakthrough with generative AI. What generative AI is more or less doing is creating this transition where the interfacing, the interactions with computers, that paradigm completely changes. Here's what I mean by that. From the abacus until about now, you have to be the main thing interacting with the computer. And so you start with sliding things around. We used to do punch cards. Then you get, like, the GUI. That was the big breakthrough with the PC. And then you got keyboard and mouse. You get to the smartphone, and now it's like touching and swiping. Now all of a sudden, we're using language, right? You can just talk to it.
33:58
Speaker 2
This is the most popular form of communication since we began, right? And so instead of human to machine interfacing, we’re actually going human to AI to machine. The AI, or the agent itself, is going to become the main interface to all of our computing. We just saw this yesterday with Microsoft launching their new Copilot. Copilot now will write the Word document for you. It will generate the photos and the whole outline for your PowerPoint. It will basically track all of the context in your team’s meeting. And so if you show up 20 minutes late, it’s like, hey, here’s what they’ve been talking about. Summarize that last point for me. You’re looking at somewhere between two and 100 x. So let’s just say relatively like five to ten x in the near term, less effort to get the benefits of computing.
34:57
Speaker 2
And now the AI is going to be the mediator. You’re not going to order the sandwich from DoorDash. You’re not going to go into the app. You’re just going to tell your agent, go get me a sandwich from Mendocino Farms. You’re not going to order the Uber. You’re going to be like, hey, get me an Uber. It’s going to know where you’re going, because you were just looking at it, and you booked the tickets and then put in your calendar that you’re going to the concert downtown. And so it knows, maybe it’s anticipated that for you. You’re not going to book your ticket to Brazil and live there for two years happily and quietly in a jungle and not deal with any technology. It’s going to book that for you. Right?
35:37
Speaker 1
Yeah.
35:37
Speaker 2
So AI is going to become the mediator, because what we've solved is the context problem of being able to communicate human intent in our most natural format to a computer that understands enough about that intent to be able to operate on our behalf and speaks enough computerese to be able to interact with the computers. And that's actually what's happening right now with AI: the interaction paradigm is completely shifting, which means you don't go to Google to search. You just ask your agent, and it basically goes to Google and searches. Yeah, right. The World Wide Web starts to fade into the background as this sort of data layer, right?
36:20
Speaker 2
And the ability to then have something that can interact with apps and websites and even devices. Turn off my thermostat while I'm gone from the house, and turn it back on when I get picked up at the airport, since, you know, I'm in the car. You sent the car for me. So this whole new paradigm is powerful. Now, the current state of the art of AI, which is classical machine learning using these neural nets, is basically like a giant factory in a computer where you want to get a particular output. Let's say, like in a factory, you want to get a car as the output.
36:56
Speaker 2
You put a bunch of raw materials into this, and then everyone in there kind of does their part, and they hand it to the next one, and then you get a car, right? So with this kind of input-output, just imagine there's billions and billions of little workers called neurons inside of that factory. Now, none of these neurons know what the other neurons are doing. They just kind of pass it to the other one. And if you give them a million years, they'll figure out how to make the car that you want, because every time it comes out the way you don't want, you go, no, I don't want it like that. And that's essentially how we train AI today. We give them millions of examples, and we kind of push it through this virtual factory of billions of little neurons that guess.
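The guess-and-correct training described above can be sketched in a few lines: a toy one-parameter "factory" is nudged toward the desired output every time it guesses wrong. This is only an illustration of learning from corrected examples, not any real neural network architecture.

```python
# Toy sketch of the "factory of guessing workers" analogy: the model is nudged
# toward the desired output by repeated correction, never "understanding" the task.
def train(examples, steps=1000, lr=0.1):
    w = 0.0  # the factory starts out guessing
    for _ in range(steps):
        for x, target in examples:
            guess = w * x            # the factory's current output
            error = guess - target   # "no, I don't want it like that"
            w -= lr * error * x      # nudge the worker a tiny bit
    return w

# Learn y = 2x purely from examples, with no explicit model of the rule.
w = train([(1, 2), (2, 4), (3, 6)])
print(round(w, 3))  # converges near 2.0
```

The point of the sketch is the contrast drawn later in the conversation: nothing here states the rule explicitly; it only emerges statistically from repeated correction.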
37:37
Speaker 2
And then when you're done, you get this sort of output. You asked me a question earlier about whether AI is going to be beneficial for individuals and not just for corporations, which is largely what it's been used for the last ten or fifteen years, and that's exactly the case. And the way to think of this, in my mind, is that the new interaction paradigm makes it so that there are no real apps in the future. There are no websites, there is no UI. None of these things are built by someone else who's trying to get you to interact with their system. Everything is personalized for you. It's a custom UI that your AI generates for you in real time based on the context you're operating under.
38:19
Speaker 2
You're interacting with it in the style and language and gestures and biometric data that you're providing, that it's learning about you and adapting to. Which is why active inference, the kind of AI that we're working on, an adaptive, self-learning AI, which is not what the current state of the art is able to do, is so important: you want something that can adjust and adapt to and learn about you. Something that's not just human centered, which is helpful, but user centered around you. So the personalization of computing, as opposed to an industrial approach to computing, is what's happening now. And it kind of started with the smartphone, because you get the smartphone, and I happen to have my own cover and my wallpaper.
39:01
Speaker 2
I've customized my apps, but that's a certain level. That's like back to the library and the Dewey Decimal system. I want to go all the way down to individual words. In fact, I don't even want that user interface. I don't want apps to look like that. In fact, I don't even want to go into an app. I don't want to go into a menu. I don't want to search. I don't want to type. I don't want to do any of that. I just want to be able to ask for what I want and have something else do that. So what you get is this capability set where you're going to be able to customize an agent that can know anything about the world and be capable of playing many different roles. Certainly there'll be services that are providing these agents as services.
39:38
Speaker 2
So whether you'd like some help with that fitness plan of yours or to plan that trip to Brazil (for whatever reason one might want to go to Brazil), it will basically be able to be that travel planner. It's going to be able to be that fitness coach. It's going to be able to be your self-help guru. It's going to be able to advise you on how to learn things and teach you that new language. Children are going to have custom tutors. You want to learn any new skill set, from how to tie ropes differently, to how to change a tire, to how to basically learn what the Platonic Forms are or the theories behind quantum mechanics. It's going to do that. Right?
40:25
Speaker 2
And so this whole ability to have this change in the interface, where it's not about you interacting with a device in order to interact with software in order to get information or be creative or communicate. That's going to happen through this agent, which is basically going to be kind of a cloud agent. It's not going to be tied to one device. It's going to be in your browser, it's going to be on your phone, it'll be in your car, it's going to be on your watch. It's going to be everywhere. It's going to be Jarvis for everybody, not just for the billionaire playboy philanthropist, but for eight-year-olds in Syria. Right? That's the goal. So how do you democratize the most powerful software tools in history?
41:03
Speaker 2
Essentially the power of tools in general. Tools are kind of a central organizing principle of humans, and language is one of those tools. Now you have this ability to have the ultimate sort of orchestrator that can take any form or shape that you want, essentially for the purpose of up-leveling and upgrading your life based on what your goals are. And I think this fundamentally transforms civilization. Beyond that, of course, you can put them into physical things like robots and cars and refrigerators and drones, and they can operate those things. You can stick them into NPCs and virtual characters, and they can completely embody and grow and become that character and live 100 years as an ever-growing character in a virtual world with its own sort of life and identity. So that's where this is going.
41:53
Speaker 2
And so you can see, just from the point of the metaverse, as you mentioned, that AI is the gateway to the metaverse. It's not the other way around. It's going to generate the environments, it's going to generate the world. It's going to basically help you generate the characters. What are people going to do? Creativity, inspiration, ideas, goal direction. These are still tools, and we're directing these tools. They'll increasingly become collaborators with us, but that's because we're training them to take over that labor in the same way the industrial age did for physical labor. Make a machine, like the abacus or the plow or a car. I don't have to walk from here to New York. I can get in this machine that will take me there. One that flies, right?
42:38
Speaker 2
And that reduces all that effort of me trying to walk to New York. That extension, that augmentation power, now becomes cognitive labor, mental labor, the amount of time it takes to think through and do something. And actually, to get that goal, you still have to have the goal. It's not about the agent having its own sort of independent goals. Its goals need to be aligned with yours, which is why alignment is so important here. But that's what's happened. I think the paradigm shift that is required there is that it's not someone else's data, it's your data. You own it, right? You're generating it, and you own it and it's yours. So we have these laws around HIPAA medical privacy and COPPA children's privacy and data; that needs to extend to all data relative to your life that you generate.
43:27
Speaker 2
And we need to draft laws around this. They need to be supported in every country in the world, which is partially why you develop standards that give them a place to aim in order to fulfill that. Everything sort of shifts, and suddenly these services orbit you instead of you orbiting them. Right now, you don't have a way to log into the web. You have to log into Google, you have to log into Apple, you have to log into Facebook. You don't even have an account on the web, do you? Right. So in the spatial web, this should be your account. It should be your data. And then you can decide who gets access to that data. You can revoke access whenever you want. And that AI is just operating on your data. And if you're like, I don't want that AI anymore.
44:11
Speaker 2
You should be able to say goodbye and get a new AI. Maybe you only want AIs that come from open source companies. Maybe you want AIs where you need to be able to see how the neural net architecture works. Maybe you only want ones that use active inference, or some combination. You should be able to make that decision, but the data should be yours. And this paradigm is something at VERSES that we're completely supporting. It's a big part of what we referenced in the book. A lot of the standards are built around this idea that the data model is going to shift. Now why? Because what we've been doing in Web 2.0 is essentially monetizing attention. And in order to monetize attention, I've got to point someone at you based on what you're interested in.
44:49
Speaker 2
So I'm just trying to get your attention and your data so I can sell your data to someone else, so I can aim your attention at their product, and I get a piece of that. That's Facebook's business model, that's Google's business model; that's an advertising-based model. Now, what we talk about in Web 3.0, in the spatial web, is a shift from an attention-based economy to an intention-based economy. What's important here is that the agent is able to understand what your intention is, not just what your attention is, and then help you work towards that outcome and goal. Now, as a business, if I've got products that will be helpful for you relative to your intention, I'm aligning with you, because I can sell you something that will help you do that. You want to drop 15 pounds? Gym membership.
45:33
Speaker 2
By the way, where are you located? I'm 5 miles away from you. I'll give you 15% off for the first week. Oh, you're a new mother who's just trying to get back in shape? We give 25% off to new moms. No one else is doing that within 20 miles of you. So you're like, cool, that's the one. You can start with services like LendingTree, where you put your information out there and say, I'm open to receiving offers. I'm looking for this kind of bank loan, I'm looking for this. Imagine that for everything in your life. This flips the entire paradigm. And so this becomes a more user-centered, more human-centered approach to computing, a more user-centered approach to the monetization of our relationship with these technologies.
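The intention-based matching described above can be sketched as a simple ranking over declared offers. Every name and field here is an illustrative assumption, not a real VERSES or spatial web API: offers are filtered by whether they serve the user's stated goal and are in range, then sorted by best fit.

```python
# Hypothetical sketch of intention-based matching: offers are ranked by how well
# they fit a user's declared intention, rather than by who bought the most attention.
from dataclasses import dataclass

@dataclass
class Offer:
    name: str
    goal: str            # the intention this offer serves, e.g. "lose weight"
    distance_miles: float
    discount_pct: float

def rank_offers(intention, max_miles, offers):
    # Keep only offers aligned with the intention and within range,
    # then sort by discount (best first) and proximity.
    fits = [o for o in offers if o.goal == intention and o.distance_miles <= max_miles]
    return sorted(fits, key=lambda o: (-o.discount_pct, o.distance_miles))

offers = [
    Offer("Gym A", "lose weight", 5.0, 15.0),
    Offer("Gym B (new-mom deal)", "lose weight", 12.0, 25.0),
    Offer("Bank loan", "refinance", 1.0, 0.0),
]
best = rank_offers("lose weight", 20.0, offers)[0]
print(best.name)  # Gym B (new-mom deal)
```

The design point is that the user's intention drives the query; businesses compete on fit for a declared goal instead of bidding for attention.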
46:16
Speaker 2
I believe it’s going to help reduce the sort of overwhelm and information overload that we have been experiencing in Web 2.0. The insidious sort of monetization of our attention that drives us for likes and followers and all these other things that we know are having some negative psychological effects, harmful effects, especially on young women. And so this gives us a different approach that is not about trying to go hunt and peck for information in this giant library in the sky where we got to go find something in Google while being shot at.
46:56
Speaker 2
With notifications and updates from a million different apps and emails and whatever, to something that suddenly takes that position on the field, acts as the main filter, acts as the gatekeeper of our data, which we own, and basically goes and finds the things that are most useful based on our intention and our goals. And I think that this is the kind of outcome that we really want when we start talking about the benefits of artificial intelligence. If it's not for us to aim these technologies, as opposed to just being aimed at by sort of the markets, what are we doing? And we've seen the significant negative effects of Web 2.0. We've seen some of the negative effects of Web 1.0. A lot of this was the result of a lack of planning.
47:37
Speaker 2
It's that these technologies were developed by academics and engineers who did not presume bad actors or opportunists, or even just well-intentioned businesses that found themselves making a lot of money, with shareholders who like that, and they just can't get out. Capitalism has its limitations. So this is a different paradigm that really shifts things into a human-centered and user-centered approach, where the benefits of technology become exponential and, hopefully, as an intention-based economy, help drive us all to be a little bit better in our lives. I think that when you're in a network that has network effects, you can actually not just get smarter by yourself, but you can get smarter together, which means that you can start to get closer to a smarter world.
48:24
Speaker 2
If it becomes about personalized outcomes and personalized content and personalized information, then it's not about winner-takes-all, and the long tail opportunities that a lot of people thought were going to emerge with the World Wide Web suddenly become quite valid. Today you end up picking the top four products just like everyone else, because they're the ones who have the most access to you and can push the most information to you. So you're going to default to that. But the best fit for you was down here. It was number 34 in that set of options, but you didn't know that. You don't have the time to sort through that, and they don't have enough money to get to you.
49:03
Speaker 2
But if now the goal is matchmaking for improvement, and you've got an AI that's doing that work, which it can do 24 hours a day with no sleep for pennies, then yes, you're exactly right. It enables long tail and mid-tier matchmaking opportunities in free markets that actually drive more personalized, more customized, better fits for everybody. And that is incredibly exciting. And I think that when we think about this third evolutionary age of the web, that's one of the benefits: personalization and better alignment with both technology and the markets.
49:39
Speaker 1
And then there's what Web 3.0 is going to bring to us, in the way that with Web 2.0, all of the power is in the hands of whoever owns those domains, right? Because all of the information exchange is happening on those endpoints, the web pages, the websites within those domains. So maybe talk a little bit about the technology behind the Spatial Web Protocol that is going to shift that, to where now all of these transactions, all of these changes and circumstance markers, all of these things that inform the context of the relationships between all of the people, places, and things in all of these spaces, that's what's going to ensure the security of you owning your own data, of you being able to be in charge of the permission of what happens to your data.
50:36
Speaker 1
And how much ability do we actually have in that type of a circumstance?
50:42
Speaker 2
Well, you said it so well, I'm not sure I could do a better job at it. Let me just maybe highlight this idea of domains. So today the World Wide Web is free. The protocols are free; the domains cost money. So if I want Gabrielrene.com, I've got to go buy it. And so the real estate for the web is actually a market, and it's relatively affordable depending on the popularity of the domain. Right? There's not a lot of demand for Gabrielrene.com. Amazon.com, you couldn't buy with all the money in the world. So the domains in the spatial web are not pages, right? And the word domain actually comes from the idea of a region. I mean like a physical property, like the domain of the king or the domain of the lion.
51:42
Speaker 2
This means the space, the environment, right? That got translated to the concept of domain relative to websites. There's also the sort of domain in the concept of a field: a physical field, but also the field of, say, science or the field of art or the field of engineering. So, conceptual domains, logical domains. A spatial web domain can be any one of those things. It can be a person, place, or thing. It can be a concept and all of the concepts related to it, or subconcepts. For example, Apple Computers is a domain. They own copyrights and they own trademarks, and then they own products. iPhone is a sub-product of Apple. This would be like a subdomain on a website.
52:34
Speaker 2
When you go to their website and you look up iPhone, it probably says Products, then iPhone. In the spatial web, these domains apply to every object, every concept, and they're all linked together in this sort of shared index, which we call the Universal Domain Graph. And this is kind of like the index. Today, there is no really open index for the World Wide Web. You go to Google, and now Microsoft would like you to go to Bing. They index a portion of the Web, and you go and search over their index. It's not the whole web. It's a portion of the Web. Right? They're also missing the Deep Web, which is everything online that is behind a password, right?
53:32
Speaker 1
Okay. Yes.
53:33
Speaker 2
So all of your personal information is behind a password. All of the information in your apps is behind a password.
53:40
Speaker 1
Isn’t that like 90% of the data that’s out there?
53:44
Speaker 2
That's correct. It is 90-plus percent. Now, this isn't counting information that is on IoT devices or being captured by other physical devices or stored in databases, or all of the businesses and all the enterprises with all their data. So when you really want to think about how to operate in the world and how to share information, embed information onto people, places, and things, define who can or cannot access that information, or be able to share that with anyone that you want in any sort of circumstance, it can't be limited to what's on the Web. The Web is like 1% of the data, less than that, right? And then you've got these domains that are tied to relationships between things, not just sort of arbitrary links between pages. There's no hierarchy on the Web; everything is one click away. Right?
54:33
Speaker 2
But in the real world, the distance and space between things actually matters. And whether something is in my house or in my neighbor's house actually determines maybe whether or not I can use it, whether I own it, whether I have a right to do something with it, or whether or not I have permission to do it, or vice versa. So the location of objects in space and time is actually a very big deal, and that is a hierarchical structure. It turns out that in the domain of my life, I own a property. That property is in Los Angeles. Los Angeles is in California, and California is in the United States.
55:07
Speaker 2
As much liberty as I think I have around what I can or cannot do in my house, there are things that, if they were done, would be illegal at the state level or the federal level. So you have to factor in the relationships between things at multiple scales. And so you need domains: the United States could be a domain, a subdomain of that would be California, a subdomain of that would be Los Angeles, and a subdomain of that would be my home. A subdomain of my home would be my office. And the actual position that I'm located in my office right now is this exact XYZ position in this room. And so if you wanted to send something to me and put a little holographic character in my hands right now.
55:45
Speaker 2
You would need to know that, and you would need to have access to all those subdomains in order to do it, just like if I wanted to send a file to some sub-subdomain on a website or link it to a sub-subdomain on another website. So the domain hierarchy is structured this way. Some things are public. Some areas and regions are public. Others are private. The laws of the land then dictate what can or cannot be done with those. I can say whatever I like about my levels of autonomy, but if I tried to start my own police force here in Los Angeles, it probably wouldn't go that well, because the authorities of the region actually have power to exert, and that is apparently the will of the people. And so in a democratic society, you have representational democracies that determine that.
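The nested-domain structure described above, where a rule set at any ancestor level (say, state law) constrains every subdomain beneath it, can be sketched as a tiny tree walk. This is an illustrative toy, not the actual HSTP/HSML data model; the class and field names are assumptions.

```python
# Sketch of nested spatial-web-style domains with cascading policy:
# a ban anywhere above a domain in the hierarchy applies to that domain too.
class Domain:
    def __init__(self, name, parent=None, banned=None):
        self.name = name
        self.parent = parent
        self.banned = set(banned or [])  # activities this domain forbids

    def is_allowed(self, activity):
        # Walk up the hierarchy; any ancestor's ban cascades down.
        node = self
        while node is not None:
            if activity in node.banned:
                return False
            node = node.parent
        return True

usa = Domain("United States", banned={"counterfeiting"})
california = Domain("California", usa, banned={"private police force"})
la = Domain("Los Angeles", california)
home = Domain("my home", la)

print(home.is_allowed("hosting a dinner party"))  # True
print(home.is_allowed("private police force"))    # False: the state-level ban cascades
```

A fully autocratic virtual world, like the Death Star example, would simply be a subtree whose root declares its own rules and no ancestor constraints.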
56:31
Speaker 2
In virtual worlds, you can have entire autocracies. Like, Vader can kill anyone he likes on the Death Star. Apparently, no punishment, not a problem. Completely legal. Right? And so you need to be able to accommodate structures that are inevitably infinitely nested, but where hierarchies and laws and policies and rules have cascading capabilities, and where permissioning becomes sort of a fundamental part of that. So that I can have a right to own my property and determine what happens on my property to a large degree, but Vader can still kill people on the Death Star. So that's kind of a broad outline of the domain space.
57:23
Speaker 1
Okay, so let's transition from that, then, a little bit into how AI is going to play into this spatial web structure. Maybe let's explain a little bit about what makes active inference AI so different from the generative models, the large language models: the whole idea that they're being trained on big data, while in the spatial web it's small amounts of data that become smart that actually power the AI. And how is that distinction just so important, especially given that the data that those big models are being trained on is only the public data, given that most of the data is not even exposed? Maybe talk a little bit about that.
58:22
Speaker 2
So if you go back to the age before personal computing, you have computers that are the size of an entire room, a very large room. You've got a team full of scientists that are using different parts of that computer. Essentially, it takes a team of PhDs to operate a single computer. The vision that Steve Jobs and others of his era had was that power shouldn't only be in the hands of large companies and technologies that were essentially renting that service, that everyone should be able to have their own computing. You've got to think how radical the idea of personal computing is in, like, 1975, right?
59:03
Speaker 1
Yeah.
59:04
Speaker 2
So you have all these hobbyists that are kind of messing around, and Xerox PARC kind of comes out with the GUI, and Steve and a bunch of others figured this out, and they launch Apple in 1977. The idea was, let's put the power of computing into the hands of individuals. Let's give them that bicycle for their mind, right? Then you kind of fast forward all the way through to the smartphone era, which I like to think of as just an extension of personal computing. You go from a room full of PhDs operating one computer to a computer that's, let's say, a billion times faster and cheaper, that a two-year-old can operate and, you know, accidentally buy something from Amazon you didn't want. So that arc of the power of computing in a single generation is unbelievable, actually.
59:56
Speaker 2
And yet that's exactly what just happened. Right now, we think through the lens of the current state of artificial intelligence. It's basically at that kind of IBM mainframe stage. In order to make one, you need a bunch of data scientists in a room for several years building an AI. That AI has to learn on millions and millions of examples of information and essentially is taking a statistical outcome, saying, okay, well, I've seen a million examples of this, I think this is the outcome. It's proving to be very useful. And in the case of language understanding, it's absolutely incredible what we're seeing with ChatGPT and the others in this sort of generative AI space. One of the limitations is that I ask it for an output by giving it an input.
01:00:45
Speaker 2
So I give it some text and say, hey, can you generate an essay for me that summarizes the Harry Potter movie? And it'll do that in a matter of seconds. So I give it an input, and it gives me an output. Now, what it can't do is adapt or update its model, right? For that, it has to go back to the room filled with data scientists, and they need to spend another two years retraining it, putting raw materials in one side, running it through the factory, saying, that's not the car I wanted. They literally just tell it what the answer is until it starts to figure it out, and then it basically kind of statistically solves it out. The reason Teslas routinely crash into things that no 17-year-old kid would is because they don't have a hierarchical or structural understanding.
01:01:37
Speaker 2
They've basically inferred the relationships between things. But that relationship is an implicit understanding, not an explicit one. If you use HSML, you can explicitly model things. It doesn't take any big data. It costs pennies, not $10 million. You just give it the information, and HSML will automatically structure it, however the data came in, unstructured or structured, in any number of dimensions: from temperature sensors, from databases, from all of your emails.
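The contrast drawn above, explicit modeling versus statistical inference, can be sketched with a handful of structured facts. The triple format here is an assumption for illustration only; it is not real HSML syntax. The point is that knowledge is stated directly and queried, so adding one fact is one line of data, with no retraining run.

```python
# Illustrative sketch of explicit knowledge modeling: facts and relationships
# are stated directly as subject-relation-object triples, not inferred from
# millions of examples.
facts = [
    ("thermostat-1", "located_in", "living room"),
    ("living room", "part_of", "my home"),
    ("thermostat-1", "reports", "temperature"),
]

def query(facts, subject, relation):
    """Return every object explicitly linked to `subject` by `relation`."""
    return [obj for s, r, obj in facts if s == subject and r == relation]

# No training needed: one new fact is one appended line.
facts.append(("sensor-2", "located_in", "garage"))
print(query(facts, "thermostat-1", "located_in"))  # ['living room']
```

Because every relationship is written out, the model is inspectable and editable by a human, which is exactly the transparency property discussed next.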
01:02:11
Speaker 1
So it really creates a mindfulness within the AI, right? Like it’s deliberate versus just more probabilistic.
01:02:21
Speaker 2
That's right. And so you now have this explicit model, which is also explainable and transparent, and a human can understand it, as opposed to what you get with, say, GPT, which is like having a 100-billion-cell Excel spreadsheet with a bunch of joins that the machine has made. And then you're supposed to figure out how to take the Nazi references out of this thing, which is why it routinely fails at that: because you can't find all the Nazi references in the model. Right? It's connected to too many parts, and no human brain can do it. It's an abacus the size of the moon. You're not going to be able to get in there and move all the parts around or be able to make sense of it anymore. So, with active inference.
01:02:59
Speaker 2
If you take the combination of explicit knowledge modeling, which is what HSML enables, essentially the ability to create, like, a knowledge base where you have human-level understanding built into the representations of the data and all the relationships, where their causal relationships are predefined, then you have something like an active inference agent, which is completely different from the current state of the art AI. What it likes to do is take some understanding of the world, have a belief about how that understanding will work in the world, and then go test it and see if it works. And then either, great, check the box, my model is correct, or my model is wrong, and I update it. So now, instead of taking a million examples, you can just go, like, here's this model of the world.
01:03:42
Speaker 2
The part of the world that you need to understand or know or operate on could be my personal data. It could be all the information in a warehouse, because you're a robot. It could be a medical robot or AI or whatever. And then it essentially has this model. You can see it and see whether or not you think it's right. Then it'll basically go try it out in the world, and then it will update its model. So it adapts and grows and learns in real time, as opposed to these current AIs, which don't learn in real time at all and are incapable of it. You cannot add one bit of information into that model. The day GPT-4 comes out, which was like yesterday, or the day before? Yeah, yesterday. It's baked, and it's not learning anything new.
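The believe-test-update loop described above can be sketched as a toy scalar example: the agent holds an explicit belief, predicts, observes the outcome, and corrects its model immediately. This is only an illustration of online belief updating; it is not the free energy formulation or any actual VERSES implementation.

```python
# Minimal sketch of the act-test-update loop: hold a belief, compare it to an
# observation, and revise the belief in real time, with no retraining phase.
def active_inference_step(belief, observation, learning_rate=0.5):
    prediction = belief                       # "here is how I think the world works"
    surprise = observation - prediction       # mismatch between belief and reality
    return belief + learning_rate * surprise  # update the model immediately

belief = 0.0  # e.g. the agent's believed room temperature, in arbitrary units
for observed in [10.0, 10.0, 10.0, 10.0]:
    belief = active_inference_step(belief, observed)

print(belief)  # the belief has converged toward the observed value
```

The contrast with the "baked" model is the whole point: each observation refines the belief on the spot, rather than waiting for a multi-year retraining cycle.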
01:04:32
Speaker 2
And in fact, the irony is, everyone knew that GPT-3 said it has limited knowledge of the world past September 2021, because they trained it on the past. Well, a lot has happened since September 2021, but it doesn't know that. And so everyone thought, well, when they go do GPT-4, they're going to include the last couple of years. And when it came out yesterday, they were like, no, we didn't do that at all. It's using the same data set, but we just got better at training it. So its understanding is better, and it is better. But is that good enough? Now, in many circumstances, it will be. But there are many circumstances, especially where AI is touching the world, where it's touching our data, where it's interacting with us and our children, in our homes and our streets and our schools, right?
01:05:14
Speaker 2
In law enforcement, that's not good enough. And having a big black box trained by a bunch of people, where we can't see how they did it or what those outputs are, is not going to be good enough. It's not going to pass the sniff test. It's not going to pass the regulator's test. It's not safe enough to put inside of a car, in my opinion. And so the approach that we're taking with this explicit knowledge modeling language and active inference is to give the power of that type of hyper-personalized intelligent computing to everybody and then let them connect, because there's this lingua franca. It's not like a black box model that only kind of understands its own internal language and can't work with other AIs.
01:05:54
Speaker 2
Ours has an interoperable, open, explainable, and even governable language that allows a network of many different intelligent agents to operate and work together on our behalf. So it’s the same thing we saw with the personal computing era. There was an industrial process that basically had industrial-scale computing. The PC comes out, and we all get the power of the computing. The World Wide Web comes out, and now we can connect the power of all those computers and collectively compute together. This is the same process all over again. We have this generative AI. We’re coming out with what we think is a different approach, what we call regenerative AIs, that can update their own model of the world. And then the spatial web and the protocols connect us and those agents and the power of that technology to basically transform ourselves and the world around us.
01:06:44
Speaker 2
Denise, thank you very much. This has been really wonderful. Let’s find out what that plant is and we can pick up from there.
01:06:50
Speaker 1
A special thank you to Gabriel Rene for being here with us today. It was a thrill to be able to sit down and have that discussion with you. And thank you to all of you for being here along with me on this ride. It’s going to be fun. We have a lot more coming, a lot in store. If you’d like to learn more about the spatial web and active inference AI, be sure to check out my blog, deniseholt.us. And there’s a playlist specifically for learning within our podcast here, and it’s called The Knowledge Bank Playlist. So all of the articles that I’ve written on my blog, they’re in audio form on that playlist. So if you don’t have time to read the article, you can multitask and listen and still get up to speed with the information. So I hope you find it helpful.
01:07:46
Speaker 1
And thanks again for joining, and I’ll look forward to seeing you next time. All right, take care.