Cosmic Intelligence

The Better AI Gets, the Further We Seem from AGI

Chad Woodford Season 4 Episode 4


Let's take a grounded look at the state of the AI industry in early 2026 and ask whether we’re actually any closer to artificial general intelligence (AGI) or superintelligence than we were a few years ago. Despite massive valuations for companies like OpenAI and bold promises from AI lab leaders, today’s systems still struggle with hallucinations, common sense, and a genuine understanding of the world.

So join me as I revisit core assumptions behind current AI approaches—especially the ideas that the mind is computable and that scaling up large language models is enough to “solve” intelligence, and why many researchers are now pivoting from the “age of scaling” to an “age of research” into the nature of intelligence itself.

What happens to AI company valuations if superintelligence remains out of reach for the foreseeable future?  

And how should we rethink intelligence beyond language, code, and computation?

BREAKING: Demis Hassabis of Google DeepMind now agrees that LLMs are a dead end on the road to AGI

Substack version of this episode

My 2024 deep dive into the impediments to AGI

What non-ordinary states of consciousness tell us about intelligence

Ilya Sutskever on Dwarkesh 

The LLM memorization crisis 

On the Tenuous Relationship between Language and Intelligence

Gary Marcus on The Real Eisman (of Big Short fame)

Fei-Fei Li’s World Labs: https://www.worldlabs.ai/blog

Support the show

Join Chad's newsletter to learn about all new offerings, courses, trainings, and retreats.

Finally, you can support the podcast here.

As we enter the new year, I thought it'd be helpful to take a survey of where the AI industry is in terms of its ambitious goal to reach artificial general intelligence or superintelligence. After all, Sam Altman especially has been promising us AGI or something like it for about two or three years now, and it's 2026 and we're not any closer, I think, to superintelligence. So I want to talk about why that is. A couple years ago, I actually put together a very comprehensive deep dive, an hour-long YouTube video, about all the different reasons we're not going to reach AGI anytime soon based on our current technology. I'll drop that in the show notes. But I just wanted to revisit some of the things I talked about then and see where we are today. Those things include hallucination problems, you know, making stuff up; a lack of common sense, and what common sense might actually be; and a failure to understand cause and effect and how the world works in general. These are all obstacles that I think are really important to talk about as we talk about potentially creating something like human intelligence in a machine.

Before we dive in, though, I just want to highlight a couple of really interesting tidbits. OpenAI seems to be valued at around $830 billion. Keep in mind that normal software companies typically trade at like eight or twelve times their revenue. So with about $20 billion in annual revenue, OpenAI is basically valued at around 41.5 times its revenue. That's a huge markup, and it's all tied to their promises of creating superintelligence or AGI, or disrupting the entire lifestyle of humanity, you know? So that's why I think this is so important, because these are big promises and they have a big impact on all of us. In other words, this AI bubble is all about AGI or superintelligence. And sure, these companies are starting to pivot towards video creation or creating digital employees, agentic AI and all that stuff. But the valuations, I think, are still founded upon these big promises around superintelligence. Honestly, if these AI labs achieve their goals, it's going to change the way we live entirely. And I think we're all being pretty chill about it, right? I mean, it's a crazy time to be facing the idea that, potentially, our way of living will be disrupted by AI, and that's on top of the rise of fascism and all the other stuff going on in the world. So, yeah, it's a lot. And I just want to try to ground us a little bit in the reality of our current technology, so at least maybe we can relax a little bit. I don't know.

And by the way, isn't it funny how we've all kind of accepted the idea that computer scientists are experts on the topic of human intelligence? Like, what qualifies a person trained in writing computer code to opine on the nature of human intelligence? Back in the 90s, when I was an AI researcher and a computer scientist, I thought this way too. I felt qualified to know what intelligence is, despite having no training in cognitive science or philosophy, although I do now have a master's in philosophy, cosmology and consciousness.
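For the curious, here's a quick back-of-the-envelope check of the valuation math mentioned a few paragraphs up. The inputs (roughly an $830 billion valuation against roughly $20 billion in annual revenue, versus a typical eight-to-twelve-times software multiple) are the approximate figures cited in the episode, not audited financials, and the snippet is just a minimal sketch of the arithmetic.

```python
# Back-of-the-envelope revenue multiple, using the rough figures cited in the episode.
# These are approximate numbers from the discussion, not audited financials.

valuation_usd = 830e9              # ~$830 billion reported valuation
annual_revenue_usd = 20e9          # ~$20 billion in annual revenue
typical_multiple = (8, 12)         # typical software-company revenue multiples

implied_multiple = valuation_usd / annual_revenue_usd
print(f"Implied revenue multiple: {implied_multiple:.1f}x")            # -> 41.5x
print(f"Typical software range: {typical_multiple[0]}x-{typical_multiple[1]}x")
```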
Anyways, I think the reason we've all accepted this idea of computer scientists as experts on intelligence has something to do with a long-standing assumption in our culture that the mind, and the rest of nature for that matter, is computable, meaning that it's basically just a machine. But there's no indication that it actually is. It's just an assumption. It's just a theory.

Okay, let's look at the state of the AI industry today. Last year, the industry moved from chatbot interfaces, like ChatGPT, to reasoning models, where the model is sort of talking to itself in order to provide more reliable but also more elaborate and coherent responses. The industry also moved into agentic AI, where there's this idea that these things will go out and perform actions on your behalf. That's the whole idea behind replacing employees and all that, but it hasn't really borne fruit yet, so we'll see how that goes this year. Another thing that happened last year that was really interesting is that the industry started to acknowledge the need for new approaches to AI altogether, including world models, neuro-symbolic hybridization, and even basic research into the nature of intelligence. We'll get into all that in a second.

Long-time AGI skeptics like Gary Marcus, and more recent converts like Yann LeCun, who until recently was the chief AI scientist at Meta, have argued that large language models are not sufficient to get us to AGI or superintelligence. As we've seen, the state of AI at the end of 2025 kind of bears this out. There's no indication that we're anywhere close with just these models. Keep in mind, the current approach to AI, which is called machine learning, assumes that the mind, and therefore intelligence, arises from the firing of neurons in the brain. So based on this foundation, models like ChatGPT and Claude and Gemini are these enormous software simulations of neural networks that attempt to recreate the way the brain works, but in silicon, of course. The thing is, though, nobody really knows for sure that the mind or consciousness arises from the brain. It's just a theory. So these models are founded upon an assumption, and I think for that reason alone, it's not clear to me that they'll get to any kind of intelligence anytime soon.

Nevertheless, AI developers are confident that they have basically solved intelligence, and they just need bigger, faster machines and more data and more money, of course. In other words, to these people it's just a question of scaling. They have this attitude that, again, the brain is computable and that intelligence is no big deal, right? We're going to crack it; it's just around the corner. But what's interesting, as I alluded to earlier, is that people are starting to question this. Ilya Sutskever recently told Dwarkesh Patel that the AI industry is moving from the age of scaling to the age of research, meaning research into what intelligence actually is. I find this hilarious, because if you're trying to create human intelligence, or something beyond human intelligence, shouldn't the starting point have been to understand what intelligence is? Anyways, this is still promising; at least there's some acknowledgement here. It's a good reminder, though, that one of the major themes of AI research, going back to its inception in the 1950s, is this unbridled hubris.
It's like, intelligence is not that hard. But again, intelligence is hard. You know, there's this thing called Moravec's paradox, right, where AI models excel at PhD-level intelligence tests but fail at the most basic tasks. They can't count the number of R's in the word strawberry. They can't fold a t-shirt or identify a wet spot in a photo or a video. They don't have object permanence, meaning they don't understand that things in the world persist. It's a good reminder that today's models are not actually reasoning. Instead, they are performing multiplication on very large sets of numbers to produce new numbers that are then translated back into text, images, sound or video. In contrast, when you or I solve a Rubik's Cube, for example, we're thinking spatially, in terms of colored squares on a piece of plastic in your hand. Or when you fold a t-shirt, you're not thinking about it at all. You're simply allowing your hands to perform a series of motions involving interactions with cloth that reflect a lifetime of folding and interacting with clothes. It's an embodied knowing that is a function of your having a body and a lived experience. So there's something there that I think is hard to capture in software on silicon, just running math.

Basically, these AI models exhibit a kind of thin and brittle intelligence, one that's founded upon human language, but not so much language as chunks of words reduced to numerical values. It's a number-crunching process in which these models, as researchers have recently shown, are simply memorizing text and then remixing it in novel ways. So it's like the simulation of intelligence; it's not really intelligence. In contrast, humans exude and are permeated by a rich spectrum of intelligences, most of which we don't even think about that much, or understand, for that matter. For example, there's emotional intelligence, embodied intelligence and wisdom, and, you know, all these others like imagination and intuition. I want to explore all these different intelligences in future episodes. But for now, let's just stay grounded in the state of the AI industry.

So the first challenge to intelligence, I think, is this persistent hallucination problem. Today's AI models notoriously hallucinate, or make shit up. Now, I'm not a fan of the word hallucination in this context. I think hallucination is what human minds do as a result of their mysterious phenomenological nature. So it's an ill-fitting metaphor, I think. But, you know, it's become standard nomenclature, so I'll use it anyways. As we've seen, AI models don't, like, know facts. They only recognize patterns in text, imagery or audio. ChatGPT doesn't know, for example, that Paris is the capital of France. When it responds to the question "What is the capital of France?" it says Paris because, most of the time when it has come across a sentence about France in its training data, Paris has been named as the capital. Another way to say this is that AI models don't store information in a file or a database. Instead, they store a numerical approximation of that information and then attempt to approximately reconstruct it when asked. AI systems are also designed to please the user, so they may agree with an incorrect assertion in order to satisfy the user. So, for example, if you ask when Al Gore invented the internet, ChatGPT may reply that Al Gore invented the internet in 1993. But he didn't.
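To make the "multiplying very large sets of numbers" point above a bit more concrete, here is a deliberately tiny, illustrative sketch of next-token prediction. It is not any lab's actual architecture: the vocabulary, weight matrices and dimensions are made up, and a single averaged embedding stands in for the many attention and feed-forward layers of a real model. The point is simply that text becomes token IDs, the IDs index into matrices of learned numbers, matrix multiplication produces scores, and the highest-scoring number is translated back into text.

```python
import numpy as np

# A deliberately tiny, made-up illustration of next-token prediction as number
# crunching. Real models have billions of parameters and many layers, but the
# basic operation is the same: text in as numbers, matrix multiplications,
# numbers out, translated back into text.

vocab = ["The", "capital", "of", "France", "is", "Paris", "Berlin"]
token_id = {word: i for i, word in enumerate(vocab)}

rng = np.random.default_rng(0)
d_model = 8
embeddings = rng.normal(size=(len(vocab), d_model))      # one vector per token
output_weights = rng.normal(size=(d_model, len(vocab)))  # maps hidden state back to vocab scores

def next_token(prompt_words):
    ids = [token_id[w] for w in prompt_words]      # text -> numbers
    hidden = embeddings[ids].mean(axis=0)          # crude stand-in for attention/feed-forward layers
    logits = hidden @ output_weights               # the "multiplication on large sets of numbers"
    probs = np.exp(logits) / np.exp(logits).sum()  # scores -> probabilities
    return vocab[int(np.argmax(probs))]            # numbers -> text

print(next_token(["The", "capital", "of", "France", "is"]))
# With random weights the answer is arbitrary; only statistics baked into
# trained weights would make "Paris" likely. Nothing here stores the fact
# "Paris is the capital of France" -- just a numerical approximation of it.
```

This also hints at why letter-counting questions trip these models up: the model never sees the individual letters in "strawberry," only the numeric IDs of whole chunks of text.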
The Al Gore line is a long-standing joke, and AI models don't always understand irony or humor. Fortunately, these models have improved slightly in terms of their factual reliability, not because they understand the world or have common sense, but because the AI labs have grounded them in reliable sources of information and hooked them up to specialized tools for performing specific tasks. And of course, intelligent humans also lie and get facts wrong, so maybe expecting more from AI systems is unfair. After all, the ability to discern truth from fiction is more a function of wisdom than intelligence. We'll come back to the question of wisdom in a future episode.

Okay, so that's hallucination. The second obstacle, I think, is this kind of mysterious thing called common sense. It's a big topic that leads into a number of other topics well beyond the scope of this episode. Common sense, though, has something to do with moving through the world in a body with a nervous system, and all the knowledge that arises from that, including an intuitive understanding of physics and quantities and object behavior, and theory of mind, you know, what people may be thinking. So the amorphous nature of common sense is a reminder that intelligence isn't a calculation; it's a dynamic relationship with the physical and social world. Also, the thing about common sense is that it's largely the knowledge humans have about the world that is so obvious we don't really write it down, so it can't be fed into these systems as training data.

Okay, so that brings us to the topic of language. When you stop and think about it, it seems odd that leading AI researchers and cognitive scientists see language as the basic building block of intelligence, despite the fact that humans developed language primarily to communicate, not to think, and that this happened relatively recently, I mean, like 50,000 years ago or so. Intelligence is much older and deeper than language, in the words of a neuroscientist, a linguist and a cognitive scientist. And I know that sounds like a setup for a joke, but it's not. A couple years ago, these scientists published a commentary in the journal Nature where they explored the relationship between language and intelligence, and they found that although language facilitates the transmission of cultural knowledge, such linguistic capacity merely reflects the pre-existing sophistication of the human mind. Language does not constitute or undergird intelligence, although we often do think using language. Language appears to have arisen and then evolved for the purpose of communication, not thought. In that sense, language reflects the sophistication of human thought rather than being a fundamental building block of it.

Neuroscience has shown that reason is largely independent of language. We humans use language to express our internal thought processes and mental states. We often think in terms of language, but language and thought are not synonymous. We just live in a word-obsessed culture. We can verify this scientifically by looking at fMRIs of people solving math problems or engaging in theory of mind: the non-linguistic areas of the brain light up in those cases. In other words, the brain's language network is distinct from the networks that support thinking and reasoning.
In fact, as the authors of that Nature commentary point out, people who have lost their linguistic capacities as a result of brain damage, a condition known as aphasia, are still able to reason, solve math problems, understand the motivations of others and make complex plans. You can actually verify this yourself by observing a toddler or an animal. They can obviously reason about the world and make plans without using language. So according to that Nature article, language is not only unnecessary for thought, it's also insufficient for thought. There are brain disorders, for example, where the person can sound quite articulate while the ability to reason is impaired. As we see with these large language models, or these large multimodal models, a facility with language does not necessarily reflect actual comprehension, but it does create a believable illusion, because in our society being articulate has been a proxy for intelligence. Of course, we can point to many public intellectuals who are widely considered highly articulate yet do not appear to evidence much intelligence. In short, being articulate is an unreliable signal for intelligence. Language is not thinking. So it seems like there's a tenuous relationship between language and intelligence, which kind of undercuts the entire endeavor these folks are engaged in, really.

Okay, so one attempt to address the pure-language critique and take a small step towards developing common sense is something called world models, or spatial intelligence. Many of the AI training challenges seem to arise from a general lack of embodiment, especially in the training data. Obviously, disembodied algorithms trained on language and media alone will never learn to navigate or understand the world. At the very least, they will need to interact with and experience the world. But training robots in the real world is expensive, slow and dangerous, so the current approaches to world models and spatial intelligence train simulated robots in a simulated environment. One example of this is a project called JEPA from Yann LeCun, the former chief AI scientist at Meta. This project is training on millions of hours of YouTube videos combined with a small amount of robot interaction data, and the idea is that these models will be able to develop an understanding of physics and cause and effect and that sort of thing just from watching YouTube, basically. I'm not so sure this is going to work, but we'll see this year, hopefully. Another relatively new attempt at world models and spatial intelligence is the startup World Labs, from AI scientist Fei-Fei Li. They launched a multimodal world model late last year called Marble that creates a consistent 3D simulation with its own internal physics. But again, it's a simulation, so maybe it's a step in the right direction, but I'm still not convinced. Maybe the most promising area of research here is Google's work on using robotics to train world models, although so far they're just tabletop robot arms moving objects around, so I'm not sure how much use that would be more broadly. As I argue in my forthcoming book, and as we'll discuss in future episodes, intelligence is embodied to a greater extent than we think. Having five senses connected to a biological central nervous system may be essential for human intelligence.
I think it's possible that, from an evolutionary and even metaphysical standpoint, smell, taste and touch are not superfluous niceties but an integral part of the package. So although these world models might go part of the way towards helping AI models understand cause and effect and physics and all that, I don't think they're going to get us anywhere near AGI or superintelligence.

Okay, so maybe world models are a small step in the right direction, but there's another piece of intelligence I want to talk about that isn't that widely discussed, and that is abductive reasoning, or abduction. Current AI systems like ChatGPT or Gemini are modeled primarily on one kind of human intelligence: inductive reasoning. This is a form of bottom-up reasoning that draws conclusions from specific observations. For example, if you've only ever seen white swans, you would conclude that all swans are white. That's a form of inductive reasoning; you're drawing conclusions from patterns, basically. In the olden days of AI, in the last century, AI systems were modeled on deductive reasoning, which is more of a top-down approach that starts with general statements about the world and applies them to specific instances. To use a famous example, you might say that all men are mortal, and then, if Socrates is a man, we know he is mortal. Gary Marcus has long argued that AGI will require a hybrid approach that combines the neural-network machine learning we're relying on today, which is induction, with the more top-down symbolic reasoning of older AI systems, deduction, to create what's known as neuro-symbolic AI. Google DeepMind has a long history of using these hybrid approaches to machine learning in specific applications like AlphaGo, and they have been applying a more explicitly neuro-symbolic approach to domains like mathematics with AlphaGeometry and AlphaProof. So this neuro-symbolic hybrid approach is an attempt to model the two modes of thinking that humans engage in: deductive reasoning and inductive reasoning.

But there is a third, more mysterious kind of reasoning, called abductive reasoning, that's much harder to model, and it may be crucial to achieving true superintelligence. Abductive reasoning is defined as an inference to the best possible explanation. It's the intellectual leap that detectives and doctors make from a set of facts to a plausible explanation. A detective is not thinking logically, step by step through the problem, examining every possibility; the explanation simply emerges wholesale in his mind. The quintessential example is a Sherlock Holmes story, right? For example, in "The Adventure of the Noble Bachelor," Sherlock Holmes is able to locate a missing woman who disappeared at her wedding from just a few conversations and a hotel receipt, based solely on his understanding of human behavior and his knowledge of what people generally do. This abductive reasoning seems to combine common-sense knowledge of the world with some amount of creativity and intuition. It also requires both induction and deduction: induction to identify patterns, and deduction to test the hypothesis. So although nobody knows how abductive reasoning works, some AI researchers think that having a reliable world model is one step toward being able to reason to the best possible explanation. For example, an AI system would more reliably be able to reason that because a glass is broken, it probably fell off the table.
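To make the contrast between these three modes of reasoning a little more concrete, here is a toy sketch using the examples from the episode (the white swans, Socrates, and the broken glass). It's purely illustrative and hypothetical: the deduction and induction functions are trivial, and since nobody knows how to implement genuine abduction, the "abduce" function just picks from a hand-written list of candidate explanations with made-up plausibility scores.

```python
# Toy contrast between deduction, induction, and abduction, using the examples
# from the episode. Purely illustrative; genuine abduction is an open problem.

def deduce(rule, case):
    """Deduction: apply a general rule to a specific case."""
    # All men are mortal; Socrates is a man; therefore Socrates is mortal.
    if case in rule["members"]:
        return f"{case} is {rule['property']}"
    return "no conclusion"

def induce(observations):
    """Induction: generalize from specific observations to a rule (fallibly)."""
    # Every swan observed so far is white -> "all swans are white".
    colors = {color for _, color in observations}
    if len(colors) == 1:
        return f"all swans are {colors.pop()}"
    return "no single generalization"

def abduce(observation, candidates):
    """Abduction, crudely faked: rank hand-written candidate explanations by a
    made-up plausibility score. Real abduction seems to draw on everything a
    person knows about the world, and nobody knows how to model that."""
    best = max(candidates, key=lambda c: c["plausibility"])
    return f"best explanation for '{observation}': {best['hypothesis']}"

print(deduce({"members": {"Socrates", "Plato"}, "property": "mortal"}, "Socrates"))
print(induce([("swan_1", "white"), ("swan_2", "white"), ("swan_3", "white")]))
print(abduce("the glass is broken", [
    {"hypothesis": "it fell off the table", "plausibility": 0.8},
    {"hypothesis": "it spontaneously shattered", "plausibility": 0.01},
]))
```

The interesting part is what the sketch leaves out: the hand-written candidate list and scores are exactly the piece that, in a human detective, seems to come from common sense, creativity and intuition.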
Cognitive scientist and philosopher Jerry Fodor thought that in order to perform abductive reasoning, a system would have to draw on everything it knows about the world. Because this seems to happen in an instant for humans, it's not clear how to model it in a machine. The philosopher Charles Sanders Peirce, who first studied, formalized and named the concept of abduction, ultimately concluded that abduction is some sort of instinct, calling it the light of nature. But I'm not sure that's exactly right. I think it may arise from the same mysterious imaginal realm from which creativity arises. We'll come back to that in my episode about imagination. The mysterious nature of abductive reasoning in particular is why cognitive scientists and philosophers refer to it as the dark matter of intelligence. Nobody really understands it, basically. And yet the major AI labs pretend as if it doesn't exist, as if induction is all there is to intelligence. If world models are any kind of solution here, I will be watching the work of Fei-Fei Li and Yann LeCun very closely this year. If you want to learn more about abductive reasoning, I actually produced a whole video a couple years ago about how non-ordinary states of consciousness can give us a clue to abduction's origin in the mind. I'll drop a link to that in the show notes as well.

In conclusion, the more time that passes since ChatGPT exploded onto the scene and AI CEOs started promising the imminent arrival of AGI or superintelligence, the more apparent it becomes that intelligence is deep and mysterious and that we are nowhere close to any kind of superintelligence. So given all these impediments and obstacles to achieving AGI or superintelligence, it's strange to me that the AI companies are still so confident they can do it. And it concerns me, because this disconnect, I think, is why the AI bubble is going to burst sometime soon. We'll see what happens, but that's my prediction. If we're still a long way off from AGI or superintelligence, what does that mean for the valuations of AI companies like OpenAI or Nvidia? What does it mean for the economy and for humanity? I'll leave you with those questions to ponder until next time. Next time I would like to talk about imagination, but we'll see what happens. All right, thanks for listening. I'll see you next time.
