Cosmic Intelligence

Impediments to Creating Artificial General Intelligence (AGI)

Chad Jayadev Woodford Season 3 Episode 5

Artificial general intelligence, or superintelligence, is not right around the corner like AI companies want you to believe, and that's because intelligence is really hard.

Major AI companies like OpenAI and Anthropic (as well as Ilya Sutskever’s new company) have the explicit goal of creating artificial general intelligence (AGI), and claim to be very close to doing so using technology that doesn’t seem capable of getting us there.

So let's talk about intelligence, both human and artificial.

What is artificial intelligence? What is intelligence? Are we going to be replaced or killed by superintelligent robots? Are we on the precipice of a techno-utopia, or some kind of singularity?

These are the questions I explore as I try to offer a layman's overview of why we're far away from AGI and superintelligence. Among other things, I highlight the limitations of current AI systems, including their lack of trustworthiness, their reliance on bottom-up machine learning, and their inability to provide true reasoning and common sense. I also introduce abductive inference, a rarely discussed type of reasoning.

Why do smart people want us to think that they’ve solved intelligence when they are smart enough to know they haven’t? Keep that question in mind as we go.

YouTube version originally recorded July 1, 2024....

Support Chad

James Bridle’s Ways of Being (book)
Ezra Klein’s comments on AI & capitalism
How LLMs work

Gary Marcus on the limits of AGI
More on induction and abduction
NYTimes deep dive into AI data harvesting
Sam Altman acknowledging that they’ve reached the limits of LLMs
Mira Murati saying the same thing last month
Google’s embarrassing AI search experience

AI Explained’s perspective on AGI
LLMs Can’t Plan paper
Paper on using LLMs to tackle abduction
ChatGPT is Bullshit paper
Philosophize This on nostalgia and pastiche

Please leave a comment with your thoughts, and anything I might have missed or gotten wrong. More about me over here

Support the show

Want to talk about how Chad can assist you with your own transformation?
Book a free consultation!

Join Chad's newsletter to learn about all new offerings, courses, trainings, and retreats.

Finally, you can support the podcast here.

What is artificial intelligence? I mean, what is intelligence? And are we going to be replaced or killed by superintelligent robots in four years? In this episode, I want to answer some of these questions and explain why I think we are quite far from achieving artificial general intelligence (AGI), or superintelligence. Although I touch on philosophy of mind, cognitive science, and computer science a little bit, I kept it fairly high level so that a general audience can follow it. As you'll hear, some of the missing pieces in current systems include a lack of factual grounding, a lack of common sense, and a lack of understanding of cause and effect. Without the ability to reason using abstract concepts, today's AI systems cannot achieve human-level intelligence. I'll get into all of this and more in the episode. I hope you find it educational and interesting. And by the way, I recorded this as a YouTube video originally, but because it got so much engagement, I thought I would share it on my podcast for you too. So here's the episode.

Welcome to Cosmic Intelligence, a podcast where we explore the intersection of philosophy, cosmology, consciousness, and emerging technologies like artificial intelligence. If you're new here, I'm Chad, a philosopher, technologist, product manager, yoga teacher, and attorney based in Los Angeles.

Artificial general intelligence, or superintelligence, is not right around the corner like AI companies want you to believe, and that's because intelligence is really hard. Major AI companies like OpenAI and Anthropic, as well as Ilya Sutskever's new company, have the explicit goal of creating artificial general intelligence, and they claim to be very close to doing this using technology that doesn't seem capable of getting us there. So today I want to talk about intelligence, both human and artificial. Before I jump in, let me just say that I am optimistic about the promise of technology, and I'm also skeptical about the outlandish claims being made right now about AI and general intelligence. Now, it's been a while since I've been an AI researcher, so I may get some of the details wrong in my explanation of the current state of AI. If I do, please feel free to comment on that, give me feedback, or send me an email.

Do you ever think about intelligence? Because I sure do. What is artificial intelligence? I mean, what is intelligence? Am I intelligent? Is AGI or superintelligence right around the corner? Are we on the precipice of a techno-utopia or some kind of singularity? These are the questions I want to answer today in a hopefully very non-technical but technically accurate video. I want to try to offer a layman's overview of why I think we're so far away from AGI and superintelligence, but I also want to speak accurately, so it's a delicate balance. I want to keep this interesting, and I want it to be useful to the broadest audience, so we'll see how it goes. And just to be clear, I'm trying to avoid talking about machine sentience or consciousness in this video; I'm focusing on cognitive intelligence, or at least I'm going to try to.

So I want you to keep this question in mind as we go: why do smart people want us to think that they've solved intelligence when they're smart enough to know that they haven't? Overpromising and underdelivering is a sort of artificial intelligence tradition.
For example, in 1965, AI pioneer Herbert Simon said that machines will be capable, within twenty years, of doing any work that a man can do. That would have been 1985, which clearly didn't happen. Five years later, in 1970, we see the computer scientist Marvin Minsky promising that "in from three to eight years, we will have a machine with the general intelligence of an average human being." Didn't happen. Then, in 2005, Ray Kurzweil predicted that we would have the requisite hardware to emulate human intelligence by the end of that decade, and effective software models of human intelligence by the mid-2020s. Here we are, and I think it's pretty clear that we don't have that yet. Actually, here's Ray last week, standing by his prediction that we will have AGI within the next five years: "In 1999, I made a prediction that we'd reach AGI, artificial general intelligence, by 2029, within 30 years." "You think we're still on track for 2029?" "I think that's probably pessimistic." "You keep pushing this up?" "Well, Elon Musk says it's going to be two years. Other people are saying three or four years. Five years is probably conservative."

Sundar Pichai, the CEO of Google and Alphabet, said way back in 2018: "AI is one of the most important things humanity is working on. It's more profound than, I don't know, electricity or fire." This kind of bold statement might be warranted if we were close to any kind of truly intelligent machine. And Elon Musk, of course, has been claiming every year for the past ten years that fully autonomous Teslas were coming within a year, and they're still not here. Finally, earlier this month, Ilya Sutskever, one of the co-founders of OpenAI, revealed his new startup, Safe Superintelligence Inc., a startup with one goal and one product: creating a safe and powerful AGI system. He says they won't release a product until they achieve superintelligence.

Although we may be inching closer to solving intelligence, it's still a ways off, and I think we don't fully understand intelligence. That's part of what I want to talk about today. You can see why I'm skeptical. There's reason to be skeptical after so many decades of empty promises, and that's without even getting into the details I want to cover today. For those of you who are new to my channel, in addition to being a philosopher and attorney, I'm also a computer engineer and a former AI researcher. So I do understand this stuff at a pretty deep level, but it's been a while since I've actively worked in the field. Again, if I get some details wrong, please let me know. I actually left the field of AI back in the 90s because I felt it didn't live up to the promises and the hype back then; it was the AI winter, as they called it. Now it seems like something new and different is happening, or at least it feels that way, and that's why I'm excited about what's happening but also want to talk about it more. And to be clear, I think we will eventually get to some kind of machine intelligence that is impressive, and reliable, and transparent, and unbiased, and all that. I just don't think the current approaches are what we need to get us there. Whatever the potential of AI is, I want to offer an accurate, realistic snapshot of the present moment.
So I want to use this video, and this entire video series I'm working on, primarily to offer a more skeptical perspective on all the grandiose claims being made right now, and to counteract the hype, or at least offer some clarity about it. Apple, as I'm sure you saw a few weeks ago, came out with Apple Intelligence. What's great about this, I think, is that it's a more modest use of AI for aesthetic purposes and practical personal uses. And I think it's the first real attempt, outside of maybe Perplexity, at grounding answers and responses in some kind of actual reality. What's nice about their approach is that they have no aspirations to create some kind of superintelligence.

Okay, so what is artificial general intelligence? Let's start with that. One challenge here is that there's no one agreed-upon definition. Maybe you can say that it's performing any task that a human can, including tasks not encountered before. That's pretty good. It's the ability to think and act intelligently in any domain, like we humans can. But a crucial part of it is the ability to adapt and learn, which is something I want to come back to later. So that's a rough approximation of AGI.

Actually, the artist James Bridle argues that AGI is already here in the form of corporations, what he calls corporate AI or corporate intelligence, and I think that's a pretty interesting perspective. Here's James: "The answer I quite often give when people ask me when real AI is going to turn up is that real AI is already here. It's corporations. Corporations are large assemblages of people and machinery and infrastructure, connected to networks. They respond to pleasure and pain: pleasure is profit, and loss is pain. They operate according to very narrow views about what it is that they should be doing. They stick to these very narrow goals almost regardless of anything else that happens. They have free speech. They have legal standing; they have legal personhood. They have power in courtrooms. They're very hard to kill, although not completely impossible. They are essentially life forms that we've created, and now that we have these kinds of organisms living amongst us, it's almost like living in the aftermath of a kind of alien invasion." Bridle elaborates in his book Ways of Being: "That's what happens, it would seem, when the development of AI is led primarily by venture-funded technology companies: the definition of intelligence which is framed, endorsed, and ultimately constructed in machines is a profit-seeking, extractive one." He makes an important point, similar to the one Ezra Klein made on his podcast a few months ago, that the way we think about AI and intelligence has largely been defined by its emergence inside of capitalism. And I would add that its emergence out of government and military funding has also heavily influenced the way it's thought about and talked about.

I want to offer one interesting little side note, too. I was reflecting on how today's AI systems reflect our culture and our society at the moment. I mean, isn't it interesting that if we look around our society, there's a general lack of grounding in factual reality or independent verification? Nobody's working with the same set of facts.
So it's fascinating that our AI systems reflect that: they hallucinate, and they don't really have factual reliability. That's interesting, I think. It also reminds me that we live in this cultural age of nostalgia and pastiche, and in a way, our AI systems are essentially enormous mashup machines. I guess what I'm trying to say, to paraphrase Marshall McLuhan, is that we shape our technology, and it in turn shapes us. There's a feedback loop going on, and I'm just fascinated by the way AI seems to reflect the state of our world right now.

Okay, so let's try to define intelligence, which is even more challenging, as we'll find out. If we're attempting to recreate human intelligence, and then surpass it, what does that look like? What does intelligence mean, exactly? It seems obvious at first blush, but when you really dig into it, it gets more slippery. If you look at technology leaders, and especially transhumanists, they seem to be obsessed with intelligence at the expense of all the other attributes and faculties that life, and especially humans, exhibit. You could even say that our entire modern society is obsessed with intelligence, as little intelligence as we seem to be exhibiting collectively at the moment. If you think back on society and government structures, you could argue that our governments today, in the West at least, are largely a meritocracy, or at least aspirationally so, and IQ tests and standardized testing determine your fate in life, at least if you ignore inherited wealth and privilege. So in our society, I think intelligence is overemphasized and overvalued, and that is feeding the AI hype and influencing our thinking about intelligence.

Intelligence is hard. Defining it is hard. There's a lot more going on with intelligence than AI researchers and developers acknowledge; intelligence, I think, is actually quite mysterious. And not only that, but very few AI developers are also trained in cognitive science or philosophy of mind. This persistent yet ill-fitting metaphor of the mind as a machine is a very misleading metaphor, I think. I mean, is the mind algorithmic? I don't know that it is. Early AI researchers certainly thought it was, and that has really influenced the development of AI ever since. I actually touched on this last year in my video about quantum consciousness, so if you're curious about that, go check it out. Defining intelligence as mere computation means that we reject a much richer understanding of mind and consciousness, and I want to explore that particular topic in more depth in a future video.

But let's actually try to define intelligence. It's obviously reasoning, analysis, logic; it's pretty clear that all of that is intelligence. Learning, the ability to learn, the ability to plan: that's intelligence, obviously. Nothing controversial so far. In fact, one dictionary definition I like is that intelligence is the ability to acquire and apply knowledge and skills. Very straightforward. But then, does intelligence include intuition and discernment? Probably. But then wisdom seems separate, though perhaps it's on a continuum. And what about creativity? Actually, Ray Kurzweil has a pretty broad definition of intelligence in his books about AI. He says that human intelligence is diverse and subtle, which is a nice acknowledgement.
And for him, intelligence includes artistic aptitude and creativity. He even includes emotional intelligence as part of his definition of intelligence, which is interesting. We'll come back to that. But does intelligence require self-awareness, or metacognition, as they call it? Does it require sentience and consciousness? I don't know. As anthropocentric as it is, one definition of intelligence that I really like is: it's whatever humans can do. But then that includes all the faculties I just mentioned, including creativity and imagination and intuition. So it's a little hard to define. Another answer to the question of what intelligence is: nobody really knows, right? Or we think we know, but it's slippery.

I think it's possible that this machine intelligence we're creating is a wholly new thing, a categorically new thing. It's not ever going to be quite human intelligence; it might be its own intelligence, in the same way that an octopus, with its multiple brains in different parts of its body, has its own kind of intelligence. That's not human intelligence, but it's certainly some kind of intelligence. So I think machine intelligence is possibly going to be some kind of sui generis, wholly new thing that we're creating. Actually, Alan Turing, the famous computer scientist who was in some ways the godfather of AI, recognized some 70 years ago that machine intelligence would probably be different in kind from human intelligence. He thought that artificial intelligence might very well be a wholly new thing that is not analogous to any form of intelligence we know today. And for that reason, I think AI may never supplant humans; it might forever just augment us and be a collaborator, which is more exciting, I think.

The point I'm making is that understanding intelligence is important. If we're going to create an AI system, we need to understand exactly what we're trying to create. That seems pretty obvious, but I don't know that our understanding of intelligence and how it works is reflected in our current systems. And if we don't understand what we're trying to create, I think what we create is going to end up being mysterious, unpredictable, biased, misaligned, and kind of opaque, even alien and maybe a little terrifying. I mean, we see this already, right? Maybe you feel this way towards ChatGPT or Sora. Having said all that, is there even such a thing as general intelligence? It's unclear. In any case, however we define intelligence, there is still some use, I think, in creating systems that attempt to replicate even a narrow kind of intelligence, and there's definitely a big opportunity here. Again, I'm not opposed to making these things; I just think we should be more thoughtful and deliberate about what it is we're making and what's driving all of it.

Okay, so let's talk about the brain and the mind. In modern times, the words brain and mind are used interchangeably. The brain is obviously that chunk of stuff in your skull, the gray matter. And the predominant view of mind is that it arises from the firing of neurons in the brain; the mind, as well as our consciousness, essentially emerges as an epiphenomenon of the brain. This hasn't been scientifically proven, actually, but it is a long-standing assumption, and it is the prevailing view.
I'll explore that assumption in a future video, but this is the view that is driving, and has been driving, AI development. For this reason, computer scientists in the 1940s and 50s suggested that we might model the neural network of the human brain in order to achieve human-level intelligence, and maybe even go beyond it. The neuroscientist Henry Markram actually announced at a tech conference back in 2009 that he was going to simulate the entirety of the human brain in a computer within the decade: "There are literally only a handful of equations that you need to simulate the activity of the neocortex. But what you do need is a very big computer. In fact, you need one laptop to do all the calculations just for one neuron, so you need 10,000 laptops. So where do you go? You go to IBM and you get a supercomputer, because they know how to take 10,000 laptops and put them into the size of a refrigerator. So now we have this Blue Gene supercomputer. We can load up all the neurons, each one onto its processor, fire it up, and see what happens. Take the magic carpet for a ride." Sadly, he never finished his project.

If we gaze back over the history of philosophy of mind and cognitive science, there are basically two types of reasoning acknowledged and studied: deductive and inductive inference. Deductive inference is where you apply general principles to particular instances. For example: all men are mortal; Socrates is a man; therefore, Socrates is mortal. With deductive reasoning, we're going from the general to the specific. If it's raining, then the ground is wet; you see that it's raining; therefore, the ground is wet. Although it may not be, because there's a tree or a gazebo or a car covering the ground in question. Anyway, this was the first approach to AI, known as expert systems or classical AI. The idea was that if we could feed millions upon millions of these kinds of principles, or true statements, into a machine, boom, we'd get intelligence. But as you can see with the rain example, it's brittle and can't really learn or adapt. If all you have is deduction, you're not going to be able to handle a very fuzzy and messy world. It's also not well suited to learning and adapting: deductive conclusions have to assume that the premises are true and valid for all time and all situations. It also suffers from relevance challenges: when do you apply a particular truism to a given situation? So we can already see that intelligence has layers, that there are steps that happen before deduction, for example. We'll come back to this in a moment. Among other things, the general principles used in these deductive statements require testing in the laboratory of life. The principles rattling around in your head are constantly subject to revision, or at least I hope they are. For example, if you grew up in the Northern Hemisphere, you may have been told, or learned, that all swans are white. But then you travel to the Southern Hemisphere and you see a swan that is black. These things are always subject to revision.
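To make the brittleness of that rule-based, deductive style of classical AI a little more concrete, here is a minimal sketch in Python. The rules and fact names are invented for illustration only; real expert systems were far larger, but the failure mode described above is the same: the system can only conclude what its hand-coded rules anticipate.

```python
# Minimal sketch of classical, rule-based (deductive) AI.
# The rules and facts are invented for illustration only.

RULES = [
    # (premises, conclusion): if all premises hold, the conclusion follows.
    ({"is_raining"}, "ground_is_wet"),
    ({"is_man"}, "is_mortal"),
]

def deduce(facts: set) -> set:
    """Repeatedly apply rules until no new conclusions can be drawn."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in RULES:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(deduce({"is_raining"}))
# Prints 'is_raining' and 'ground_is_wet' -- but note the brittleness: the rule
# has no way to represent a gazebo or a parked car covering the ground in question.
```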
And this brings us to the other kind of inference that we're pretty well aware of: inductive inference. This is where you arrive at general conclusions from particular observations. You identify patterns by examining data, observing the world, and performing experiments. To go back to swans: if you see white swan after white swan, you would probably conclude that all swans are white. And this is how machine learning works. The systems study millions of photos labeled "cat," and they start to have a general sense of what constitutes cat-ness, or at least mathematically they do. The basis for machine learning systems today is this inductive reasoning: they're trained on millions of samples to learn what, for example, a cat or a dog or a human is. The challenge with inductive reasoning is that, unlike deduction, inductive knowledge is always provisional. The world always changes, and the future doesn't always look like the past. It also doesn't require knowledge of cause and effect. So these systems that learn from induction, from machine learning, are inherently brittle as well: they can't adapt, and they can't handle a world that is complex and messy. There's also this long-tail problem in inductive reasoning, or machine learning, where a specific situation that was not in the training data will trip up the machine. In those cases, the machine is unable to reason toward a response, at least not with induction alone. Essentially, machine learning systems on their own can't extrapolate from prior experience. For example, an autonomous driving system trained in the United States would face numerous challenges if it was deployed in India. It would encounter cows in the road and respond unpredictably. The traffic signs would obviously be different. And even putting language aside, there are going to be tuk-tuks and rickshaws and scooters with four people on them, not to mention that in most places in India that I've been to, at least, traffic rules are not really followed. It's the kind of environment where the training data will not be helpful in new situations.

So it seems like we almost need another type of thinking, something that binds deduction and induction together. As we will see, this is known as abductive inference. We'll explore that after we look at how current AI systems work in a little more detail. Current AI systems, machine learning systems, are an improvement on the classical AI systems I mentioned earlier. Recall that classical AI systems were similar to deduction: they had knowledge hard-coded as true statements and decision trees. But going back 70 years, there were also some attempts at modeling the brain, as I mentioned, through neural networks, and that approach has experienced a resurgence. Going back maybe 15 years, people started to have enough data and compute power to try to model these things more realistically. So the AI systems that have captured the popular imagination today, ChatGPT and all that, are for the most part based on these neural networks, on this theory of cognition that posits that intelligence and the mind arise from the firing of neurons in the brain. In short, they're an attempt to model the brain. That includes ChatGPT, Anthropic's Claude, and Google's Gemini, not to mention the image generators and all the other generative AI. With companies like Apple adding neural engines to their chips, we can see a lasting commitment to this machine learning approach. These machine learning systems are, again, a form of inductive reasoning.
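As a toy illustration of that learn-from-labeled-examples, inductive approach, here is a hedged sketch. The features and training examples are made up, and a real system learns from raw pixels or tokens with a neural network rather than hand-picked features, but the long-tail failure mode is the same: anything outside the training distribution still gets squeezed into the categories the system already knows.

```python
# Toy illustration of induction: learn from labeled examples, then generalize.
# Features and data are invented; real systems learn from raw pixels or tokens.
from collections import defaultdict

# Each training example is a set of observed features plus a label.
training_data = [
    ({"whiskers", "pointy_ears", "meows"}, "cat"),
    ({"whiskers", "pointy_ears", "purrs"}, "cat"),
    ({"floppy_ears", "barks", "fetches"}, "dog"),
    ({"floppy_ears", "barks", "wags_tail"}, "dog"),
]

def classify(features: set) -> str:
    """Pick the label whose training examples share the most features."""
    scores = defaultdict(int)
    for example_features, label in training_data:
        scores[label] += len(features & example_features)
    return max(scores, key=scores.get)

print(classify({"whiskers", "meows"}))           # 'cat' -- resembles the training data
print(classify({"trunk", "tusks", "enormous"}))  # long-tail problem: an elephant is
                                                 # still forced into 'cat' or 'dog'
```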
They have finally realized some success after decades of failure because of a combination of access to larger datasets and specialized computing hardware in the form of GPUs. Boiling it down, the way it works is that there are tokens, basically representing words or parts of words, that are assigned numerical values, and then sentences or images are parsed and generated using large mathematical matrices of probability distributions that pass through multiple nodes in these neural networks. That's probably the most concise description of the way these machine learning systems work today. It definitely glosses over a lot of detail, but that's essentially it: using math to generate text and images by assigning numerical values to different pieces of information. This kind of system is very widely used and has been for some number of years. For example, Facebook uses machine learning to decide what to show you in your feed; same thing with Instagram. Google uses it to recommend YouTube videos, maybe even this one. These companies use machine learning to translate between languages and for ad targeting, and any kind of text-to-speech or speech-to-text system is using machine learning. You could describe machine learning as a bottom-up approach to intelligence, where information is gathered from observation, as opposed to top-down intelligence, which incorporates prior knowledge about the world. Again, it's inductive as opposed to deductive.

Okay, so now we get to the heart of the matter. I want to talk about what's missing from current systems that I think you need to get to AGI. I'm not talking about fairness, transparency, and other safety issues; in this video I'm focused entirely on the pieces that are missing if we're going to create true intelligence in a machine. Those other issues, bias and transparency and fairness and all that, are also important, but that's a different topic. This is focused on the question of general intelligence. The first thing, which is probably obvious to everyone by now, is that factual accuracy is a real challenge with these systems. AI systems currently have persistent and seemingly intractable challenges with facts, including organizing facts and understanding the relationships between facts. This is because of the way they're designed: machine learning isn't really geared towards representing precise factual knowledge. It's more of a mashup machine, or like an amateur poet, you know what I mean? We saw this recently with Google briefly telling people to eat rocks and to put glue on their pizza. There have been some attempts recently to address this factual issue, including something called retrieval-augmented generation, or RAG. I'm not going to go into the details of it, but basically the idea is that you bolt some reliable source of factual knowledge onto your machine learning system, and that then allows the LLM, the machine learning system, to be more reliable. The challenge with this is that it's basically kicking the can down the road in terms of achieving general intelligence: you would still have to create a reliable system that could decide for itself what sources to trust, and you would ideally not want to have to hard-code that every time you enter a new knowledge domain.
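Here is a heavily simplified sketch of that retrieval-augmented generation idea, just to make the "bolt on a source of facts" shape visible. The document list, the word-overlap retriever, and the call_llm function are placeholders I've invented, not any particular vendor's API; real systems use vector embeddings and an actual model call.

```python
# Heavily simplified sketch of retrieval-augmented generation (RAG).
# The documents, the word-overlap "retriever", and call_llm() are placeholders;
# real systems use vector embeddings and a real model API.

DOCUMENTS = [
    "Cheese can be kept from sliding off pizza by letting the pizza cool slightly.",
    "Swans in Australia are commonly black, not white.",
]

def retrieve(question: str, k: int = 1) -> list:
    """Rank documents by naive word overlap with the question."""
    q_words = set(question.lower().split())
    ranked = sorted(
        DOCUMENTS,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return ranked[:k]

def call_llm(prompt: str) -> str:
    # Placeholder for a real model call.
    return f"[model answer grounded in a prompt of {len(prompt)} characters]"

def answer(question: str) -> str:
    context = "\n".join(retrieve(question))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    return call_llm(prompt)

print(answer("How do I keep cheese on my pizza?"))
# The open problem noted above remains: deciding which sources belong in
# DOCUMENTS, and which to trust, is still left to a human curator.
```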
One way to summarize what's missing from current systems is that they don't have common sense. To quote the computer scientist Gary Marcus: "The great irony of common sense, and indeed AI itself, is that it is stuff that everybody knows, yet nobody seems to know what exactly it is, or how to build machines that have it." This is because the systems have no working model of the world internally. They only know what words tend to follow other words, or what pieces of images tend to follow other pieces of images, that kind of thing. Part of common sense is understanding the world and understanding cause and effect. Understanding cause and effect is arguably the most important missing piece, and it's deeply related to basic reasoning that goes beyond mere induction. Maybe we learn it in infancy, and maybe a machine learning system could learn it eventually through a large enough training set, but that seems unlikely, and I'm skeptical of it. As for humans, to echo Immanuel Kant, we seem to have a natural, intuitive sense for time, space, and causality. I have thoughts about why that might be, but I'll defer those to another video as well. I feel like I'm always promising things in future videos, but that's just how I keep these things relatively short, and I know this video is going to be pretty long.

Another way to describe this missing piece, this common sense piece, is to point out that machine learning systems can't handle abstract concepts. They're not able to think in abstractions. At the level of language, this is semantics: the meaning of words and how the words relate to the world and to each other. Systems like ChatGPT are just manipulating symbols, without any understanding of the actual semantic meaning of those symbols or of what those sentences are describing in the world. This is related to symbolic representation. If you remember the discussion of classical AI and deductive reasoning from earlier, machine learning systems don't have that kind of map of symbols, categories of symbols, relationships, and domain trees that they would need to really solve this. For example, you could have a medical diagnosis system based entirely on machine learning and induction, and it would probably recognize flu symptoms based on its training data, maybe even 99% of the time. But what about more obscure illnesses? Or what if there was a Reddit post that jokingly talked about using edible marijuana as a cure-all for the flu, or things like that? You want a system that's going to think these things through. It's fine for a medical diagnosis system to learn about various diseases and treatments by studying medical textbooks, but then we want the system to develop a related set of abstract concepts that it reasons with, rather than simply pattern-matching words and phrases. Do you see what I'm getting at here? We want these systems to have some level of understanding and capacity for contemplation. Essentially, AI systems need abstract causal knowledge, top-down knowledge, in addition to the ability to learn via inductive reasoning. This might be solved by combining top-down and bottom-up approaches, by combining deduction and induction, and some AI researchers are already exploring this possibility.
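As a toy sketch of that hybrid idea, here is what checking a bottom-up, statistical guess against top-down symbolic constraints might look like in miniature. The learned scores and the rules are entirely invented for the diagnosis example above; this is an illustration of the general shape, not anyone's actual system.

```python
# Toy sketch of a hybrid (bottom-up + top-down) diagnosis check.
# The "learned" scores and the symbolic rules are invented for illustration.

def learned_model(symptoms: set) -> dict:
    """Stand-in for a bottom-up, pattern-matching model's confidence scores."""
    scores = {"flu": 0.0, "rare_disease_x": 0.0}
    if {"fever", "cough"} <= symptoms:
        scores["flu"] = 0.95
    if "unusual_rash" in symptoms:
        scores["rare_disease_x"] = 0.40
    return scores

# Top-down, symbolic knowledge: hard constraints the statistical guess must respect.
RULES = {
    "flu": lambda s: "fever" in s,                   # no fever, no flu
    "rare_disease_x": lambda s: "unusual_rash" in s,
}

def diagnose(symptoms: set) -> str:
    scores = learned_model(symptoms)
    admissible = {d: p for d, p in scores.items() if RULES[d](symptoms)}
    return max(admissible, key=admissible.get) if admissible else "unknown: escalate to a human"

print(diagnose({"fever", "cough"}))   # 'flu'
print(diagnose({"cough"}))            # 'unknown: escalate to a human'
```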
Another way of describing this piece is to say that AI systems need the capacity to develop theories about the world, theories that can be revised based on experience, counterfactuals, and testing. To quote Gary Marcus and Ernest Davis: "No fact is an island. To succeed, a general intelligence will need to embed the facts that it acquires into richer, overarching theories that can help organize those facts." We saw this with large language models: they don't understand anything they're saying. They're just stochastic parrots, as people say, kind of dumb machines. In short, we want a thinking machine that is developing an ongoing model of the world, full of interconnected concepts with which it can reason. We humans do this all the time: we associate a new piece of information or a new concept with prior concepts and memories through association and analogy. "Oh, this reminds me of that other thing," that kind of thing. Sometimes those are things that happened to us and even carry a feeling or a sensation. I don't think emotion or embodiment are necessary for true intelligence, but there is more to explore there, again, in a future video. The same thing goes for desire. I think there's something going on between desire and intelligence. For us humans, our model of the world is shaped in part by our desires and motivations, and our desires are in turn shaped by our model of the world. The contour of the world of possibilities is shaped by our ideas about what's possible. I think there's a lot more to explore around desire and understanding, but again, future video.

A larger point I want to make is that technology is always downstream of theory. Let me say that again: technology is downstream of theory. The computer scientist Erik Larson reminds us of this in his wonderful book The Myth of Artificial Intelligence. As we can see by now, the current approach to achieving general intelligence is not built on any viable, robust theory of intelligence. Instead, researchers just keep piling on more data and faster processors with a kind of blind faith that machine learning is all we need, based, I guess, on the idea that humans, with their neural networks, seem to be proof that it's all we needed. But I don't think there's really a one-to-one correspondence there. I've explained that a little bit in this video, but there's a lot more to unpack in a different one. Anyway, there's this notion that we humans are blank slates and that there should be no need for prior knowledge, hard coding, all that stuff. So they've created these systems that are impressive at a very narrow set of tasks, and they do an okay job of creating the illusion of intelligence.

But perhaps there is one theory of mind that would address most of these missing pieces: abductive reasoning. Abductive reasoning is a lesser-known form of inference that hasn't been studied as extensively and really only came to prominence a hundred years ago with the philosopher Charles Sanders Peirce. Abductive inference is forming hypotheses based on observation, experience, and common sense. It's essentially making educated guesses; it's known as inference to the best explanation. Abductive inference is making guesses or hypotheses that you then confirm through deduction, investigation, experimentation, and so on. For example, you come home, and the living room light was on when you arrived.
But you're confident that you turned it off when you left, so you form a hypothesis that someone must have been in your house while you were away. It might not be true; it could have been a smart light system or something else. But this is the kind of example I'm talking about, essentially the kind of thinking we associate with invention, detective work, and scientific discovery. It's flashes of insight, those eureka moments where you may have woken up from a nap, or you're in the shower, or daydreaming, and suddenly something comes to you, like the solution to a long-standing problem just pops into your head. One of the most famous examples is August Kekulé's daydream reverie about the snake eating its own tail, which inspired his insight into the ring structure of the benzene molecule. That kind of thing is an example of abductive reasoning.

So, to contrast it with induction: inductive reasoning moves from facts to generalizations that provide a theory of regularity, a theory that can be disproven. Abduction, in contrast, moves from a specific observation to a hypothesis that might explain it. It's a way of moving between modes of thinking, connecting deduction and induction with flashes of insight. It's kind of the glue that holds deduction and induction together; induction basically requires abduction as a first step. This is why I think getting machine learning systems going on a new dataset requires human supervision in the beginning: we bring our general knowledge of the world to bear on the problem, or on the training set. For example, when training AlphaGo, the Google DeepMind team used human inferences, abductive inferences in other words, to design a system that could successfully accomplish the task. They supplied wisdom about Go from the outside for the inductive framework they were building.

I think the best example of abductive reasoning is the Sherlock Holmes stories. In fact, there's a great quote from The Sign of the Four where Sherlock Holmes says, "When you have eliminated the impossible, whatever remains, however improbable, must be the truth." That's a pretty good description of abduction. And we see with Sherlock Holmes, or Columbo, or any archetypal detective, that they're drawing on an understanding of human nature and behavior, on past experience, and on a general knowledge of the world to identify patterns and connections in a scant list of facts and arrive at a hypothesis that can then be confidently tested through observation. So abduction, unlike deduction or induction, is not necessarily linear thinking. There are often these mysterious, unconscious leaps; it often manifests as a spontaneous insight, a flash of insight. Sometimes it's conscious, sometimes it's automatic. And although it does require deduction and induction, I think it points to a more mysterious phenomenon in our thinking, where hypotheses and solutions spontaneously emerge as a result of spending a lot of time with a problem and letting it percolate in our subconscious or something. These new solutions or new thoughts arrive fully formed in your mind, like they do for an inventor, or a detective, or a medical doctor diagnosing an unusual set of symptoms. So what is going on with abductive reasoning? We don't really know, and this makes it hard to model in code.
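For what it's worth, here is a toy illustration of the "inference to the best explanation" framing, using the light-left-on example above. The hypotheses, their prior plausibilities, and the scoring rule are all invented and drastically simpler than what real abduction would require; in particular, the hard part, generating new hypotheses rather than just ranking the ones you were handed, is exactly what this sketch does not do.

```python
# Toy "inference to the best explanation" for the light-left-on example above.
# Hypotheses, priors, and the scoring rule are invented; real abduction also
# involves generating hypotheses, not just ranking a fixed, given list.

observations = {"light_is_on"}

hypotheses = {
    "someone_entered":   {"explains": {"light_is_on"}, "prior": 0.2},
    "smart_light_timer": {"explains": {"light_is_on"}, "prior": 0.5},
    "power_surge":       {"explains": set(),           "prior": 0.1},
}

def best_explanation(obs: set, hyps: dict) -> str:
    """Rank hypotheses by how much they explain, weighted by prior plausibility."""
    def score(name: str) -> float:
        explained = len(obs & hyps[name]["explains"])
        return explained * hyps[name]["prior"]
    return max(hyps, key=score)

print(best_explanation(observations, hypotheses))  # 'smart_light_timer'
```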
Perhaps it's something quite deep, a pointer to the nature of mind and consciousness. But again, that kind of question is better explored in another video. In any case, this type of reasoning gets to the heart of what it means to be human, I think, and to think adaptively and creatively. The bottom line is that with abduction, an AI system could reason with incomplete or inconsistent information. As we've seen, current AI systems only know what they've seen in their training sets. Because induction is always brittle, and things are always subject to change or exception, abduction is necessary to provide explanations in particular instances and to adjust the inductive knowledge set, to hypothesize about the nature of the outlier and revise internal models. So the moral here is that AI systems need to be adaptable, and we need to address all of the missing pieces I just listed. We want these systems to be able to handle imprecision, vagueness, and fuzziness in the real world. That's part of what it is to be intelligent. There has been some academic research into addressing abduction, but it seems that these AI researchers are exploring the idea and testing it using large language models, and I don't know if that's going to work, because using induction to achieve abduction doesn't seem like a proper application of the theory. So again, I don't think true intelligence is possible without not only addressing these missing pieces but solving abduction. I guess time will tell.

I did ask the three most popular chatbots whether they are adept at abductive reasoning. I asked them, "Are you able to perform abductive reasoning?" Here's what ChatGPT said: "Yes, I can perform a form of abductive reasoning to some extent. I can generate hypotheses or explanations based on the information you provide and the knowledge I have up to my last training data in September 2021. However, please note that my ability to engage in abductive reasoning is limited to the patterns and information present in that training data. For complex or specialized problems, it's always a good idea to consult with domain-specific experts who have access to the most current information and research." You can see how it can only reason when something matches the patterns in its training data. Google's Gemini said essentially the same thing. And when I asked Claude, Claude was more interesting and honest; it was the only one that really admitted to not having any abductive capabilities: "I don't have specific capabilities for abductive reasoning, but I can try to explain the concept. Abductive reasoning involves making inferences to the best possible explanations based on limited or incomplete information. It typically starts with an observation or set of observations, then seeks to find the simplest and most likely conclusions that could explain those observations. While I don't have explicit programming for abductive reasoning, some of my machine learning capabilities allow me to make logical inferences and provide likely explanations for limited information. But I don't have the same flexible abductive reasoning capabilities that humans possess. My reasoning is limited to what my training data and algorithms enable."

Another challenge with machine learning is that it's greedy and requires massive amounts of data. Systems like ChatGPT are voracious, and they're already running out of training sets.
That's why companies are creating synthetic data to solve that, which seems crazy and doesn't seem like the right path to follow. There have been numerous stories in the news recently about major AI companies running out of data. So those are just some of the missing pieces I wanted to mention; I'm sure there are others too. I wanted to keep this discussion limited in scope to whether or not there's any kind of intelligence there, but yes, there are other important issues: bias and lack of fairness, misinformation, deepfakes, alignment more generally, the environmental impact of running all these systems in the cloud, and of course the impact on employment and the economy. I offer all this to underscore the point that intelligence is really hard and kind of mysterious.

Another way to summarize what's missing from today's systems is trust. As Gary Marcus and Ernest Davis have said, the core problem is trust: current systems can't be trusted with anything important. We've seen this with Google's experimental AI search overviews, with the glue and the pizza and the rocks and everything. So in short, what's missing today is basically broad intelligence that we can trust. Another way of summarizing it is to say that we don't have top-down intelligence. The shortcoming of machine learning is that it is bottom-up, and the knowledge is only gathered from observation. But intelligence also requires top-down reasoning based on prior knowledge and background knowledge about the world. Google Search has been combining machine learning with symbolic manipulation for years, so it's kind of a mystery to me why their AI search results are so unreliable.

So how are we going to get to artificial general intelligence? Solving for common sense and true reasoning would be a real revolution in artificial intelligence, no doubt, but I don't really see how we can get there with current systems. So again, why is there so much hype right now, in light of all these missing pieces? Why are AI developers so confident that AGI is around the corner? They should understand everything I've been saying. I think it's a combination of cynical puffery and a form of groupthink. It benefits these companies for people to think that we're almost there, and few people in the industry have any incentive to question the theories underlying the current approach to intelligence. So if no AI company is pointing to some novel approach beyond machine learning and large language models, I don't see how we're going to reach AGI by 2029, as Kurzweil predicts. Solving for common sense and true reasoning will be the key; we need a more robust theory of mind.

In fact, OpenAI has acknowledged these shortcomings. Sam Altman, the CEO of OpenAI, was quoted as saying that he thinks we've reached the end of the road with current approaches and large language models. He says that these models have reached their limit, that any future advances will require new ideas, and that more progress will not come from making the models bigger but from making them better in other ways, though he doesn't really say how, exactly. Anyway, his statement suggests that GPT-4, the latest model from OpenAI, could be the last major advance to emerge from the company's strategy of making models bigger and feeding them more data.
In fact, Mira Murati, the OpenAI CTO, made a similar comment this month: "Inside the labs, we have these capable models, and, you know, they're not that far ahead of what the public has access to for free." And OpenAI has not given any indication of the kinds of alternative research they're pursuing to create a new approach, if they're even doing that. Maybe they are, and we'll find out.

Okay, to wrap up: people like Kurzweil and those in the orbit of OpenAI are very confident that AGI is right around the corner. Feeding this kind of hype is obviously in these companies' interests, but it also creates a kind of groupthink and an overblown exuberance that is, I think, detached from reality. Maybe we need a new generation of AI researchers. To quote Adnan Darwiche, the chair of UCLA's computer science program, back in 2019: we need a new generation of AI researchers who are well versed in and appreciate classical AI, machine learning, and computer science more broadly, while also being informed about AI history. AI researchers must draw not only on the many contributions of computer science, often forgotten in today's enthusiasm for big data, but also on a wide range of other disciplines, from psychology to linguistics to neuroscience. And I would add philosophy to that list as well.

So if the goal is to create true general intelligence, it needs to be able to behave in ways that are helpful, adaptable, and robust; it needs to be reliable. And if and when we get there, we may find, again, that we have created a wholly new form of intelligence that is not at all like human intelligence. So I'm skeptically optimistic. At the end of the day, I am a technologist and a futurist who loves to think about these things and gets excited about technological advancement, but I am skeptical of the outlandish claims being made right now about AGI and that kind of thing. I think we, the general public, deserve the truth about today's technology. And hey, maybe there are developers and AI researchers in a lab somewhere working on abduction, or symbolic representation, or somehow solving common sense. I don't know; I hope so. In this video, we haven't even touched on what makes us uniquely human, which I think goes beyond intelligence and includes imagination, intuition, emotional intelligence, embodiment, and wisdom. In my next video, I want to dive into the ideologies that are driving AI development in Silicon Valley today, including transhumanism, cyborgism, and the other kinds of techno-gnostic ideologies coming out of Silicon Valley. Again, I'm sure I got some technical details wrong or glossed over something important, so please feel free to correct me in the comments or send me an email. If you made it this far, I want to thank you so much for sticking with me. I hope it was helpful, I hope you enjoyed it, and I hope it has enlightened us all.

Podcasts we love

Check out these other fine podcasts recommended by us, not an algorithm.

Hard Fork

The New York Times

Weird Studies

Phil Ford and J. F. Martel

Your Undivided Attention

Tristan Harris and Aza Raskin, The Center for Humane Technology