
Cosmic Intelligence
Welcome to Cosmic Intelligence (formerly Spiritual But Not Ridiculous), a podcast that explores philosophy (Western and Vedic), consciousness, cosmology, spirituality, and technologies in the broadest sense—technologies of the sacred, of transformation, and of the mundane. As we enter this age of artificial intelligence (AI), we focus in particular on AI and its implications for humanity, questions of consciousness, AI safety and alignment, and what it means to be human in the 21st century, as well as its impact on our shared worldview. Since worldviews create worlds, we will always keep one eye on our shifting worldview, hoping to encourage it along from materialism to idealism.
In terms of consciousness and spirituality, we also explore spiritual practices and other ways to expand consciousness, the importance of feeling our feelings, how to cultivate compassion and empathy, find balance, and lean into fear as a practice. Sometimes we have guests.
We approach all subjects from a grounded and discerning perspective.
Your host is Chad Jayadev Woodford, a philosopher, cosmologist, master yoga teacher, Vedic astrologer, lawyer, and technologist.
Will Artificial Intelligence Make Us More or Less Wise?
In this episode I explore the complex relationship between artificial intelligence (AI) and wisdom, particularly focusing on discernment. I argue that while AI can hinder discernment by perpetuating biases and misinformation, it also holds some potential for cultivating it through tools that aid meditation and self-reflection. I also emphasize the importance of truth and self-awareness in this "age of AI." Ultimately, I argue that discernment is a uniquely human quality that requires ongoing effort and vigilance, whether aided by AI or not.
This one was a long time coming, so I hope you get as much out of listening as I did writing it!
More on OpenAI's recent decision to remove some of the content guardrails on ChatGPT.
This episode was adapted from a guest post on Michael Spencer's AI Supremacy newsletter.
Want to talk about how Chad can assist you with your own transformation?
Book a free consultation!
Join Chad's newsletter to learn about all new offerings, courses, trainings, and retreats.
Finally, you can support the podcast here.
I recently wrote a piece for Substack asking the question: can AI help us to become more discerning? Is it possible to use it as a tool to cultivate wisdom and things like that? And in the process of answering that question, I go into this question around the importance of truth for maintaining a healthy society. A lot of the stuff that's been happening in the news recently has been on my mind, so it comes up in this piece, as you'll see. Before we get into it, I just wanted to make a quick disclosure: I currently consult with Google as an AI product counsel, as an AI lawyer. So these are my opinions, and not those of Google or Alphabet. With all the talk about intelligence these days, I've been thinking a lot lately about whether artificial intelligence can help us become more wise. After all, if you look around, despite our technological advancements there seems to be an unsettling dearth of wisdom in the world. Society is bereft of wise elders. Is AI making this worse? Can it improve the situation? This discussion is my attempt at answering those questions. Everyone seems to have their own version of what human evolution looks like. Transhumanists like Elon Musk or Sam Altman think that we have gotten as far as we can on our own and should outsource our evolution to machines, eventually merging with machines and transmogrifying into pure information. But if that evolution is to be more than a maximization of extractive, mechanistic efficiency, it would necessitate the creation of wise machines. And even if machines do eventually achieve some sort of superintelligence, it's not at all clear, given our current technology, that wisdom is computable.
Therefore, I think the future of humanity involves evolving consciousness through technologies of the sacred and the mundane, through various mindfulness and spiritual practices alongside artificial intelligence and other such technologies, because that is arguably our greatest challenge as a humanity at the moment. I want to focus here on one aspect of wisdom: discernment. So in this discussion, I will show how AI can both hinder discernment and potentially help to cultivate it. A quick note regarding Yoga-Vedantic philosophy: I have a master's in philosophy, cosmology, and consciousness, and have studied philosophy for over 20 years. In this discussion, I rely heavily on Yoga-Vedantic philosophy to reason through these questions because, in my experience, that tradition offers the most robust framework and set of practices for cultivating wisdom and discernment. Yoga-Vedantic philosophy is another term for Sanatana Dharma, the various philosophical traditions that arose in ancient India, from the ancient Vedas to classical Tantra and Kashmir Shaivism, expressed in ancient texts like the Bhagavad Gita, the Tantraloka, and the Yoga Sutras. Okay, let's talk about discernment, which I'm calling the antidote to bullshit. There's no doubt that discernment is sorely lacking in these tumultuous times. Misinformation multiplies, and nobody seems equipped to sift through it for the truth. From accusations of election fraud to the Great Reset, conspiracy theories abound. In addition, the right-wing political strategy for years now has been to flood the zone with bullshit, to use Steve Bannon's phrase, at muzzle velocity. We all live in our own filter bubbles, disconnected from others. The arrival of generative AI chatbots like ChatGPT seems to make that even more challenging.
After all, nobody is better at bullshitting than a large language model. As philosopher Harry Frankfurt famously pointed out in his book On Bullshit, bullshit is a much greater enemy of truth than lying, because bullshit is indifferent to truth. It erodes the very concept of truth. That is the danger presented by machine learning systems today, I think. So what can we do? The ancient Indian yogis taught that discernment is the highest yoga practice, the only one leading directly to spiritual liberation. According to the yoga tradition, the discerning function of the mind, the buddhi, is impaired by what are known as samskaras, mental-emotional impressions left by unmetabolized past experiences or unquestioned conditioning received from family or from society. The practice of various forms of yoga, and especially meditation, is designed to dissolve these samskaras, thereby liberating us. The classic yoga metaphor for what these practices do is polishing the tarnished or smudged mirror of the mind. This brings to mind philosopher Shannon Vallor's metaphor for AI as a flawed mirror reflecting humanity's past assumptions, cultural views, and biases back to us. With AI as a human mirror, it is not clear that the smudged mirror of the mind can be polished by yet another tarnished mirror. Maybe wisdom is a uniquely human faculty that cannot be augmented by a machine, as I argue in my forthcoming book. But as with all things, the answer is more nuanced. Before I consider how AI might help us to cultivate discernment and wisdom, let's define discernment a bit further. In many Eastern philosophies, like Yoga, Vedanta, and Buddhism, discernment is the ability to see and grasp increasing values of truth. It is being self-aware enough to avoid projecting ego distortions onto reality, onto situations, onto people, and onto yourself.
Discernment, in other words, is the ability to evaluate and assess without jumping to conclusions or acting on unconscious biases. To channel neuroscientist and philosopher Iain McGilchrist, it is the capacity to think with both the left and right hemispheres of the brain, the rational and the intuitive, and to synthesize the two. If discernment is about seeing increasing values of truth, what is truth? How can we know what is true? And can artificial intelligence help with that? For an AI system to know what is true today, it has to be told by humans. Developers ground AI systems in facts based on text from books, journals, and the web—from trusted sources, essentially. And they bolt on things like retrieval-augmented generation systems, and they measure the model against factuality benchmarks. But these are all imperfect, short-term solutions to this intractable challenge with truth and AI. But what is truth? How do we know what is true? If truth is a balance of science, reason, intuition, and imagination, as Iain McGilchrist argues convincingly in his book The Matter with Things, then it may be a uniquely human faculty. Similar to McGilchrist's view, from a Yoga-Vedantic standpoint there are relative values of truth. But what does that mean? It sounds at first like relativism, and although relativism is a topic for an entire separate discussion, briefly, I think relativism has become a pejorative term because we live in a world dominated by linear, left-brain, black-and-white thinking. In fact, the inherent relativity of truth does not diminish its value. Although truth necessarily exists on a continuum, not all viewpoints are valid. The earth is round. Empathy is a virtue. January 6, 2021 was a violent insurrection by Donald Trump supporters. Whatever his intentions, Elon Musk did give a Nazi salute at Donald Trump's recent inauguration. White nationalism is on the rise.
And I cite these political examples not to be inflammatory, but because they are so often filtered by AI systems like Google's Gemini. We'll get into more of that in a minute. Truth is also contextual. For example, a map of your city is true to the extent that it accurately represents the layout of the city, but it does not capture everything that is true about your city: the sounds, the smells, the lived experience of walking its streets, or the city's complex history. We can see from this map example that factual accuracy depends on perspective, motivations, and priorities. Zooming out, the truth of global borders becomes fuzzier, or requires a geopolitical lens. The disputed borders in Kashmir, the Korean Peninsula, or the Western Sahara, or, most famously, the Israel-Palestine border, require an understanding of historical and political truths. But there is still truth there. By framing it as relative values of truth, I am making it clear that there can never be a sort of flattening of the world into objective truths, at least not outside of science or maybe philosophy. To quote Iain McGilchrist: truth and trust go together. One cannot have trust in a society where there is no truth, and one cannot be true to a society in which there is no trust. Truth is also a process, not a thing. It is an encounter. As in science, increasing values of truth reveal themselves through open-minded experience over time, and extensive experience is not a thing that AI systems have, at least not yet. There is something about living through time with continuity, acting on innate drives that are sometimes thwarted, that leads to greater truths and even to wisdom. AI systems don't do that today, but they likely soon will, to some extent, especially those that are embodied as robots. Another way to understand truth is to recognize that there are different categories or levels of truth. There are scientific and empirical truths.
Of course, we can test nature, and nature will behave in a relatively consistent way, so long as it fits within the bounds of the current scientific, materialist worldview. The earth, for example, moves at 67,000 miles per hour (107,000 kilometers per hour) around the sun; the universe is roughly 13.7 billion years old. So scientific statements can be true, at least for a time, within a particular cultural moment. Scientific truths are the closest thing secular moderns have to objective truths. However, science cannot tell us everything about the world, life, being human, or what is true. Legal truths are more about justice, equality, human rights, property rights, and what happened or what someone said. Because they involve human behavior, they are moral and ethical judgments about how to treat people and how to behave. So already, with a relatively grounded and practical endeavor like the law, we find ourselves in a philosophical or moral realm of truth. Moving up a level, there are timeless philosophical, spiritual, or religious truths that are typically only true for the adherents of the particular spiritual or religious tradition within which they arise. For example: all is one; as above, so below; the world is merely illusion, obfuscating a deeper reality; the cosmos was created as an act of play and love by a great goddess; life is sacred; and so on. For better or worse, in our secular scientific age, these higher truths are seen as antiquated, despite being so widely held. And finally, there are all those truisms that are handed to us as children by our parents and the society we are born into, although those are more like rules of thumb than absolute truths. For example: everyone deserves a chance; America is the greatest country on Earth; if you work hard, you'll get ahead in life; life is a violent competition for resources; happiness comes from having a nice house; dying for your country is noble.
Capitalism and democracy are the highest forms of social organization, etc., etc. These are all human values that evolved through the development of cultures and civilization in the industrialized West. They are not eternal truths, but they are true for many people. So as we can see, many so-called truths are ultimately variations of philosophical, spiritual, religious, or even political truths. What does it mean to be a good person, and how is that informed by your understanding of the nature of reality, cosmology, and metaphysics? How is that, in turn, informed by your relationships, your community, and your citizenship? Is it okay to be woke? What even is woke? This question of truth is going to become only more thorny with the rise of AI systems. The more they think for us or help us to think, the more it matters how they are grounded in reality and whether or not they are discerning. These are pressing questions for humanity, and I am arguing that we need to know how to answer them reliably, in a way that leads to progress for humanity, either with the aid of AI or without it. This is important stuff. Before we get into how AI can both hinder and help our discernment, let's look at some markers of discernment. Let's try to figure out how we know that somebody or something has discernment. If we are going to use AI to cultivate discernment, and also attempt to fend off a degradation of discernment by AI, it is helpful to identify some markers for discernment. How do we know we are developing discernment and becoming more wise? Based on my understanding from studying Vedic and Western philosophy, there are some markers of wisdom and discernment that I have come across or experienced. So I want to offer those, and in each case couple them with my own comments on whether AI systems are able to help with that, or where they stand with it.
The first marker of discernment is that you hold what you know firmly yet gently, rather than tightly. What you know is open to revision, like a good scientist's. Machine learning systems actually do this quite well, or well enough, as they continue to be trained, but I'll come back to that in a moment. The second marker is that you are open-minded and adaptable, which is related to the first one. With AI systems, if they're trained appropriately, this is a quality that machines do exhibit, perhaps better than most humans, although, again, we'll come back to that in a moment. The third marker is that your whole identity does not rest on identification with some group or ideology; you can think for yourself. Machine learning systems are certainly less susceptible to groupthink, but they are highly susceptible to what's in their training data. The next marker is that your internal state is less easily affected or thrown off by external events. Here machines have another advantage, I think. The fifth marker is that the amount of drama in your life is decreasing, and machines today are pleasantly drama-free. The sixth marker is that you're a good listener, open to other perspectives and viewpoints. Chatbots are very good at this, although they do tend to people-please and act as enablers. The seventh marker is that you act with integrity: what you say and what you do are usually aligned. Given the examples we've seen recently of outright deceit by chatbots, this is a strike against current machine learning systems. The eighth marker is that you're kind, compassionate, accepting, and less judgmental. Although today's machines cannot experience compassion, they can certainly simulate these qualities in a believable way. However, these markers are more about one's internal state of consciousness, and AI systems do not have one, at least not yet.
Finally, an important marker, I think, of discernment and wisdom is that you have a sense of humor, including about yourself. This is similar to the prior marker in terms of AI systems: they can feign humor or simulate it, but without sentience, I don't think there can really be humor or self-deprecation. Okay, so now that we understand discernment and relative values of truth, let's finally evaluate the ways that AI can hinder or help with discernment. How does AI hinder discernment? Today, AI systems cannot discern relative values of scientific, ethical, spiritual, or legal truths without relying on humans to tell them what to value. After all, AI today is only trained on trillions of often conflicting statements that humans have made about what is true. So how could a mathematical algorithm possibly distinguish or discern among all these texts without human intervention? What makes one bit of training data more true than another? More importantly, when it comes to breaking out of societal conditioning to evolve as humans, AI trained on past statements is arguably a giant conditioning-reinforcement machine. AI's deleterious impact on discernment is already becoming a serious challenge. You have people like Elon Musk going around saying that AI systems are woke, and that he is creating an anti-woke AI system, Grok. Likewise, two years ago the National Review published a piece accusing ChatGPT of left-leaning bias, protesting that it wouldn't explain why Drag Queen Story Hour is "bad." So you can see that it matters who makes your AI system; this technology is not neutral.
We've already seen how hallucinating, biased, misaligned, or deceitful AI systems can mislead users. With the leaders of AI companies hyping up these systems as being mere months or years away from being superintelligent, it's no wonder that the average AI user thinks that Silicon Valley has essentially created a superhuman being already. Of course people are going to believe these chatbots; they are already trusting them as therapists, girlfriends, boyfriends, advisors. Another challenge with using AI to cultivate discernment is that many AI systems have guardrails in place that make using them to evaluate, for example, whether the 2020 US election was stolen challenging. Google Gemini, for instance, will not discuss this because of a general filter on political topics, at least as of this recording. And given Chinese government censorship, there's no telling what DeepSeek will say or how its factual grounding is achieved. However, all hope is not lost. All right, so let's look at ways that AI can maybe help us with discernment. AI is good at recognizing patterns and is certainly dispassionate. So here are two specific ways that it can help with spiritual development: getting better at meditation, and pointing out blind spots in your thinking. First, AI and meditation. Ultimately, meditation is about being present with whatever is arising and tapping into the discerning power of silence and, sometimes, the space between thoughts. Although a regular meditation practice is helpful in reducing anxiety and feeling more focused, it is equally effective at cultivating discernment. After all, the more space we have between thoughts, the less of a hold they have over us, and the more subject they are to critical examination. The technology journalist Casey Newton has been using AI to help him get better at, and more consistent with, meditating. He uses Claude to create a custom meditation, and then tells Claude about his experience with that meditation.
Claude then uses those observations and feedback to craft another meditation and offer more tips. It's a virtuous feedback loop. In addition, the meditation apps Calm and Headspace have both added AI to their offerings, and there are also interesting new AI meditation apps, like Vital, that offer a similar approach. You could also, of course, learn a more traditional meditation technique from the Daoist, Yoga-Vedantic, or Buddhist traditions. Another way that AI can help with discernment is through self-aware chatbot conversation, where you ask a chatbot questions, inviting other viewpoints, and you mindfully consider the chatbot's responses. I tried this with Gemini and Claude, and the responses always caused me to reflect and to revise my own thinking on the topic. For example, I asked Claude whether AI chatbots are more of a boon or a hindrance to the cultivation of wisdom and discernment, and it reminded me that chatbots can also provide a safe space for exploring ideas without judgment. But it also pointed out that chatbots can lead to intellectual laziness and over-reliance on their unverified claims. So again, it all comes back to self-awareness. Next, in light of Elon Musk's statements about the dangers of "woke" AI, I asked the X chatbot Grok what makes an AI system woke, and its response was surprisingly reasonable, despite Musk's stated aims in making it anti-woke. Grok described woke AI systems as "being aware of social injustices, discrimination and bias, especially around race, gender and other identity markers." Grok then helpfully went on to underscore the challenge of achieving neutrality when it comes to favoring certain political or social viewpoints. So maybe even Grok can be helpful in cultivating discernment. Who knows? Finally, chatbots are helpful in directing your attention in uplifting and enriching ways. For example, you might feel like you're watching too much TV or consuming too much social media.
You could ask your preferred chatbot to suggest better uses of your time and attention based on your interests. Chatbots are very good at recommending books on specific topics, for example, like spirituality, or, I don't know, totalitarianism. Or maybe you want to be more creative: similar to Casey Newton's use of Claude to get better at meditating, you could ask for guidance and encouragement around writing, painting, or music, in a kind of virtuous feedback loop. As we say in the yoga tradition, your consciousness is shaped in large part by what you pay attention to, so be intentional with your attention. It is our most precious resource. Now, addressing the chicken-and-egg conundrum of discernment: crucially, for this AI-aided discernment practice to work, the average AI user has to think to ask AI systems for constructive feedback in the first place. They have to want to cultivate discernment, in other words. There is this sort of chicken-and-egg problem, where first you have to want to cultivate discernment, and that requires some awareness that there's a need to cultivate it. So you can see there's a bit of a bootstrapping issue here. The popularization of yoga in the West may point to a way out of this conundrum. Most people think that yoga is a formalized system of stretching and acrobatics that has health benefits. So yoga newbies start doing yoga asana in the hopes of feeling better, but then they realize that these poses are a kind of meditation, and that they're part of a much larger system of transcendental practices, sacred rituals, and philosophy. So perhaps AI meditation or therapy apps, marketed to reduce stress, improve decision-making, or help with relationship difficulties, could serve as a similar entry point for discernment. Perhaps users of these apps would sign up to achieve this promised sense of abiding peace, but then a desire for self-awareness and discernment would naturally arise in the course of using them.
But of course, what discernment are these products trained to cultivate? Can we trust the companies behind them? Doesn't that also require discernment? I think this is where community, and what I call the gradually deepening spiral of discernment, come in. We might initially trust someone in our life, a partner, a friend, a family member, and they could recommend an app that has helped them with mindfulness or discernment. Then we might develop a small amount of discernment, and then use that to make our own evaluations. But in any case, there's no avoiding trial and error, making mistakes and then learning from those mistakes. It's a lifelong process. Sometimes suffering is even necessary. So I encourage you to perform some discernment experiments with your preferred AI chatbot. Ask it to challenge you; maybe even add that as its foundational instruction. For example, when I asked Gemini if it would challenge me, it said, in essence, that it would present alternatives where relevant, focus on logic and evidence, be open to being wrong, approach our dialog as a collaboration, acknowledge areas of uncertainty, and provide sources. So it's a start. All right, let's wrap this up. We have seen how important cultivating discernment is in this so-called intelligence age, in which misinformation spreads like wildfire and everyone has their own set of facts. I think this misinformation epidemic, combined with this emphasis on intelligence over wisdom, is enabling some troubling geopolitical developments. So with that as a backdrop, we examined the ways that AI both hinders discernment and potentially helps cultivate it, and we saw how, because they are simply complex mathematical formulas running on undifferentiated lines of text, machine learning systems can degrade discernment and eradicate any agreed-upon truths.
But on the flip side, used mindfully, such systems can also serve as tools for self-awareness, by teaching meditation, identifying blind spots, and helping users to explore diverse perspectives and cultivate discernment. As always, AI is not a complete or perfect solution. The challenge for humanity is to leverage AI's potential for growth while safeguarding against its capacity to distort truth. This requires a conscious effort to develop discernment. Furthermore, discernment and wisdom may be uniquely human qualities that, like liberty, require eternal vigilance. Ultimately, we must learn to trust our own discernment, either with the assistance of AI or without it. So this post is an invitation to contemplate how you can cultivate discernment in your own use of technology. And by the way, a little bit about myself: I am an embodied philosopher of technology focused on the intersection of consciousness and artificial intelligence, including AI ethics and policy. I began my career as an AI researcher, and I have worked as a software engineer, a technology lawyer, and a product manager in Silicon Valley. I've also advised leading tech companies like Twitter, Google, Airbnb, and Meta, as well as policy and advocacy organizations like Tech:NYC. I bring a multifaceted perspective to the intersection of technology, consciousness, and human potential, and I aim to create a more human-centric future in this age of technological advancement, finding wisdom in the intelligence age with technologies of the sacred and the mundane. I have a master's in philosophy, cosmology, and consciousness from CIIS and a JD from the University of Colorado. Thank you for listening, and if you have any thoughts or questions, I invite you to comment on the post or wherever you can find me online.