Cosmic Intelligence

Hit of Happiness: The Philosophy of Artificial Intelligence with Brian Dubow

August 20, 2023 Chad Jayadev Woodford Season 3 Episode 3

This week we're sharing an episode of a podcast called Hit of Happiness, where Chad spoke to Brian Dubow about artificial intelligence and consciousness, among many other things. This may have been Chad's favorite interview so far, just because he loves talking to Brian and because they explored so many topics in such a coherent fashion.

This interview is a great teaser for this three-part series Chad is working on about the myth of artificial intelligence, the nature of consciousness, transhumanism, spiritual machines, and metaphor. As our long-time listeners know, Chad likes to do these deep-dive episodes once in a while. It’s taking a while to do all the research and writing so we can finally record it. But we think it will be worth the wait. Look for the first episode of that series in the coming week.

Support the Show.

Want to talk about how Chad can assist you with your own transformation?
Book a free consultation!

Learn more about Chad's newly relaunched meditation course.
Or book a Vedic astrology reading.

Join Chad's newsletter to learn about all new offerings, courses, trainings, and retreats.

Finally, you can support the podcast here.


Chad Woodford:

This week, I'm sharing an episode of a podcast called Hit of Happiness, where I spoke to Brian about artificial intelligence and consciousness, among many other things. This may have been my favorite interview so far, just because I love talking to Brian, but also because we explored so many topics in such a coherent fashion. And by the way, I'm recording this right now in the middle of the hurricane in Los Angeles, so you might hear a little bit of rain sound in the background. But yeah, a little ambiance. Anyways, this interview is also a great setup for this three-part series that I'm working on, about the myth of artificial intelligence, the nature of consciousness, transhumanism, spiritual machines, and metaphor. As my longtime listeners know, I like to do these deep-dive episodes once in a while. It's taking a while to do all the research and writing so I can finally record it, but I think it will be worth the wait. A few highlights from my conversation with Brian today include the fact that AI has pros and cons: it can automate monotonous work and allow more creativity, but it also poses risks like job displacement that require broader societal solutions. Also, views vary on whether AI can achieve phenomenological consciousness, and in the interview with Brian I offer several reasons why this might be an intractable challenge. And as I will be talking about in my upcoming three-part series on the podcast, fears do exist about an uncontrolled, run-rampant, superintelligent AI, but I think these capabilities are often overstated. AI still relies on training data and does lack common sense, and there are so many other reasons that we probably won't get there anytime soon. And finally, the future remains uncertain around AI, but there are reasons, I think, for optimism, if we can mindfully guide the progress of these systems and support their complementing our human strengths rather than replacing humans altogether.
I think the key is maintaining humanity's agency and emphasizing humanity's potential alongside increasingly capable AI systems. So that's just a little bit of a preview of the conversation I had with Brian. So here we go. This is my interview with Brian; I think you're gonna really enjoy it.

Brian Dubow:

Hello, fellow happiness seekers. Welcome back to the Hit of Happiness podcast, all about helping you reframe your reality, spread positivity, and transcend your perceived limits. I met today's guest at a dinner party in LA a few weeks ago, and I felt like I could have spent days picking his brain. He has over 25 years of experience in software engineering, law, product management, AI research, and yoga and meditation training, and is now dubbed an artificial intelligence philosopher. At a time when everyday humans are finding ways to leverage ChatGPT and other AI products on a daily basis, today's guest is here to help us understand some of the ethical and philosophical implications of artificial intelligence and machine learning in our lives. While I'm personally still a bit hesitant to use artificial intelligence to do my everyday tasks, or for Hit of Happiness, I'm hoping this conversation with Chad will convince me otherwise. So with that, Chad Woodford, welcome to the Hit of Happiness podcast.

Chad Woodford:

Thank you, Brian. It's so good to be here.

Brian Dubow:

It's awesome to have you, Chad. And I'm very excited for this conversation. Before we dive in, can you just give our audience a bit of a: who are you? Where are you from? What do you do?

Chad Woodford:

Yeah, sure. So it's kind of a long answer, but I'll try to keep it short.

Brian Dubow:

Give us the long one, we're here for it.

Chad Woodford:

Right. So I'm originally from upstate New York, and I've lived all over: I've lived in Vermont, and Atlanta, Boulder, San Francisco, India. So now I'm here in LA. And yeah, I like to move around, I guess. I'm currently in grad school for philosophy, cosmology, and consciousness at the California Institute of Integral Studies in San Francisco. But I've also been, like you said, an AI engineer, technology lawyer, product manager. I was actually a filmmaker at one point; I went to film school. So I just, I like school, I like learning things. And yeah, so I'm focused on kind of bringing together a lot of my experiences, which includes AI, philosophy, and spirituality coming from the Eastern perspective too. And then, yeah, just the technology background as well.

Brian Dubow:

That's fascinating. You just listed like eight different topics that I want to double click on, and I don't know if we have time to double click on all of them. But I think that one thing that I connected with you over was how we both really started in these corporate backgrounds: I was a consultant, you were a lawyer. I'd love to hear your track from, you know, going to law school and becoming a lawyer to now studying philosophy, and how that all happened in the first place.

Chad Woodford:

This gets into, I mean, I think this is so relevant for your topic too, for happiness, because it gets into these questions about dreams and happiness and conditioning and all that. So my journey was, like, where to start, right? So I think, looking back, I made a lot of decisions in my life based on, I guess you can call it fear or practicality. And I was trying to find a way to fit my dreams and passions into a career that could make money. I grew up very working class, so I had this sort of conditioning that you have to play it safe and do things that are practical, career-wise. And so I became an engineer, because I was good at math and science, and it's safe, it's practical, it makes money. But I've always had this spark of wanting to have an impact, and to change the world in some way, or to help people in some way. And so the idea behind law school was to get into some kind of consumer rights work, or some kind of impact work, some kind of policy work, something like that. And also, I wanted to be a writer for most of my life; I had this dream of being a writer. And so for me, with the practical mindset, the compromise was being a lawyer, where you actually do a lot of writing, but then of course you make a lot of money. So, so yeah, that was the motivation behind that whole thing. And so I was a corporate lawyer, but I was doing some of that consumer work on the side, doing some pro bono, and trying to find a way to get into more of the policy work. And in that process, and I think maybe you have a similar experience, I was really stressed out. Being a lawyer is very stressful, in the corporate world especially; I was doing a lot of large transactions and working in startups in Silicon Valley, and it was just really draining. And so for me, I started to explore other things.
And I started doing yoga, and started really looking more into, like, inner questions, or bigger questions, or spiritual questions, that kind of thing.

Brian Dubow:

Sure. So these inner questions popped up while you were at the law firm, right? And was the next step straight to India to answer those questions as part of that work?

Chad Woodford:

Yes and no. I mean, the way I apparently do things is I sort of stick a toe in the water, but then I keep one foot on land, and then I keep kind of doing that back and forth until I, like, finally commit to something new. So it's a process for me. So I did a teacher training while I was still a lawyer, so I was teaching on the side, and I had my own law practice for a while. And that was the way I was satisfying the interest while keeping one foot on the practical side. But then I was fortunate enough to work at Twitter in the early days as a lawyer, and that gave me a financial windfall, which allowed me to take some time off. So then I started to write a novel about spiritual crisis and all these kind of big questions, and did that full time. And so that was kind of the thing: writing the novel was me kind of opening up a portal to step through, because it gave me the excuse to research a bunch of things I was interested in, and to have some experiential research, to do things that I wanted to do. But I guess I'm the kind of person who can't just go out and do the thing; I have to write a book to give me an excuse to do the research to do the thing. So that included going to Burning Man, and then eventually even plant medicine. I did ayahuasca initially because I had the idea to have my main character do ayahuasca. So all those experiences of going to Burning Man, going to Peru, and all that is what then kind of led to going really deep into the spiritual path.

Brian Dubow:

Wow. And I think it's really interesting the way you put that, because I think there are some people who can separate their professional lives and their spiritual paths, and they're able to say, you know, I work this nine-to-five or nine-to-seven, and then I can be spiritual the rest of my life; I can still go to Burning Man, I can still do whatever. There are other people who feel the need for their entire life to be their spiritual journey, and figure out how to make money on that spiritual journey. And it sounds like you constantly kind of just weaved back and forth and interwove the two, until slowly but surely you've gotten closer and closer to your life being the spiritual path. What are your thoughts on that?

Chad Woodford:

Yeah, I think that's right. I think that's right. I mean, it's interesting to frame it as a positive, because sometimes I think of it as me maybe not having the courage to go just fully in the other direction. But at the same time, they do say that you shouldn't make your passion your vocation, because then you'll come to hate it, or something like that. I don't know if I believe that, but that's a thing they say.

Brian Dubow:

Yeah, so I mean, you went on this spiritual journey. You did ayahuasca, you went to Burning Man, then eventually you went to India. What were your major takeaways from these types of experiences that, you know, everyone can get something out of, everyone can learn from?

Chad Woodford:

Yeah, well, one thing I have to say about that, actually, just to go back to your last question: for me, it's part of a longer theme. So like I was saying before, part of the motivation for law school was wanting to make a difference. And as I was pondering this question and starting to try to do things with law in that direction, I started to have this realization that this law professor, Lawrence Lessig, had, which is that you have to keep sort of moving the lever up, in a sense. So he started off doing copyright reform, because he was passionate about trying to help people have access to art, and, you know, fair use and all that. And then he realized that the real challenge with trying to change copyright policy was actually that politics is broken in DC. And so then he decided to pivot and address the lobbying, and just the ways that politics is broken in DC, because if you don't address that, then you can't really change the way laws are made and how they affect consumers. And so I was inspired by that, but then I took it to the next level, realizing that you actually can't really change things fundamentally unless you change the way people think, and you change the kind of shared worldview that we have. So that's ultimately how I got to philosophy. But yeah, so I think, for me, the most effective way to make a difference in the world wasn't law, at the end of the day. It was actually philosophy, in the sense of recognizing that everyone believes in this materialist worldview, and if we can just shift that, and change the way they think, then they'll be open to, you know, more mundane things like policy change and different kinds of efforts in that direction. So anyways, I just wanted to kind of tie that together.

Brian Dubow:

Yeah, no, thank you for bringing that in. Just to double click on the materialist worldview you're describing there: is it that politics is so centered around financial implications, and, I guess, the way companies will support campaigns that lead to things that might not be ethical, or might not be the best for the world, that's stopping the love and light of the world? Is that what you're getting at?

Chad Woodford:

Yeah, kind of. So when I say the materialist worldview, what that means is this idea that goes back to, like, Descartes, or to the scientific revolution, which is that everything is matter. And because of that, there's nothing higher; there's no sort of mystical realm or anything like that. There's just matter, and just this sort of deterministic, mechanistic reality, which is kind of meaningless and random, and everything we experience is just a fluke and all that. That's the materialist worldview. And I think it runs deep, because it's kind of depressing to believe. You know, whether you're an atheist or not, I think a lot of people just don't have any meaning to derive from that, or don't know where to find meaning in their lives because of that. And so, yeah, the ways that can play out are multivariate, but in terms of your question about policy, for example, the way that can play out is that because we don't consider nature, for example, to be anything but a bunch of raw resources for us to use to our advantage, we don't make policy that preserves it, because there's not a sacred element to the world; we don't approach it that way. Does that make sense?

Brian Dubow:

That does make sense. So it sounds like you found this passion. How did you start making an impact, and at what point did that come into play?

Chad Woodford:

Yeah, I mean, it's still in progress. You know, I think a large part of me being a yoga teacher is that my experience with the yoga that I studied in India was that it expanded my consciousness and it changed the way I think, and I wanted to share that. And I do share that, because I think it's so important to help people and give them practices that expand consciousness, that give them the direct experience of unity, which is what yoga is all about. And through that process, you know, you start to tap into your inherent bliss nature, which is a form of happiness. So that's one way I've been doing it: by teaching yoga and offering different things in that space. But then, in terms of shifting worldviews more broadly, I'm in school for philosophy, cosmology, and consciousness because I want to find other ways of doing that. And I think, actually, and we'll get into this when we get into the AI, I think the AI revolution that's happening is going to force us to reckon with our worldview, and could actually be a vector for us to shift the worldview and to kind of give humans a more important role in the world.

Brian Dubow:

I love that. And just to level set, you know, you kept on using the phrase "expand consciousness." Just to make sure everyone understands what it means to expand consciousness, and, you know, some people have watched that WeWork documentary where they talked about elevating the world's consciousness, but what do you really mean when you say, you know, you use yoga to expand consciousness, and you are on a journey to expand your consciousness?

Chad Woodford:

Yeah, so we can talk about what consciousness is first, and then I can answer that, because it's relevant to AI too; I think a lot of the AI conversation is happening around consciousness. So, from a yogic standpoint, consciousness is fundamental, which is to say that everything is consciousness. Consciousness is the fundamental substrate of reality. And consciousness, in the yogic worldview, is creating the world as a form of play, and a form of love, actually. It's creating the world with that intention; it's creating the world as a form of play. And we're all expressions of that, so we're all kind of small pieces of this larger consciousness. If you put it in a Western philosophical framework, it's idealism; that's the philosophy. So when you say "expand consciousness," what that means is, like, the first step, and this is the first step people often experience when they practice yoga, is you disidentify with the mind, because the mind is just one small part of you. I think most people in the world today are primarily identified with their mind. And so the first step is that you start to realize that you, yourself, are so much bigger and broader than the mind; you're actually this consciousness. So we're moving out of identifying with the mind, expanding into this identification with consciousness. And then the yoga journey is starting to have experiences of, yeah, it's hard to talk about with words. But one part of it is you start to let go of conditioning. So conditioning is the stuff you get from society, and from your parents, and from, you know, peers and colleagues, that is maybe not totally true, or not helpful to you or to society. These are just kind of wrong notions, you know, small ideas, that kind of thing. So you're getting rid of the conditioning.
And through that process, you're starting to become more and more free, in a sense, more liberated. And as you go through that process, you're starting to expand your consciousness to include more and more of your bigger self, and bringing in more and more of this unity experience. I don't know if that makes sense, but that's the basic process.

Brian Dubow:

Got it. And for those who are new to this concept, what would you say is the first step they should take? Is it to go to a yoga class?

Chad Woodford:

Yeah, I mean, I think that could be helpful. You know, it's interesting, because in the West we primarily identify yoga as stretching, and there's so much more to it than that. But at the same time, because we're so disembodied, in the sense that we're so not in our bodies, we're so kind of living in our heads all the time, I think part of the reason that that has been so helpful, and is such a good access point, is that we should start with the body. So if you go to a yoga class, you know, it's called asana; that's the stretching part of the yoga practice. If you go to an asana class, that's going to help you to get into your body. And that practice alone is so good at putting you into a meditative state through the body, and you can have these experiences in that practice of unity consciousness, of expanding consciousness, that kind of thing. And then once you've done that for a little while, then yes, you can start to explore other parts of the yoga practice, or other modalities beyond yoga. The parts of yoga that really helped me especially were pranayama, so working with the breath, and that can be breathwork also; and these things called kriyas, which are the most effective and powerful for releasing conditioning and expanding your consciousness, that kind of thing. So kriyas are primarily associated in the West with Kundalini Yoga, but they're actually a much broader tradition, and a much older tradition than that; the kinds of kriyas that I studied are from another tradition in India. But yeah, so there's kriya, there's meditation, there's a lot of mantra you can do that really helps with this. There are sacred rituals. There are all kinds of different aspects to the practice.

Brian Dubow:

Got it, thank you. So that makes sense. And, you know, I'm glad we kind of level set on consciousness and what it is for humans, because one direction I want to also take this is the consciousness of AI and what that looks like. So why don't we kind of now shift forward and first define what AI is for all our listeners, and then we'll kind of dive into the meat of this.

Chad Woodford:

Yeah. So this is a good question, because, in a way, it's hard to define, actually. So I'll come at it from a couple different directions. First of all, I think at the most basic level, it's just making computers think more like people, or behave more like people. And that can be anything from, I don't know, learning how to solve problems to identifying patterns; these are the kinds of things it can be. But what's interesting about AI is that we've been throwing this phrase around for decades, and it seems to be a moving target, in a way. So there's this great quote from Pamela McCorduck, who says that AI suffers the perennial fate of losing claim to its acquisitions, which eventually and inevitably get pulled inside the frontier, a repeating pattern known as the AI effect, or the odd paradox: AI brings in a new technology, people become accustomed to this technology, it stops being AI at that point, and a newer technology emerges. So what AI is, is always just across the horizon. And so that's one way to think about it. But to bring it back down to earth here, AI is basically a collection of different technologies. Right now, the hot one is machine learning and deep learning, and that is allowing computers to be very good at pattern recognition, basically. It's based on this technology called neural networks, and so it's a technique where they try to model a theory about how the brain works, and how the mind arises from the brain. So it's this actually very materialist theory that the mind is an epiphenomenon of brain processes, and that if we can just recreate a kind of virtual version of the neurons in the brain, then similar intelligence will naturally arise. And so that's where we are with machine learning.
Back in the day, like in the 20th century, there was another kind of AI called expert systems, which was more like, they thought that if they could just plug in millions of true statements, that would help create intelligence. Like, you know, men are mortal, car tires are made of rubber, all these things; you just put all these things into a giant database, and they thought that would be AI. But that didn't work out so well. So anyways, yeah, that's the long-winded answer on AI. The last interesting thing on that is that AI is really good at, like, playing chess and playing Go; famously, it beat the world champion in chess and the world champion in Go in the past couple of decades. So it's very good at these pure-reason kinds of problems, and constrained problems that are games, that kind of thing. Language is also very accessible to it. But it's not very good at, like, walking, you know, seeing the world in certain ways and interacting with the world in certain ways.
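The "expert systems" idea described above, hand-entering true statements into a database and querying them with simple rules, can be sketched in a few lines of Python. Everything here (the particular facts, the `infer_is_a` helper) is illustrative and not taken from any real system; it just shows why the approach is brittle, since the system only "knows" what was typed in:

```python
# A toy sketch of the expert-systems idea: hand-entered facts plus a
# simple inference rule, with no learning involved.
facts = {
    ("man", "is_a", "mortal"),
    ("socrates", "is_a", "man"),
    ("car_tire", "made_of", "rubber"),
}

def infer_is_a(subject, category):
    """True if subject is_a category directly, or via one 'is_a' hop."""
    if (subject, "is_a", category) in facts:
        return True
    # one-step transitive inference: subject is_a X, X is_a category
    for (s, rel, obj) in facts:
        if s == subject and rel == "is_a" and (obj, "is_a", category) in facts:
            return True
    return False

print(infer_is_a("socrates", "mortal"))  # True: socrates -> man -> mortal
```

Anything outside the hand-entered facts, such as "is a car tire soft?", simply fails, which is the common-sense gap that sank these systems.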

Brian Dubow:

Yeah, it's interesting. It sounds to me like AI is great at logical activities, but struggles with a little bit more outside-of-the-box activities. Is that right?

Chad Woodford:

Currently, currently. Yeah, that's right.

Brian Dubow:

I'm sure at some point AI will catch up and be more creative than all of us, but for now we have a little bit of a head start. So let's kind of talk about that. Today, people are using ChatGPT to send emails; they're using it to come up with marketing campaigns. And I think that's most people's experience of AI in their lives. What is AI beyond that? Like, what else is going on with it? And also, what are the reasons that you think we have to be excited about AI today?

Chad Woodford:

Yeah, I mean, so yeah, ChatGPT is the popular thing right now, and that's part of the generative AI explosion that happened last fall. So you've got generated text, and you've also got generated images; that can be, like, DALL-E or Midjourney, those kinds of things, Stable Diffusion. And basically, what's happening there is it's been trained on a very large set of data. In the case of text, millions and millions, billions probably, of documents and tweets and internet things. It's been trained on that to learn how people talk, and to learn a little bit about some topics. And then it's very good at sort of regurgitating what it has learned, basically. And so it's doing that through this complicated process that includes a kind of statistical phenomenon; it's a very detailed kind of thing, but basically it's generating things that are a simulation of intelligence. So it doesn't actually understand what it's saying, for example; it doesn't understand conceptually, or semantically, what a sentence means. It just knows that that is a very plausibly realistic-sounding thing that someone might say. So that's kind of the state of the art; some people call it an elaborate autocomplete, or something like that. And so that's the current state of AI. What AI researchers want to create is true intelligence, what they call artificial general intelligence, which is also called superintelligence. And that's the idea of an artificial intelligence that could basically reason like a person, speak, you know, confidently about any topic, solve problems in any domain. That's kind of the longer-term goal.
So yeah, I think we can talk more about that superintelligence thing in a minute, but the reason I'm excited about AI is I think it's going to automate a lot of monotonous work that people do and free us up to do more creative things, or it's going to be more of a collaborator with us. So we can kind of use it alongside us to create things, to solve problems, to do kind of mundane work, that kind of thing, so that we can then be freed up to focus on what matters: community, family, being of service, creativity of different kinds. So I like the fact that it's going to be like a collaborator. I think in the short term it's going to kind of shift what people do; I think, you know, a lot of white-collar workers are going to lose their jobs. We can talk about that too. And I think another positive, that might be seen as a negative, is the way that it's going to disrupt technology. So this is a big unknown, but I think in general there's this idea in technology policy of Schumpeterian destruction, or creative destruction, which is this idea that the more that technology, or technology companies, fail, the better long term, because then new and better things arise. It's kind of the same idea as Darwinian evolution, or just the way that life works, you know; things die so that new things can come in. And so AI, I think, is going to disrupt, for example, Google search. I think Google search is fascinating, because Google's in a tough position where their bread and butter is search and search advertising, but they also see the writing on the wall, in the sense that they're going to have to start replacing traditional search results with kind of a GPT or Bard equivalent, where you just type in your question, or what you want to know, and it gives you the answer without showing you so much of the web results.
And so that's going to affect Google; it's going to affect their business model; it's going to affect the web, and how the web is designed and created, and content. And then, of course, on that topic, it's going to change, like, Wikipedia, potentially, and the way that content is created, because a lot of the content on the web will now be generated by AI. And so all these things are creating a lot of chaos on the web right now, but I think ultimately that might lead to something even more interesting.
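The "elaborate autocomplete" framing above can be illustrated with a toy bigram model: count which word followed which in some training text, then always emit the most common continuation. A real large language model is vastly more sophisticated (neural networks, attention, billions of parameters, probabilistic sampling), but this sketch, using a made-up corpus, shows the core move of producing a plausible continuation with no semantic understanding at all:

```python
# A toy bigram "autocomplete": pick the next word purely from counts of
# what followed it in the training text. Corpus and names are illustrative.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat . the cat ate the fish .".split()

# Count which word follows which in the corpus.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word(word):
    """Return the continuation most often seen after `word` in training."""
    return following[word].most_common(1)[0][0]

print(next_word("the"))  # "cat": it followed "the" more often than "mat" or "fish"
```

The model never "knows" what a cat is; it only knows the statistics of its training text, which is the same reason the output sounds plausible whether or not it happens to be true.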

Brian Dubow:

More interesting because why? Can you elaborate on that?

Chad Woodford:

I mean, I don't know exactly, but I'll admit there's a lot of challenges, because then we start to get into, like, how do we know what's true? How do we know what's real? There's a lot of misinformation risk with AI generating content. And I don't know what's going to happen on the other side of this intense disruption that's happening on the web and with search, but I have to think that something interesting will come out of it. Now, again, it's going to create a lot of disruption, and that disruption includes, again, jobs, I think. And so we can talk about that, because I think that's a whole other thing that we need to grapple with.

Unknown:

I mean, for me, I think that's the main reason to be scared of AI. It's the fact that all these white collar jobs will be eliminated. I would think within a year or two based off of what I'm seeing, yeah, maybe it's longer, but for trust reasons, because a lot of humans don't necessarily trust AI yet. I think with anything over time, it becomes the way like, what is that going to look like? Are we headed towards like a Walley scenario where people just have universal socialism support, and they're not working at all, like, what is the future?

Chad Woodford:

Yeah, that's a good question. And I want to be clear: I'm not an AI evangelist. It's very complicated, and I'm personally somewhat conflicted about it. I'm not some naive person who just thinks AI is going to be great and there's nothing to worry about. I think there's a lot to worry about. Those worries are not so much that some superintelligence is going to kill humanity; it's more these concerns about jobs and the economy, and also misinformation. So to answer your question about jobs and all that: the WALL-E scenario is dystopian, obviously, and unappealing, but it actually seems better than some of the alternatives, because it assumes we'll actually figure out how to redistribute wealth and support people, and I'm not even sure we can do that. I don't think anybody is grappling enough with the amount of disruption that's coming. It's like a tsunami. Some people are aware of it, but not enough people are actively working to address it. So I hope, at a basic level, that we can achieve WALL-E, because that would at least mean we're taking care of people. But then this gets into the conversation around universal basic income, UBI. I don't know a ton about that, but it seems to me that the conversation around it entirely revolves around your beliefs about human nature. The UBI people have a very optimistic view of human nature, which is to say that if you give people money, if you support them, they won't just be couch potatoes watching TV all day; they'll actually go out and be productive and create things just for the fun of it, for the pure joy of it. And that will result in almost a new renaissance, in a way.
So I like that idea; I'm attracted to it. But it's hard to tell based on how people are right now. If you look around, a lot of people do watch TV, and they do numb out in different ways; they aren't just doing art projects at home. I think part of that is because the world is too overwhelming. They don't feel supported in certain ways, and so they do those things because it helps them distract from those feelings. And this gets back to another angle I could have taken from the previous conversation, which is that a big challenge we face today is that the predominant feeling tone of society right now is fear, in different ways. A big part of my mission is to try to address that. I think if we can get people to either learn how to move into fear, or to let go of fear, that could potentially change everything. So I think fear is part of why people are not inherently creative when they have free time. There's so much going on there emotionally.

Unknown:

Now, that's interesting, and I think that's a worthy tangent. You said your core mission is related to fear. So what is it about fear, specifically? And how did that lead you to this realm of AI? Is it because you want to help mitigate people's fears as we enter this new age?

Chad Woodford:

Yeah, kind of. Part of my current mission is to comfort and inform people about AI, because I think there's a lot of fear around AI, and a lot of fear mongering happening, not always intentionally. I don't know if you've heard of this guy, and I want to get his name right: Eliezer Yudkowsky. He's running around sounding alarm bells, saying that if we don't do something immediate and drastic, AI is going to kill us, and he's been saying this for decades. And people are listening; he had a TED talk, I think last month, where he was saying this. And then there are these transhumanists, who think the solution to everything is for all of us to upload our consciousness into machines and stop worrying about being people in bodies. So my mission is to comfort people and try to counteract a lot of that conversation, especially around superintelligence, but also just in general, with jobs and all that. Because I think the way to change the world, in a way, is to help people understand that there's nothing to be afraid of, ultimately. The more you can cultivate that feeling, the happier and more powerful and successful you're going to be in your life. And that goes back to yoga, too. The Bhagavad Gita, one of the most famous books in yoga; the ultimate message of that book is that no experience is too big for your soul to handle. I just think that if people could have that feeling deep inside, it would really change the world. And by the way, a little side note: Oppenheimer, of the movie Oppenheimer, was very much in love with the Bhagavad Gita.

Unknown:

Do you think that led to the way he went about his work? For those who are listening, you probably know about the movie that just came out, Oppenheimer. He's the gentleman who led the project to create the atomic bomb, which was eventually dropped on Japan. How do you think that impacted his work? Because I think it's fascinating that he was a big fan of this yogic philosophy that, in theory, is all about, and I don't want to put words in your mouth, you're the yoga expert, but hopefully about love, about universal consciousness, everything happening for us, not to us. Yet he created probably the deadliest weapon of all time. So what do you think he actually got out of this book?

Chad Woodford:

Well, yeah, that's interesting. I mentioned that, but I don't really know his whole story. The movie is based on a biography that's supposed to be quite good; I don't know the timeline there. But I do know that he had a lot of regrets about his role in creating the atomic bomb. And they say the Bhagavad Gita can be understood on seven levels, so I think there's a progression. For those who don't know, the Bhagavad Gita is basically the story of this warrior, Arjuna. The book starts out with him facing a battle where he's supposed to fight his cousins, and he just doesn't want to; he throws down his weapons and says, I can't, this is not right, it's not right to commit violence against people that I know and love. And Krishna comes to counsel him and teach him yoga, to teach him, basically, that he has to live the life that he's been given and perform the things that he's going to perform from a state of love and compassion, a state of yoga. It takes a lot to unpack all that, because he does pick up his weapons and fight in the battle, and it's a battle that's happening on a different level.
So anyway, I think Oppenheimer maybe took some lessons from that and thought, well, I'm an expert in this field, so I should serve that role, and the consequences can be what they are. But then I think later in life he understood the book at a deeper level and had some regrets about his role in it.

Unknown:

Right, right. So I guess, on that note, here we are, 70 years later, give or take, and we have this new technology, artificial intelligence, that is probably going to change the world in ways we can't even imagine. Are we headed towards world peace or World War Three right now?

Chad Woodford:

What a loaded question. This brings us into the question of ethics, because I do agree that not enough is being done to shape AI in the right ways. It all depends on how we create it; the AI is going to reflect the mindset of the people who are making it. Currently, it's mostly being created by engineers and product managers who have a certain kind of mindset, and it's not being informed enough, I feel, by, let's say, right-brain people. If you've only got left-brain people working on this stuff, the technology is going to be left-brained. And there's nothing wrong with being left-brained, but it's a little one-sided. Imagine if the whole society was just engineers; we wouldn't necessarily want that. They have an important role to play, but we want other people involved in the creation of AI, and that includes ethicists, people who are trained in ethics, philosophy, that kind of thing. AI is going to be a reflection of our humanity, and we want it to reflect the highest and best version of humanity, and not the past, because part of the problem with machine learning, for example, is that it's trained on data that essentially reflects the past. And I think we can all agree that maybe we weren't our best selves in the past. So I think it's important to train it from the right mindset, to train it on the right things, to help it understand human values.
And to incorporate those things. A good example happening today that I'm inspired by: there's a company called Anthropic, founded by a bunch of people from OpenAI, the ChatGPT company, who left and started a different AI company because they wanted to make the whole mission of their company ethical, humanitarian AI, in a sense. They have this alternative AI chatbot called Claude, and it's been trained with what they're calling a constitution. So this thing has a constitution, which is to say it has basic guiding principles built into it, including human rights and different concepts about privacy, all these things that we want the AI to value. So I like that project. That's one example of how we can shape and guide AI in a different way. One last thing on that: there are a couple of other projects where AI is being trained on wisdom, or spiritual texts, that kind of thing. I'm curious about that too. I haven't looked into it, but it might be interesting. I don't know.
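Editor's note: the "constitution" idea Chad describes is a list of written principles that the system critiques and revises its own drafts against. Here is a deliberately toy sketch of that critique-and-revise loop, assuming nothing about Anthropic's actual implementation (the principle texts, the keyword matching, and the function names are all illustrative; the real system uses a language model for both the critique and the revision steps):

```python
# Toy sketch of a constitution-style critique-and-revise loop.
# The principles and the naive keyword check below are illustrative
# stand-ins, not Anthropic's actual constitution or code.

CONSTITUTION = [
    ("respect privacy", ["home address", "social security number"]),
    ("avoid personal attacks", ["idiot", "worthless"]),
]

def critique(draft):
    """Return the list of principles a draft appears to violate."""
    violations = []
    for principle, red_flags in CONSTITUTION:
        if any(flag in draft.lower() for flag in red_flags):
            violations.append(principle)
    return violations

def revise(draft):
    """Crude revision: redact flagged phrases, citing the principle."""
    text = draft.lower()
    for principle, red_flags in CONSTITUTION:
        for flag in red_flags:
            text = text.replace(flag, "[removed: " + principle + "]")
    return text

draft = "Sure, his home address is 12 Elm St."
print(critique(draft))  # -> ['respect privacy']
print(revise(draft))
```

The design point the sketch tries to capture is that the principles are written down as data, separate from the model, so they can be read, debated, and amended.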

Unknown:

Yeah, it's interesting, because just like different people are more spiritual, more left-brained, or more right-brained, it'll be interesting to see if there becomes a universal artificial intelligence that everyone uses, or if people just gravitate towards the one that aligns with their current mindset. It's almost like we'll have the American AI, the North Korean AI, the Switzerland AI, with totally different philosophies. And that's kind of scary, because if everyone's using what they already believe, it feels like it'll just continue to separate us, because it'll reinforce the way people already feel.

Chad Woodford:

Yeah. I think we have two big challenges with doing what I was just talking about. One is the pressures we feel internationally. Part of the reason that, let's say, the US government is not regulating AI companies is, I think, that they're concerned that doing so would slow us down relative to China or Russia, or even India. So that's one reason AI companies are just left to self-regulate. The other thing is capitalism. These companies within the US, Microsoft versus Google versus, I guess, Facebook, or Meta, or whatever, are all in an arms race to see who can have, like you were saying, the most popular AI. And because of that, everyone's rushing, and I think they're not taking enough precautions.

Unknown:

Right, right. So I have a few thoughts on that. First of all, it's better to go slower, in my opinion, than to put something into the world that could have serious negative implications.
And the second thing you brought up, about capitalism, I think that's especially relevant to America. Is capitalism even sustainable as all these technologies take away all these white collar jobs? I would like to think that as many jobs will be created as will be taken away, but that might not be the case. Is capitalism even possible in the future?

Chad Woodford:

Yeah, that's a great question. And it's funny, because not too long ago, asking that question could make you sound like some kind of Pollyanna communist or something; people would criticize you for being a lefty. But part of what AI is doing is making all these questions crucial and mainstreaming them. Ezra Klein, the podcaster and New York Times columnist, is asking this question on his podcast: is capitalism viable? Is that part of the problem? And I don't know the answer. It seems like everyone I know is talking about how we're in late-stage capitalism, and it feels like a thing that could at least use a reboot, or an upgrade, or something. Not to get too much into the history and the philosophy, but as some people probably know, capitalism came out of, in a way, the same thinking that created materialism. A lot of the British philosophers, John Locke and Thomas Hobbes and all these people, were part of the materialist philosophy movement, and capitalism came out of that same kind of thinking. The reason I'm saying that is because it feels like we're in a time when all these ideas that arose in the 17th and 18th centuries are maybe seeing their expiration date. So I don't know. Obviously, communism is not the answer, right? It's funny, I was just reading this thing for school about how the theme of the 20th century was capitalism versus communism, and it feels like the 21st century is us transcending that duality and finding a third way of some kind. Who knows what it is, though?

Unknown:

Right, right. Only time will tell. So with that, let's dive into superintelligence. We've talked about intelligence; what is superintelligence?

Chad Woodford:

Man, this is one of my favorite topics. I've been working forever, it feels like, on a podcast and YouTube episode about superintelligence, so I love talking about this. It's fascinating, because it requires you to take a step back and ask: what is intelligence? When somebody like Ray Kurzweil says AI is going to become this superintelligence, what does it mean for something to be far more intelligent than us? Does it mean it's just better at math and science? Faster, more efficient at pure reason and problem solving? Or, if intelligence evolves to a certain level, does it become wisdom instead? What is intelligence, exactly? A lot of the fears about superintelligence are that it'll be so smart it'll hack into something, or figure out some biological phenomenon we don't understand, create some kind of super virus, and then humanity will be dead. Or the famous fear people talk about: it'll realize that, to maximize its resources and create more of itself, it needs to mine all the precious metals and minerals, and then, kind of like The Matrix, it realizes it needs people as resources. But I don't know that that's the case. If it were very smart, it might not be like that. It feels like we've watched too many dystopian movies; we see this dark side of humanity, and we're imagining that it would be like that, these violent, mindless things. But I don't know; I think it's hard to tell what a superintelligence would be.
And I want to say, there's a lot of fear that it's going to happen in the next year, two years, ten years. I think Ray Kurzweil said by the end of the 2020s, or something. But I'm not convinced, because we don't have the technology, I feel. This goes back to how I was defining AI: currently, it's machine learning, and machine learning is based on inductive reasoning, which is only one of three ways that people think. Not to get too far in this direction, but basically, there are three kinds of reasoning. There's deductive reasoning, where you have a general premise and you apply it to specific circumstances. For example: all men are mortal; Socrates is a man; therefore, Socrates is mortal. That's deductive reasoning. Inductive reasoning is noticing patterns and drawing general conclusions from those patterns. That's machine learning. If you show a computer a million photos of dogs, it's going to eventually learn that this blobby thing with hair and two eyes is a dog. But that's just one kind of reasoning. The third kind, which AI has not yet addressed, is abductive reasoning. That's where we have flashes of insight, where we have inspiration. This is the kind of thinking that happens with inventors and detectives, people who have been spending a lot of time with a problem, and then all of a sudden, one day, the solution emerges. And there's no way that we have come across so far to automate that process. So, and this is maybe a controversial opinion, I don't know, but I feel like until we've addressed abductive reasoning, there's not going to be superintelligence, because superintelligence requires an understanding of semantics, an understanding of cause and effect; it requires a map of the world and how the world works.
And if you look closely enough at machine learning programs like ChatGPT, they're very brittle. They don't actually understand what they're saying; they don't actually understand how the world works. So they're very easy to throw off. It's like a magic trick: they seem highly intelligent, but as soon as you present them with some problem or challenge outside the realm of what they've been trained on, they fall apart. So anyway, I think we might get to superintelligence, but I don't think we have the technology yet to make it. And I'm not convinced that when we do make it, it will be inherently evil or something.
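Editor's note: the two reasoning styles Chad says today's AI does and doesn't cover can be sketched in a few lines of code. This is a toy illustration under obvious simplifications (a dictionary of premises stands in for general rules; majority-label voting stands in for machine learning), not anything from the episode:

```python
# Toy contrast of deductive vs. inductive reasoning.
# There is deliberately no abductive() function: as discussed above,
# no known technique automates that kind of insight.

def deductive(premises, subject):
    """Deductive: apply a general rule to a specific case.
    premises maps a category to a property, e.g. 'man' -> 'mortal'."""
    return premises.get(subject["is_a"])

def inductive(examples):
    """Inductive: generalize from observed (features, label) examples.
    Majority vote over labels is a crude stand-in for machine learning."""
    labels = [label for _, label in examples]
    return max(set(labels), key=labels.count)

premises = {"man": "mortal"}
socrates = {"name": "Socrates", "is_a": "man"}
print(deductive(premises, socrates))  # -> mortal

sightings = [("furry, two eyes", "dog"),
             ("furry, two eyes", "dog"),
             ("scaly, no hair", "lizard")]
print(inductive(sightings))  # -> dog
```

The asymmetry is the point: the deductive rule is guaranteed correct if the premises are, while the inductive guess is only as good as the pattern in the examples, which is the brittleness Chad describes.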

Unknown:

Right, right. And that's abductive reasoning. How does that compare to the human version of intuition, or the subconscious?

Chad Woodford:

Yeah, well, that's an interesting question, actually. I think it is related in some sense. Once you get into the conversation about intuition and the subconscious, that gets into a whole other series of questions about where thinking happens in a person. Part of this is that, in my opinion, the mind is not strictly located in the brain; the mind is actually spread throughout the whole body, as a field, and the brain is the connection between the mind and the consciousness we were talking about earlier. That's a very idealistic philosophy. But I think intuition, and the unconscious that Carl Jung might have talked about, are coming from places in that greater mind, that greater awareness; they're coming from somewhere that is not strictly neurons firing in the brain, is how I feel. So that's a whole other conversation, about where the mind is located and how it arises. Because in conventional AI theory, and in conventional neuroscience too, the theory is that the mind arises as an epiphenomenon of the brain. But this is actually what they call the hard problem of consciousness: how is consciousness created from raw matter? How does it arise, or how could it arise, from matter? It's called the hard problem because we don't even begin to know the answer to that question. We can create a computer that simulates what they call access consciousness: I see a symbol, I see a letter, I process it, and there's an output. That we can simulate. But the experience of tasting a strawberry, or watching a sunset, or listening to a song that really moves you? We have no idea what's going on there.
I mean, the way they say it in philosophy is: there is the experience of being Brian. But how can we possibly recreate that in a machine?

Unknown:

Right. And that makes me think of Dr. Keltner's work on awe, and how we have these feelings of awe, where it's almost like we have a sixth sense. You can't really bring that back to logic or reason; it's just something that happens. How can AI have that experience if it's only being programmed through a state of practical logic, or things that are explainable?

Chad Woodford:

Yes, exactly, exactly. I think this goes back to how AI is forcing us to face all these questions that we've been putting off for a long time. Going back to Descartes again: he basically decided, oh, there's something going on with the mind, or the soul, or the consciousness; who knows what it is, let's put it aside and just focus on matter. And we've never really grappled with that division. I think we're being forced to now. It might get to the point where we realize, oh, the reason we can't create superintelligence or conscious AI is because consciousness is what we are, and the brain is just helping us tap into the thing that we are. And if that's the case, then we may never be able to create an AI that's like us; or if we do, it has to be with a different technique. I'm actually very interested in questions of quantum computing and AI, because Roger Penrose, who's famous for being one of Stephen Hawking's colleagues, had a theory with another guy, a neuroscientist, that the mind and consciousness are actually created somewhere in quantum mechanics; if you look deeply enough inside these proteins in the brain, maybe something is happening at the quantum level inside those proteins that's creating it. So maybe quantum computing could get us there. I don't know.

Unknown:

Yeah, that's fascinating. It sounds to me like it will be very challenging to get to the point where we can confidently say that AI is conscious, or that a superintelligence has a level of consciousness. Do you see a world where we're half human, half robot, and the AI is using our consciousness as humans? We have these chips in our arms or something; you've seen this in movies. But as we're talking about this now, in my head, it doesn't seem that far fetched, because the AI kind of needs our consciousness.

Chad Woodford:

Yeah, that's a great question. You're getting into another fascinating topic, which is transhumanism. For those who aren't aware, there's a whole movement, primarily based out of the Bay Area and San Francisco, of transhumanists who believe that the future is us merging with machines, augmenting ourselves in different ways with machines, or becoming machines in some way, and that that's the solution to saving the human race, the solution to longevity and immortality and all this stuff. And that is so fascinating, because again, it's coming from the materialist worldview, in a sense, but also, transhumanism is sneaking spirituality back in through the back door. They don't want to talk about spirituality; they don't want to acknowledge these things. But then they're saying things like, well, the mind is just a pattern, a large pattern that we can somehow upload into a machine, and it can live on beyond that. And if you just change some of the words, they're kind of talking about the soul; they're bringing the soul back in through technoscience. So that is fascinating to me. But people who aren't aware should know that, to a large extent, Silicon Valley is ruled and run by transhumanists. Elon Musk, for example, is a transhumanist. And I think this is potentially a problem, because a lot of the decisions being made, in terms of company policies and what to focus on, are based on this idea that we're going to start to merge with machines. And I just want to say, I personally feel like one of the main themes in this conversation about AI and superintelligence, in this idea that we can just relinquish our responsibility for solving our big problems to AI, is that it diminishes the role of the human.
And I think we need to somehow rediscover a belief in the potential of people. People can be brilliant and can solve all these problems. I don't think we need to hand over our agency, our responsibility, and our potential to machines.

Unknown:

It's interesting, I had a conversation about that yesterday, on a very simple level. For example, at Hit of Happiness we have weekly blog posts, and sure, I could have artificial intelligence write my weekly blog posts, get to know my tone by reading past posts, and save me a few hours a week. But A, that doesn't mean I can't do it, and B, I actually get a lot out of that process. I enjoy the journey; I enjoy having to synthesize my thoughts, and it forces me to observe my reality in a certain way. It's almost like, sure, AI can do all these things, but by letting AI take over all these processes, is it really enhancing the human experience? And to that note, is it really enhancing our happiness?

Chad Woodford:

Yeah, yeah, it's a good question. I think there might be a middle way there. For example, when I create podcast episodes or YouTube content, I will use AI to suggest, say, ten titles that might work for the episode, and then I'll look at those, and maybe I'll use one, or one of those will be inspiration for the ultimate title. I don't have any problem doing that. Yes, it makes you feel a little less creative, maybe, but it also allows you to take your creativity in a different direction; you can focus your creativity on other aspects of what you're making and not spend too much time on the title. There are certain places where AI can have a role, I think.

Unknown:

Right, it allows you to prioritize where you want to be creative. At the end of the day, we are all limited on time, so choose how to use your time wisely. I think so, I think so. So, to kind of land this plane: we've talked about a lot of the pros of AI, and a few of the cons, and a few things that are terrifying, and I don't want our listeners to leave thinking the world is about to end. So, to sum it up, what would you say is the main reason for optimism, and the main action people can take to future-proof themselves in this AI world?

Chad Woodford:

A lot of it is still unknown. But I think the reason for optimism, and there's a lot to say here, is that at a basic level, if you look back over history, there seems to be an arc to human evolution. I'm inspired a little bit by Martin Luther King, who was actually quoting somebody else: the arc of the universe bends towards ultimate progress for humanity. This, of course, brings in a little bit of idealism and yogic philosophy, in the sense that there's so much more going on than just the things that people are doing. I think that nature has an intelligence, and that nature has a project it's working on. It seems like nature is suffering, because we're in this climate change era, but I think there's a sort of death and rebirth process happening with humanity right now. Sorry, this is a long answer, but basically, there does seem to be something deeper going on that we maybe don't understand. So in terms of humanity surviving, I think we will, though it might require a lot; this time we're going through is challenging for most people, in different ways. And the other reason to be optimistic, beyond that very deep philosophical answer, is that AI, like I was saying earlier, is going to help us solve a lot of problems in collaboration with us; it's going to free us up from certain things. Then the only question is: what do we do about people who are displaced by AI and haven't properly prepared for that displacement? So that gets into how you can prepare for AI more specifically. I don't know exactly, but I do think the things you can do include familiarizing yourself with AI and how to use it well. Not everyone needs to do that.
But if you have the inclination, if you have the interest, learn how to use these tools and try to stay abreast of what's happening. And to the extent you can, if you know somebody who's working in AI, or you work at a company that's creating AI, I would like to see more people try to bring right-brain thinking into the development of AI. That is to say, and it's hard because people need to make a living, but I would like to see more people going into the humanities and not so many going into STEM: science, technology, engineering, and math. We need both, but I think we're in a time when we need more people coming from the humanities side. So yeah, if you're worried about it, learn how to use it, and also work on yourself. Learn how to be happy from the inside, which is a lot of what you talk about: cultivate happiness from the inside and stop trying to find it through some external thing.

Unknown:

That's beautiful, dude. And I really do like the idea you talked about earlier in the podcast, that maybe we are on the precipice of a renaissance, where, because AI is going to take over some of the STEM type of work in this world, it really does give us the space to write and read and draw, to really tap into the creators within all of us. Which is an exciting future, because I think that's, in many ways, a key to feeling alive and feeling happy.

Chad Woodford:

Yeah, totally. I mean, AI seems creative, but it can only create things based on what people have already created. So it gives us an opportunity to create new things.

Brian Dubow:

Yeah, I love that. I love that. So for anyone listening, when you put down this podcast, why don't you go draw? Why don't you go use the other side of your brain for a little while? Alright, so this was awesome. Thank you so much, Chad. I think I learned a ton about AI, and just learning about it helps to get rid of the fear. If you don't know anything, it's a scary concept, because you know it's changing our lives dramatically but you don't really know how, and I think this conversation debunked a lot of that for me. So I feel educated, and I hope the listeners do as well. I guess one last question before we wrap it up. One thing that struck me just speaking with you, both in the past and today, is how much knowledge you've amassed and all these journeys you've taken. You're a lawyer, an engineer, a yogi; you have this love for learning and this immense curiosity. Where did that come from? And who are your role models, the most inspirational people you've met along that journey?

Chad Woodford:

Yeah. So you're asking where my curiosity came from?

Brian Dubow:

Yeah, I'd love to know. And to give some context on where that question comes from: a lot of us are looking for our thing, the thing that sparks us, the thing that makes us feel alive. I know you're not a millennial, but the modern millennial jumps from job to job to job. You've gone down all these routes, trying to find your way and find the thing that makes you happy and makes you feel alive. But it seems to me like you've been following your curiosity from before it was cool to follow your curiosity, because the older generations are used to staying at a job for 40 years and then retiring, that kind of cookie-cutter American dream. I think it's cool how you broke off that path at a much earlier point in your life. Were there people who helped you with that? That's kind of where my head is.

Chad Woodford:

Yeah, I guess I was kind of an early prototype of a millennial. I've always been curious. But even before I knew about Joseph Campbell, I was kind of following his advice. I'm sure people have heard of Joseph Campbell, the great mythologist who identified the hero's journey as a recurring theme in world mythology. He said: just follow your bliss. A lot of people say that now; it's become a catchphrase, and I think it's been misinterpreted, too. But Campbell was actually a yogi; he studied yogic philosophy early in his life. And there's this idea in yoga that the truth of yourself is that you are consciousness, and that the nature of consciousness is bliss. So what he meant was that if you follow your bliss, if you just do whatever is interesting to you, whatever makes you feel blissful or happy, then you'll expand your consciousness. And the more you're tapped into unity consciousness, from a yogic standpoint, the more things will flow in your life, the more support you'll get, the more fulfillment you'll get. So following your bliss is not this thing where you're checked out and unattached from the world; you're deeply immersed in the world in a way that's passionate, wanting to be of service, and connected to people. As for my role models, I was thinking about this, and to some extent they're all dead white guys. But: Joseph Campbell. And then Carl Jung, just because he was way ahead of his time. He was thinking about and doing things so far outside the mainstream that it required an immense act of courage.
I don't know how much people know about Carl Jung, but he was into astrology and synchronicity and all this stuff in the early part of the 20th century, so he was really pushing the envelope. He did worry a little about whether it would be accepted, and he rolled this stuff out slowly, but at the same time he was a real trailblazer. In the same sense, Ram Dass, who I'm sure your listeners know of, is another example: a guy who was totally caught in the square, conditioned Western world, a Harvard professor, and then he just went to India and became Ram Dass. Then Rick Tarnas, one of my professors, is another role model. He pioneered, with Stan Grof, archetypal astrology and archetypal cosmology, and he's done all these things in a way that's very accessible to people, I think. And lastly I'll say Steve Jobs, because as difficult a person as he was, and as much anger as he had, he also believed in the potential of humanity. He was the one who identified, early on, the potential for using technology in a way that elevates the human. I think that was his whole life purpose, and that goes back to our whole conversation, too. So yeah, Steve Jobs was a real visionary. He was also a person who was in love with India, and he said that doing psychedelics was really what created his entire vision and mission. He was very early to a lot of these things.

Brian Dubow:

I love that, I love that. And I'm waiting for the day, Chad, when your name is in that category: there's Carl Jung, there's Steve Jobs, there's Ram Dass, and there's Chad Woodford. I don't know; we'll see how your hero's journey unfolds. But this was a lot of fun, Chad. Thank you for all these insights. I know I'm going to keep reaching out to you as the world unfolds, and whether it's my fears or my excitement, I'm going to be like, can you confirm this, Chad?

Chad Woodford:

Yeah, happy to do it.

Brian Dubow:

I appreciate that. I'm sure our listeners are just as intrigued. If they want to follow your journey, or your YouTube channel, or whatever else, where can they find you?

Chad Woodford:

Yeah, so on Instagram and Threads I'm cosmic wit, and my website is cosmic.diamonds. The podcast is Cosmic Intelligence. And I guess you can still find me on Twitter, or X, or whatever it's called now, at CHD.

Brian Dubow:

Beautiful. We'll get those all linked in the show notes. Chad, this was a blast. Thank you so much; it was a lot of fun. We'll have to sync up again soon, and thanks for spreading some knowledge and happiness with our audience. Talk to you later.

Chad Woodford:

It's my passion. Thanks, Brian. Take care.
