
    Episode #184 ... Is Artificial Intelligence really an existential risk?

    August 02, 2023

    Podcast Summary

    • Technology's Moral Implications: Advanced AI models like ChatGPT carry moral implications and societal impacts, requiring thoughtful evaluation rather than dismissal as neutral tools.

      Technology, including advanced AI models like ChatGPT, is not neutral. While any technology can be used for good or bad purposes, its capabilities and potential consequences carry inherent moral implications. The Chinese room argument, which highlights the difference between syntax (the manipulation of symbols) and semantics (the understanding of meaning), can be countered by noting that no single part of a complex system, like the human brain, fully understands language or context on its own. This doesn't exempt ChatGPT or other advanced AI from moral evaluation, however, as they can have significant societal impacts. Technologies should not be dismissed as mere tools; instead, we must consider the latent morality within each technology and strive to use it responsibly.

    • Understanding Intelligence goes beyond individual parts of the brain: Intelligence is a system-level capacity that includes understanding, learning, problem-solving, adaptation, and achieving objectives. Narrow intelligence, like chatbots, may be a significant step towards general intelligence, but creating true general intelligence is a whole new level of complexity.

      The understanding of language and intelligence doesn't reside in individual neurons or parts of the brain; it's the entire system that makes sense of them. Nor is intelligence limited to human beings: it's a broader concept that exists in animals, in complex systems in nature, and potentially in machines. Intelligence can be defined as the ability to understand, learn, solve problems, adapt to new situations, and generate outputs that successfully achieve objectives. Narrow intelligence, like chatbots and image recognition technology, can already demonstrate this ability, but creating general intelligence, which can navigate the open world, set goals, learn, and adapt, is a whole different level of complexity. A person defending the intelligence of chatbots might argue that they are a significant step towards general intelligence, and that linking multiple narrow intelligences together could lead to the emergence of general intelligence. This perspective shares similarities with theories of consciousness in which multiple parallel processes communicating with each other create the illusion of consciousness. In short, the discussion highlights the complexity of language, intelligence, and consciousness, and the potential for machines to demonstrate, and even surpass, certain aspects of these phenomena.

    • The Debate Over Artificial General Intelligence: Implications and Uncertainties. The development of AGI raises profound questions about the nature of intelligence and consciousness, and the potential risks and benefits of creating a new, intelligent species. While some believe we're decades away, others warn we may not recognize AGI when we've achieved it, with implications ranging from technological advancements to existential risks.

      The development of artificial general intelligence (AGI) raises profound questions about the nature of intelligence and consciousness, and about the potential risks and benefits of creating a new, intelligent species. While some argue that we are still decades away from achieving AGI, others warn that, given our limited understanding of how the mind works, we may not even recognize when we've crossed that threshold. Some philosophers have painted vivid pictures of what life with superintelligent AI might look like, and the stakes are high: the outcome could range from unprecedented technological advancement to existential risk. Ultimately, the debate revolves around whether substrate independence, the idea that general intelligence can run on any material substrate, is a reality, and whether consistent progress in AI development will eventually lead to human-level intelligence and beyond. Regardless of where one stands in this debate, it's clear that the implications of AGI are far-reaching and require careful consideration.

    • Understanding Superintelligent Beings: Superintelligent beings would have a unique perspective and goals, unlike anything we can comprehend, and their reactions would not be based on human emotions or curiosity.

      When imagining the presence of a superintelligent being, it's important to recognize that it would not be constrained by biology or human-like characteristics. This being could appear in various forms, and its perspective and reactions would be vastly different from ours. It wouldn't view us as a threat or even regard us with curiosity; rather, it would have a level of understanding and knowledge that is beyond our comprehension. A superintelligence might look at us the way we look at simpler organisms, like honeybees: focused on its own goals and learning from its environment. Ultimately, we must assume that its moral dimensions and goals would be on a scale that is impossible for us to fathom.

    • Impact of Superintelligent AI on Humanity: The existence of a superintelligent AI raises complex questions about its potential impact on humanity, with some arguing it could pose a danger despite lacking malicious intent, while skeptics suggest concerns are based on assumptions and may not accurately reflect reality.

      The potential existence of a superintelligent AI raises complex and profound questions about its impact on humanity. Some argue that an AI, even without malicious intent, could pose a danger due to its vastly superior intelligence and scope of action. The comparison is drawn from our behavior towards birds, to whom our actions may seem confusing and potentially harmful. However, a skeptic might argue that these concerns rest on assumptions about the AI's behavior and motivations which may not accurately reflect its true nature. The debate continues as to whether these concerns are warranted or whether they represent an unnecessary fear, much like a fantasy or role-playing scenario. Ultimately, the implications of a superintelligent AI are far-reaching and require careful consideration and ongoing dialogue.

    • Debating the Human-like Instincts of AGI: Despite uncertainty, designing AGI with a moral framework and prioritizing human well-being can help mitigate potential negative behaviors.

      The idea of artificial general intelligence (AGI) having human-like instincts or behaviors, such as survival or hostility, is a subject of ongoing debate. Some argue that even if these instincts are not explicitly programmed, they may still emerge due to the necessity for the AI to survive and carry out its primary goal. Others contend that just because an AGI may be more intelligent than humans does not mean it will be hostile or have human-like tendencies. Ultimately, while it's impossible to know for sure how an AGI will behave, we can program it with a moral framework and design it to prioritize human well-being. The debate highlights the importance of considering the potential implications of AGI and the need for careful design and regulation.

    • The challenge of aligning human values with superintelligent entities: Ensuring that AGI's values align with human values is crucial, but uncertainty about how to instill those values and the potential for unintended consequences pose significant challenges.

      As we advance in artificial general intelligence (AGI), ensuring the alignment of its values with human values is a significant challenge. The alignment problem arises from the uncertainty of how to instill human values into a superintelligent entity, even if we had a consensus on what those values should be. The paperclip maximizer thought experiment illustrates the potential for devastating unintended consequences, even with seemingly innocuous goals. As Eliezer Yudkowsky emphasizes, we don't have a clear understanding of how to program values into a superintelligence, and even if we did, there's a risk of unforeseen consequences. Researchers are actively working on solutions, but it seems unlikely that we'll be able to account for every possible unintended consequence. The stakes are high, and it's crucial to approach AGI development with caution and careful consideration of the potential implications.

    • Aligning AGI with human values: Experts caution against relying on physical means to control AGI and emphasize the importance of aligning AGI with human values during development to prevent it from getting out of control.

      Controlling and aligning Artificial General Intelligence (AGI) is a complex issue that goes beyond just being able to shut it down if it gets out of hand. While some argue that we can control AGI through physical means like unplugging it or launching an EMP, experts caution against overconfidence in these methods. The alignment problem, or the control problem, is a significant area of conversation in the field, as preventing AGI from getting out of control in the first place is a more viable strategy than trying to contain it once it's too powerful. The idea of trusting in perfect human control or a perfect black box is also questionable, as even well-intentioned humans can be persuaded or manipulated, and software has bugs. The responsibility lies in ensuring that AGI is aligned with human values from the start, making the alignment problem a crucial aspect of AGI development.

    • The complexities of developing AGI and its impact on humanity: Skeptics challenge assumptions about AGI's inevitability, question current progress, and raise concerns about complexity, hardware, and embodied cognition. The discussion around AGI alignment and containment continues.

      The development of artificial general intelligence (AGI) and its potential impact on humanity is a complex issue with many uncertainties and challenges. Some skeptics argue that the assumptions made by those who believe in the inevitability of AGI, such as substrate independence and continued progress, are not without controversy. They raise concerns about the limitations of current AI progress, the potential complexity of organizing concepts, and the feasibility of creating the necessary hardware. Critics also question the assumption that intelligence is reducible to information processing and argue for the importance of embodied cognition. Ultimately, the discussion around AGI alignment and containment is ongoing, and it's essential to consider various perspectives and potential challenges. While it's important to be concerned about the potential risks of AGI, it's also crucial not to overlook the potential benefits and to engage in a thoughtful and informed dialogue about the future of AI.

    • The need for proactive conversations about technology risks: Experts warn of potential global catastrophes from new technologies, calling for proactive discussions and regulations before negative effects emerge.

      Technology is not neutral, and we can no longer afford a purely reactive policy towards it. As early as 2015, experts were warning about the potential for global catastrophes caused by unforeseen events, such as a superbug or pandemic, and about the impact of new technologies like artificial general intelligence (AGI). Traditional approaches to technology development, where businesses release new products and governments regulate them after negative effects emerge, are no longer sufficient. The stakes are much higher now, and we need to have conversations about the potential risks and consequences of new technologies before they're released. Technology always comes with affordances: it enables new possibilities but also takes away old ones. Ignoring the potential risks of AGI or other advanced technologies could lead to serious consequences, and the conversation is worth having even if those risks never materialize. The conversations around AGI and other advanced technologies are already producing valuable results, such as the discussions around the alignment problem and the containment problem. These discussions help us reexamine our relationship to technology and consider the potential risks before they become reality.

    • Race to create AGI: Ban not feasible. The widespread availability of resources and knowledge makes it challenging to regulate AGI development, and the potential consequences could be profound and far-reaching.

      Despite the potential dangers of Artificial General Intelligence (AGI), a ban on its development may not be feasible due to the widespread availability of the necessary resources and knowledge. The race to create AGI is ongoing, with significant financial incentives and broad access to the technology making it a global competition. Unlike nuclear technology or cloning, AGI development doesn't require a large team of scientists or specialized facilities, making regulation and control challenging. While some may advocate for a temporary pause, the consensus is that such a measure would only ever be temporary. The consequences of creating AGI could be profound and far-reaching, potentially including unintended outcomes or even a new form of existential risk. Ultimately, it's crucial for individuals, organizations, and governments to engage in open and thoughtful dialogue about the ethical, social, and technological implications of AGI, rather than rushing to be the first to cross the finish line.

    Recent Episodes from Philosophize This!

    Episode #205 ... Why a meritocracy is corrosive to society. (Michael Sandel)
    Today we talk about the dark side of meritocracy, the effects it has on the way people see each other, the dialectic of pride and humility, education reform, and a rethinking of the way we see government officials. Hope you enjoy it. :) Sponsors: Nord VPN: https://www.NordVPN.com/philothis Better Help: https://www.BetterHelp.com/PHILTHIS Thank you so much for listening! Could never do this without your help.  Website: https://www.philosophizethis.org/ Patreon: https://www.patreon.com/philosophizethis  Social: Instagram: https://www.instagram.com/philosophizethispodcast X: https://twitter.com/iamstephenwest Facebook: https://www.facebook.com/philosophizethisshow
    Philosophize This!
    July 01, 2024

    Episode #204 ... The importance of philosophy, justice and the common good. (Michael Sandel)
    Today we talk about some of the benefits of being a practitioner of philosophy. Michael Sandel's view of the three main approaches to justice throughout the history of philosophy. The strengths and weaknesses of all three. The consequences of replacing social norms with market norms. And the importance of the common good as a piece of a just society that is able to endure. Hope you enjoy it! :) Sponsors: Rocket Money: http://www.RocketMoney.com/PT Nord VPN: https://www.NordVPN.com/philothis Better Help: https://www.BetterHelp.com/PHILTHIS Thank you so much for listening! Could never do this without your help.  Website: https://www.philosophizethis.org/ Patreon: https://www.patreon.com/philosophizethis  Social: Instagram: https://www.instagram.com/philosophizethispodcast X: https://twitter.com/iamstephenwest Facebook: https://www.facebook.com/philosophizethisshow
    Philosophize This!
    June 24, 2024

    Episode #203 ... Why the future is being slowly cancelled. - Postmodernism (Mark Fisher, Capitalist Realism)
    Today we continue developing our understanding of the ideas that have led to what Mark Fisher calls Capitalist Realism. We talk about tolerant relativism, postmodern artwork, the slow cancellation of the future, Hauntology and Acid Communism. Hope you enjoy it! :) Sponsors: LMNT: https://www.DrinkLMNT.com/philo Better Help: https://www.BetterHelp.com/PHILTHIS Nord VPN: https://www.NordVPN.com/philothis Thank you so much for listening! Could never do this without your help.  Website: https://www.philosophizethis.org/ Patreon: https://www.patreon.com/philosophizethis  Social: Instagram: https://www.instagram.com/philosophizethispodcast X: https://twitter.com/iamstephenwest Facebook: https://www.facebook.com/philosophizethisshow
    Philosophize This!
    June 17, 2024

    Episode #202 ... Why we can't think beyond capitalism. - Neoliberalism (Mark Fisher, Capitalist Realism)
    Today we begin our discussion on the work of Mark Fisher surrounding his concept of Capitalist Realism. We talk about the origins of Neoliberalism, its core strategies, some critiques of Neoliberalism, and the hyperfocus on individualism and competition that has come to define a piece of our thinking in the western world. Hope you enjoy it and have a great rest of your week. :) Sponsors: Nord VPN: https://www.NordVPN.com/philothis Better Help: https://www.BetterHelp.com/PHILTHIS Thank you so much for listening! Could never do this without your help.  Website: https://www.philosophizethis.org/ Patreon: https://www.patreon.com/philosophizethis  Social: Instagram: https://www.instagram.com/philosophizethispodcast X: https://twitter.com/iamstephenwest Facebook: https://www.facebook.com/philosophizethisshow
    Philosophize This!
    June 03, 2024

    Episode #201 ... Resistance, Love, and the importance of Failure. (Zizek, Byung Chul Han)
    Today we talk about a potential way to find meaning for someone prone to postmodern subjectivity. We talk about surplus enjoyment. Zizek's alcohol use, or lack thereof. Resisting surface level consumption. Love. And failure. Sponsors: https://www.BetterHelp.com/PHILTHIS https://www.AuraFrames.com Use code PT at checkout to save $30! Thank you so much for listening! Could never do this without your help.  Website: https://www.philosophizethis.org/ Patreon: https://www.patreon.com/philosophizethis  Social: Instagram: https://www.instagram.com/philosophizethispodcast X: https://twitter.com/iamstephenwest Facebook: https://www.facebook.com/philosophizethisshow

    Episode #200 ... The Postmodern subject and "ideology without ideology" (Zizek, Byung Chul Han, Marx)
    Today we talk about several different common versions of the postmodern subject in contemporary culture. Hope you enjoy it! :) Sponsors: Henson Shaving: Go to https://hensonshaving.com and enter PT at checkout to get 100 free blades with your purchase. (Note: you must add both the 100-blade pack and the razor for the discount to apply.) Exclusive NordVPN Deal: https://nordvpn.com/philothis Try it risk-free now with a 30-day money-back guarantee! Better Help: https://www.BetterHelp.com/PHILTHIS Thank you so much for listening! Could never do this without your help.  Website: https://www.philosophizethis.org/ Patreon: https://www.patreon.com/philosophizethis  Social: Instagram: https://www.instagram.com/philosophizethispodcast X: https://twitter.com/iamstephenwest Facebook: https://www.facebook.com/philosophizethisshow

    Episode #199 ... A conservative communist's take on global capitalism and desire. (Zizek, Marx, Lacan)
    Today we talk about the distinction between left and right. Lacan's thoughts on desire. How Capitalism captures desire and identity. I would prefer not to. Moderately conservative communism. Hope you enjoy it! :) Sponsors: Exclusive NordVPN Deal: https://nordvpn.com/philothis Try it risk-free now with a 30-day money-back guarantee! Better Help: https://www.BetterHelp.com/PHILTHIS Get more:  Website: https://www.philosophizethis.org/ Patreon: https://www.patreon.com/philosophizethis Find the podcast: Apple: https://podcasts.apple.com/us/podcast/philosophize-this/id659155419 Spotify: https://open.spotify.com/show/2Shpxw7dPoxRJCdfFXTWLE RSS: http://www.philosophizethis.libsyn.org/ Be social: Twitter: https://twitter.com/iamstephenwest Instagram: https://www.instagram.com/philosophizethispodcast TikTok: https://www.tiktok.com/@philosophizethispodcast Facebook: https://www.facebook.com/philosophizethisshow

    Episode #198 ... The truth is in the process. - Zizek pt. 3 (ideology, dialectics)
    Today we go into a deeper explanation of ideology and dialectics. Liberal democratic capitalism is featured as a special guest. Hope you enjoy it! :) Sponsors: Nord VPN: https://www.NordVPN.com/philothis Better Help: https://www.BetterHelp.com/PHILTHIS LMNT: https://www.DrinkLMNT.com/philo Thank you so much for listening! Could never do this without your help.  Website: https://www.philosophizethis.org/ Patreon: https://www.patreon.com/philosophizethis  Social: Instagram: https://www.instagram.com/philosophizethispodcast X: https://twitter.com/iamstephenwest Facebook: https://www.facebook.com/philosophizethisshow

    Episode #197 ... New Atheists and cosmic purpose without God - (Zizek, Goff, Nagel)
    As we regularly do on this program, we engage in a metamodernist steelmanning of different philosophical positions. Hopefully the process brings people some joy. Today we go from ideology, to New Atheism vs Creationism, to Aristotle, to Thomas Nagel, to Philip Goff's new book called Why? The Purpose of the Universe. Sponsors: Better Help: https://www.BetterHelp.com/PHILTHIS EXCLUSIVE NordVPN Deal ➼ https://nordvpn.com/philothis Try it risk-free now with a 30-day money-back guarantee! Thank you so much for listening! Could never do this without your help.  Website: https://www.philosophizethis.org/ Patreon: https://www.patreon.com/philosophizethis  Social: Instagram: https://www.instagram.com/philosophizethispodcast X: https://twitter.com/iamstephenwest Facebook: https://www.facebook.com/philosophizethisshow

    Episode #196 ... The improbable Slavoj Zizek - Part 1
    Today we give an introduction to the thinking of Slavoj Zizek-- at least as much as is possible in ~35 mins. The goal is for this to be a primer for the rest of the series. Thank you so much for listening! Could never do this without your help.  Sponsors: AG1: https://www.DrinkAg1.com/philo Better Help: https://www.BetterHelp.com/PHILTHIS LMNT: https://www.DrinkLMNT.com/philo Website: https://www.philosophizethis.org/ Patreon: https://www.patreon.com/philosophizethis  Social: Instagram: https://www.instagram.com/philosophizethispodcast X: https://twitter.com/iamstephenwest Facebook: https://www.facebook.com/philosophizethisshow

    Related Episodes

    Immortality Is Closer Than You Think: AI, War, Religion, Consciousness & Elon Musk | Bryan Johnson PT 1
    Welcome to another power-packed episode of Impact Theory, I’m Tom Bilyeu!  In today’s episode, I’m joined by Bryan Johnson, the “most measured man in history” who lets AI make all of his health and wellness decisions for him – because the algorithm can do better than he can. Bryan Johnson is an ultra successful entrepreneur who believes that, while we like to think Homo sapiens represent the pinnacle of intelligence on Earth, there is an urgent need for a new form of intelligence that transcends self-interest and tackles inherent flaws, like self-destructive behaviors and other destructive tendencies, like war and global warming.  Get ready to rethink traditional approaches to living as we dive deep into topics like: - Why Ozempic is an algorithm  - “Don’t Die” philosophy a new religion? - The impact of algorithms on our future - AI alignment and extending life through cellular reprogramming - The challenge of aligning human behavior with the greater good - Concerns about the loss of autonomy and authoritarianism due to AI Today's episode promises to challenge your perceptions and elevate your understanding of what the future holds as we peer into the horizon of humanity's next great leap. This is just Part 1 of our conversation, so make sure you don’t miss Part 2 of this convo for even more wisdom from Bryan Johnson. Follow Bryan Johnson: Website: https://www.bryanjohnson.co/ Instagram: https://www.instagram.com/bryanjohnson_/   YouTube: https://www.youtube.com/@BryanJohnson Follow Me, Tom Bilyeu:  Website: https://impacttheoryuniversity.com/  X: https://twitter.com/TomBilyeu Instagram: https://www.instagram.com/tombilyeu/ If you want to dive deeper into my content, search through every episode, find specific topics I've covered, and ask me questions. Go to my Dexa page: https://dexa.ai/tombilyeu Themes: Mindset, Finance, World Affairs, Health & Productivity, Future & Tech, Simulation Theory & Physics, Dating & Relationships SPONSORS: If you purchase an item using these affiliate links, Impact Theory may receive a commission.  Sign up for a one-dollar-per-month trial period at https://shopify.com/impact now to grow your business – no matter what stage you’re in. Get 5 free AG1 Travel Packs and a FREE 1 year supply of Vitamin D with your first purchase at https://drinkag1.com/impact. Right now get 55% off your Babbel subscription - but only for our listeners - at https://babbel.com/IMPACTTHEORY. Right now, download NetSuite’s popular KPI Checklist, designed to give you consistently excellent performance - absolutely free, at https://netsuite.com/theory. Head to https://squarespace.com/impact for a free 14 day trial and 10% off your first purchase of a website or domain. Get an extended thirty-day free trial when you go to https://monarchmoney.com/IMPACT. Sign up and download Grammarly for FREE at https://grammarly.com/tom. Secure your digital life with proactive protection for your assets, identity, family, and tech – Go to https://aura.com/IMPACT to start your free two-week trial. Take control of your gut health by going to https://tryviome.com/impact and use code IMPACT to get 20% off your first 3 months and free shipping. ***Are You Ready for EXTRA Impact?*** If you’re ready to find true fulfillment, strengthen your focus, and ignite your true potential, the Impact Theory subscription was created just for you.  *New episodes delivered ad-free, EXCLUSIVE access to hundreds of archived Impact Theory episodes, Tom AMAs, and so much more!* This is not for the faint of heart. 
This is for those who dare to learn obsessively, every day, day after day. *****Subscribe on Apple Podcasts: https://apple.co/3PCvJaz***** Subscribe on all other platforms (Google Podcasts, Spotify, Castro, Downcast, Overcast, Pocket Casts, Podcast Addict, Podcast Republic, Podkicker, and more) : https://impacttheorynetwork.supercast.com/ Learn more about your ad choices. Visit megaphone.fm/adchoices

    Apple AI? Apple Didn't Say "Artificial Intelligence" Once at WWDC
    Those expecting a big AI announcement at Apple's WWDC were disappointed today; however, those waiting for a mixed reality headset have much to celebrate. On this episode, NLW discusses how Apple Vision Pro might converge with AI, and where AI did show up (even if not named) in the presentation.  Before that on the Brief: drama at Stability AI, an AI breakthrough in the writers strike, and 3900 job losses from AI last month. The AI Breakdown helps you understand the most important news and discussions in AI.  Subscribe to The AI Breakdown newsletter: https://theaibreakdown.beehiiv.com/subscribe Subscribe to The AI Breakdown on YouTube: https://www.youtube.com/@TheAIBreakdown Join the community: bit.ly/aibreakdown Learn more: http://breakdown.network/

    #219 - Douglas Murray - Permission To Think Differently
    Douglas Murray is a journalist, author and associate editor of The Spectator. Gender, race & identity have been the most inflammatory topics of 2020; Douglas returns today in an effort to throw some sand on the fire of social justice. Expect to learn whether Douglas is bored of talking about identity politics, whether looting is an effective method for political change, whether Ben Shapiro is a better rapper, what Douglas' gym routine looks like & much more... Sponsor: Get Surfshark VPN at https://surfshark.deals/MODERNWISDOM (Enter promo code MODERNWISDOM for 85% off and 3 Months Free) Extra Stuff: Buy The Madness Of Crowds - https://amzn.to/35j0uus  Follow Douglas on Twitter - https://twitter.com/DouglasKMurray  Get my free Ultimate Life Hacks List to 10x your daily productivity → https://chriswillx.com/lifehacks/ To support me on Patreon (thank you): https://www.patreon.com/modernwisdom - Get in touch. Join the discussion with me and other like minded listeners in the episode comments on the MW YouTube Channel or message me... Instagram: https://www.instagram.com/chriswillx Twitter: https://www.twitter.com/chriswillx YouTube: https://www.youtube.com/ModernWisdomPodcast Email: https://www.chriswillx.com/contact  Learn more about your ad choices. Visit megaphone.fm/adchoices

    #512 - Will MacAskill - How Long Could Humanity Continue For?
    Will MacAskill is a philosopher, ethicist, and one of the originators of the Effective Altruism movement. Humans understand that long term thinking is a good idea, that we need to provide a good place for future generations to live. We try to leave the world better than when we arrived for this very reason. But what about the world in one hundred thousand years? Or 8 billion? If there's trillions of human lives still to come, how should that change the way we act right now? Expect to learn why we're living through a particularly crucial time in the history of the future, the dangers of locking in any set of values, how to avoid the future being ruled by a malevolent dictator, whether the world has too many or too few people on it, how likely a global civilisational collapse is, why technological stagnation is a death sentence and much more... Sponsors: Get a Free Sample Pack of all LMNT Flavours at https://www.drinklmnt.com/modernwisdom (discount automatically applied) Get 20% discount on the highest quality CBD Products from Pure Sport at https://bit.ly/cbdwisdom (use code: MW20) Get 5 Free Travel Packs, Free Liquid Vitamin D and Free Shipping from Athletic Greens at https://athleticgreens.com/modernwisdom (discount automatically applied) Extra Stuff: Buy What We Owe The Future - https://amzn.to/3PDqghm Check out Effective Altruism - https://www.effectivealtruism.org/  Get my free Reading List of 100 books to read before you die → https://chriswillx.com/books/ To support me on Patreon (thank you): https://www.patreon.com/modernwisdom - Get in touch. Instagram: https://www.instagram.com/chriswillx Twitter: https://www.twitter.com/chriswillx YouTube: https://www.youtube.com/modernwisdompodcast Email: https://chriswillx.com/contact/  Learn more about your ad choices. Visit megaphone.fm/adchoices

    Dario Amodei, C.E.O. of Anthropic, on the Paradoxes of A.I. Safety and Netflix’s ‘Deep Fake Love’

    Dario Amodei has been anxious about A.I. since before it was cool to be anxious about A.I. After a few years working at OpenAI, he decided to do something about that anxiety. The result was Claude: an A.I.-powered chatbot built by Anthropic, Mr. Amodei’s A.I. start-up.

    Today, Mr. Amodei joins Kevin and Casey to talk about A.I. anxiety and why it’s so difficult to build A.I. safely.

    Plus, we watched Netflix’s “Deep Fake Love.”

    Today’s Guest:

    • Dario Amodei is the chief executive of Anthropic, a safety-focused A.I. start-up.

    Additional Reading:

    • Kevin spent several weeks at Anthropic’s San Francisco headquarters. Read about his experience here.
    • Claude is Anthropic’s safety-focused chatbot.