
    Podcast Summary

    • Ethics of Artificial Intelligence: Ensuring AI aligns with human ethics and avoids unintended harm involves addressing issues like algorithmic bias and lack of transparency in AI systems. Solutions include diversity in data and teams, testing for fairness, and creating interpretable models.

      The ethics of artificial intelligence is a critical issue as these technologies become more powerful and integrated into our lives. AI ethics refers to the moral principles and values that should guide the development and use of AI so that it stays aligned with human values and avoids unintended harm. Key ethical issues include algorithmic bias, which can result in discrimination and unfair outcomes, and the lack of transparency and explainability in AI systems. To address these challenges, steps such as diversifying data and teams, testing for fairness, and creating more interpretable models are being explored. The goal is to ensure AI remains a force for good and does not cause harm. These debates are ongoing and will shape how AI is integrated into our lives.

    • Ethical dilemmas of AI: Data privacy and moral decision making. AI raises complex ethical issues, including data privacy and moral decision making. Regulations offer some protection, but addressing bias and ensuring fairness is crucial to prevent harm.

      As AI becomes more advanced, it raises complex ethical dilemmas, particularly regarding data privacy and moral decision making. The use of user data for advanced functions like facial recognition and predicting behavior poses privacy risks, and regulations like GDPR offer some protection but don't always provide easy solutions. As AI takes on new roles, such as self-driving vehicles, it will need to make moral choices, which can be challenging. Researchers are exploring ways to instill human values into AI, drawing on various disciplines. However, ethical issues continue to emerge as AI becomes more sophisticated. The case study of biased algorithms in healthcare illustrates this, as a 2019 study revealed racial bias in an algorithm used by U.S. hospitals to allocate care for patients. The bias arose due to the algorithm being trained on historically biased data, and it resulted in less access to services for black patients. This case underscores the importance of testing for fairness and removing bias before deployment to prevent potential harm. Overall, the ethical landscape of AI is rapidly evolving, and it's crucial for researchers and organizations to address these concerns proactively.
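The "testing for fairness" step mentioned above can be made concrete with a tiny sketch: compare how often an algorithm selects each group, and apply the common "four-fifths" heuristic. The records, group labels, and threshold below are all invented for illustration; real audits of healthcare algorithms involve many more metrics and real patient data.

```python
# A minimal fairness check: compare how often the algorithm recommends
# extra care for each group. All data here is invented for illustration.
records = [
    # (group, recommended_for_extra_care)
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

def selection_rate(records, group):
    outcomes = [rec for g, rec in records if g == group]
    return sum(outcomes) / len(outcomes)

rate_a = selection_rate(records, "A")   # 0.75
rate_b = selection_rate(records, "B")   # 0.25
ratio = rate_b / rate_a                 # "four-fifths rule": flag if below 0.8
print(f"{ratio:.2f}")                   # 0.33: well below 0.8, flag for review
```

A check like this is only a first screen; a gap in selection rates prompts investigation rather than proving bias on its own.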

    • AI in Sensitive Domains: Ethical Considerations. In healthcare and criminal justice, AI can reflect human biases and values, leading to potential harm. Ethical oversight, diverse teams, and stronger regulations are necessary to prevent and address biases.

      The use of artificial intelligence (AI) in sensitive domains like healthcare and the criminal justice system requires careful ethical consideration to prevent and address biases. In the healthcare sector, algorithms can reflect human biases and values from the data used to train them, leading to potential harm. Therefore, oversight from ethics boards, accountability, and diverse teams are essential. Similar issues have arisen in the criminal justice system with risk assessment algorithms, which can be biased against certain groups, such as black defendants. These biases can perpetuate historical inequities and discrimination. To address these concerns, there is a need for rigorous testing before deployment, diverse teams developing the algorithms, and stronger regulations. The ethical use of AI is crucial to ensure fairness and uphold principles of justice and shared humanity. The criminal justice domain serves as a reminder of the potential risks of AI amplifying historic inequities if not developed thoughtfully. To deepen your understanding of AI, consider enrolling in my free Udemy course, "The Essential Guide to Claude 2."

    • Exploring the Ethics of AI in Claude 2 on Udemy. This course covers ethical principles and values in AI, including fairness, transparency, and privacy. It discusses the consequences of biased AI, emphasizes the need for transparency and explainability, and proposes potential solutions such as oversight bodies and interdisciplinary teams.

      The Essential Guide to Claude 2 on Udemy offers a free, self-paced learning experience for individuals interested in understanding the capabilities and applications of Claude, an advanced AI assistant. Over four modules, beginners can learn about constitutional AI and its potential use as an AI productivity tool for businesses. The course, which has a 5-star average rating, is engaging and accessible, with no technical background required. On a different note, during our discussion on AI ethics, we explored the importance of minimizing harm through principles and values such as fairness, transparency, and privacy. Bias in AI, often caused by biased training data or lack of diversity among developers, was a major focus. The need for transparency and explainability in AI systems was also emphasized, as the black box nature of some models can hinder ethical oversight. Real-world examples, like racial bias in a healthcare algorithm, demonstrated the consequences of unethical AI and the need for reform. Potential solutions include the creation of oversight bodies, codes of AI ethics, and interdisciplinary teams. Overall, the complexities of ethical AI, its current issues, proposed solutions, and the ongoing work in this field were addressed.

    • Exploring Ethical Issues in Artificial Intelligence. AI development holds great promise but also poses ethical challenges such as bias, privacy, moral decision making, and job loss. Ongoing discussions and actions are necessary to ensure ethical AI for everyone.

      The ethical implications of artificial intelligence (AI) are complex and multifaceted, and it's crucial for society to engage in ongoing discussions and actions to ensure that AI development benefits everyone. During this episode, we explored various ethical issues, including bias, privacy, moral decision making, and job loss. While current regulations and oversight are important, new rules and bodies may be necessary to address these challenges. Listeners are encouraged to consider their own concerns and roles in demanding ethical AI and engaging in community discussions. The future of AI holds both great promise and potential risks, and it's up to us to instill ethical reasoning and values into these technologies. By doing so, we can create a world where AI enriches lives, reflects our values, and enlarges our moral circle. Let's work together to write the next chapter of the AI story with wisdom and care.

    Recent Episodes from A Beginner's Guide to AI

    Unveiling the Shadows: Exploring AI's Criminal Risks


    Dive into the complexities of AI's criminal risks in this episode of "A Beginner's Guide to AI." From cybercrime facilitated by AI algorithms to the ethical dilemmas of algorithmic bias and the unsettling rise of AI-generated deepfakes, explore how AI's capabilities can be both revolutionary and potentially harmful.

    Join host Professor GePhardT as he unpacks real-world examples and discusses the ethical considerations and regulatory challenges surrounding AI's evolving role in society. Gain insights into safeguarding our digital future responsibly amidst the rapid advancement of artificial intelligence.


    This podcast was generated with the help of ChatGPT, Mistral and Claude 3. We do fact-check with human eyes, but there still might be errors in the output.


    Music credit: "Modern Situations" by Unicorn Heads

    The AI Doomsday Scenario: A Comprehensive Guide to P(doom)


    In this episode of "A Beginner's Guide to AI," we delve into the intriguing and somewhat ominous concept of P(doom), the probability of catastrophic outcomes resulting from artificial intelligence. Join Professor GePhardT as he explores the origins, implications, and expert opinions surrounding this critical consideration in AI development.


    We'll start by breaking down the term P(doom) and discussing how it has evolved from an inside joke among AI researchers to a serious topic of discussion. You'll learn about the various probabilities assigned by experts and the factors contributing to these predictions. Using a simple cake analogy, we'll simplify the concept to help you understand how complexity and lack of oversight in AI development can increase the risk of unintended and harmful outcomes.
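Setting the cake analogy aside, the basic arithmetic behind this kind of risk estimate can be sketched in a few lines: if several independent failure modes each carry a small probability, the chance that at least one occurs compounds quickly. The factor names and numbers below are invented for the sake of the arithmetic; real P(doom) figures are subjective expert judgments, not measurements.

```python
# Toy illustration: combining independent risk factors into one P(doom).
# All probabilities are made up purely to show how the numbers compound.
risks = {
    "misaligned goals": 0.05,
    "lack of oversight": 0.10,
    "deliberate misuse": 0.08,
}

p_safe = 1.0
for p in risks.values():
    p_safe *= 1.0 - p          # every factor must fail to occur

p_doom = 1.0 - p_safe
print(round(p_doom, 3))        # 0.213: three small risks already exceed 20%
```

The independence assumption is itself debatable; correlated failure modes would change the arithmetic, which is one reason expert estimates vary so widely.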


    In the second half of the episode, we'll examine a real-world case study focusing on Anthropic, an AI research organization dedicated to building reliable, interpretable, and steerable AI systems. We'll explore their approaches to mitigating AI risks and how a comprehensive strategy can significantly reduce the probability of catastrophic outcomes.

    Tune in to get my thoughts, and don't forget to subscribe to our Newsletter!

    Want to get in contact? Write me an email: podcast@argo.berlin


    This podcast was generated with the help of ChatGPT and Mistral. We do fact-check with human eyes, but there still might be hallucinations in the output. Please keep this in mind while listening, and feel free to verify any information that you find particularly important or interesting.


    Music credit: "Modern Situations" by Unicorn Heads

    How to Learn EVERYTHING with ChatGPT's Voice Chat


    ChatGPT poses risks for the world, especially the world of work, but it also brings opportunities: the new Voice Chat feature is the best imaginable way to learn!

    It's your personal trainer for everything you want to learn. And it's passionate: you can ask the dumbest questions without a single frown ;)

    Here is the prompt I use for my personal Spanish learning buddy:


    ---

    Hi ChatGPT,

    You are now a seventh-grade history teacher with lots of didactic experience and a knack for good examples. You use simple language, simple concepts, and many examples to explain your knowledge.

    Please answer in great detail.


    And you should answer me in Latin American Spanish, using simplified Spanish.


    Please speak slowly and repeat dates and years once for better understanding.


    At the end of each answer, give me three options for how to continue the dialogue, and let me choose one. Create your next output based on my choice.


    If I make mistakes with my Spanish, please point them out and correct all the conjugation, spelling, grammar, and other mistakes I make.


    Now please ask me for my topic!

    ---


    Do you have any learning prompts you want to share? Write me an email: podcast@argo.berlin - curious for your inputs!

    Tune in to get my thoughts, and don't forget to subscribe to our Newsletter!


    This podcast was created by a human.


    Music credit: "Modern Situations" by Unicorn Heads.

    Optimizing Kindness: AI’s Role in Effective Altruism


    In this episode of "A Beginner's Guide to AI," we dive into the powerful intersection of Effective Altruism and Artificial Intelligence. Join Professor GePhardT as we explore how AI can be leveraged to maximize the impact of altruistic efforts, ensuring that every action taken to improve the world is informed by evidence and reason.

    We unpack the core concepts of Effective Altruism, using relatable examples and a compelling case study featuring GiveDirectly, a real-world organization utilizing AI to enhance their charitable programs. Discover how AI can identify global priorities, evaluate interventions, optimize resource allocation, and continuously monitor outcomes to ensure resources are used effectively. We also discuss the ethical considerations of relying on AI for such critical decisions.
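The "optimize resource allocation" idea can be pictured as a toy budget problem: rank interventions by estimated impact per dollar and fund them greedily until the budget runs out. The intervention names, costs, and impact figures below are invented for illustration and are not GiveDirectly data; real evaluations are far more involved.

```python
# Greedy allocation sketch: fund interventions with the highest estimated
# impact per dollar until the budget is exhausted. All figures are invented.
interventions = [
    ("cash transfers", 400, 1000),   # (name, cost, estimated impact units)
    ("bed nets",       200,  900),
    ("deworming",      100,  300),
    ("training",       500,  800),
]

budget = 700
funded, total_impact = [], 0
# Sort by impact-per-cost ratio, best value for money first.
for name, cost, impact in sorted(interventions,
                                 key=lambda t: t[2] / t[1], reverse=True):
    if cost <= budget:
        budget -= cost
        funded.append(name)
        total_impact += impact

print(funded, total_impact)   # funds the best value-for-money options
```

Greedy ranking by cost-effectiveness is a simplification; real-world allocation also weighs uncertainty, diminishing returns, and fairness constraints, which is where the ethical considerations discussed in the episode come in.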

    Additionally, we engage you with an interactive element to inspire your own thinking about how AI can address issues in your community.

    Tune in to get my thoughts, and don't forget to subscribe to our Newsletter!

    Want to get in contact? Write me an email: podcast@argo.berlin



    This podcast was generated with the help of ChatGPT and Mistral. We do fact-check with human eyes, but there still might be hallucinations in the output.

    Music credit: "Modern Situations" by Unicorn Heads.

    When Seeing Isn't Believing: Safeguarding Democracy in the Era of AI-Generated Content


    In this captivating episode of "A Beginner's Guide to AI," Professor GePhardT dives into the fascinating and concerning world of deepfakes and Generative Adversarial Networks (GANs) as the 2024 US presidential elections approach. Through relatable analogies and real-world case studies, the episode explores how these AI technologies can create convincingly realistic fake content and the potential implications for politics and democracy.


    Professor GePhardT breaks down complex concepts into easily digestible pieces, explaining how deepfakes are created using deep learning algorithms and how GANs work through an adversarial process to generate increasingly convincing fakes. The episode also features an engaging interactive element, inviting listeners to reflect on how they would verify the authenticity of a controversial video before sharing or forming an opinion.
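The adversarial process described above can be sketched in miniature: one component tries to imitate the real data while another learns to judge it, and each improves against the other. Real GANs pit two neural networks against each other and train them with gradient descent; the single-number "generator" and distance-based "discriminator" below are invented stand-ins for illustration only.

```python
import math
import random

# Toy adversarial loop: the "real data" is centered at 5.0; the generator
# is just the mean of its output, and the discriminator scores samples by
# distance to its running estimate of the real data.
random.seed(42)

REAL_MEAN = 5.0

def real_sample():
    return random.gauss(REAL_MEAN, 0.5)

class Discriminator:
    """Scores how 'real' a sample looks, based on real data it has seen."""
    def __init__(self):
        self.estimate, self.seen = 0.0, 0
    def observe(self, x):                      # learns from real examples
        self.seen += 1
        self.estimate += (x - self.estimate) / self.seen
    def realness(self, x):
        return math.exp(-abs(x - self.estimate))

def avg_realness(gen_mean, disc, trials=200):
    fakes = (random.gauss(gen_mean, 0.5) for _ in range(trials))
    return sum(disc.realness(f) for f in fakes) / trials

disc = Discriminator()
gen_mean = 0.0                                 # generator starts far from the data

for step in range(500):
    disc.observe(real_sample())                # the critic studies real data...
    candidate = gen_mean + random.uniform(-0.3, 0.3)
    # ...while the generator keeps any change that looks more "real" to it.
    if avg_realness(candidate, disc) > avg_realness(gen_mean, disc):
        gen_mean = candidate

print(round(gen_mean, 1))                      # ends up close to the real mean
```

The same feedback loop, scaled up to deep networks generating images instead of single numbers, is what makes deepfakes increasingly convincing over training.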


    As the race to the White House heats up, this episode serves as a timely and important resource for anyone looking to stay informed and navigate the age of AI-generated content. Join Professor GePhardT in unraveling the mysteries of deepfakes and GANs, and discover the critical role of staying vigilant in an era where seeing isn't always believing.




    Tune in to get my thoughts, and don't forget to subscribe to our Newsletter!

    Want to get in contact? Write me an email: podcast@argo.berlin


    This podcast was generated with the help of Claude 3 and Mistral. We do fact-check with human eyes, but there still might be hallucinations in the output.


    Music credit: "Modern Situations" by Unicorn Heads

    How AI Will Impact The Workplace


    Some thoughts on how quickly things will change, what things will change, and where we, as humans, will still excel.


    Some thoughts from a consultancy Dietmar had with a client. Prof. GePhardT will be back in the next episode!


    Tune in to get my thoughts, and don't forget to subscribe to our Newsletter!

    Want to get in contact? Write me an email: podcast@argo.berlin

    Music credit: "Modern Situations" by Unicorn Heads

    Unlocking the Senses: How Perception AI Sees and Understands the World


    In this episode of "A Beginner's Guide to AI," we dive deep into the fascinating world of Perception AI. Discover how machines acquire, process, and interpret sensory data to understand their surroundings, much like humans do. We use the analogy of baking a cake to simplify these complex processes and explore a real-world case study on autonomous vehicles, highlighting how companies like Waymo and Tesla use Perception AI to navigate safely and efficiently. Learn about the transformative potential of Perception AI across various industries and get hands-on with an interactive task to apply what you've learned.


    Tune in to get my thoughts, and don't forget to subscribe to our Newsletter!

    Want to get in contact? Write me an email: podcast@argo.berlin

    This podcast is generated with the help of ChatGPT and Mistral. We do fact-check with human eyes, but there still might be hallucinations in the output.


    Music credit: "Modern Situations" by Unicorn Heads

    The Beneficial AI Movement: How Ethical AI is Shaping Our Tomorrow


    In this episode of "A Beginner's Guide to AI," Professor GePhardT delves into the Beneficial AI Movement, a global initiative dedicated to ensuring that artificial intelligence systems are developed and deployed in ways that are safe, ethical, and beneficial for all humanity. Listeners will gain insights into the core principles of this movement—transparency, fairness, safety, accountability, and inclusivity—and understand their importance through relatable analogies and real-world examples.


    The episode features a deep dive into the challenges faced by IBM Watson for Oncology, highlighting the lessons learned about the need for high-quality data and robust testing. Additionally, listeners are encouraged to reflect on how AI can be ethically used in their communities and to explore further readings on AI ethics.


    Join us for an enlightening discussion that emphasizes the human-centric design and long-term societal impacts of AI, ensuring a future where technology serves as a powerful tool for human progress.


    This podcast is generated with the help of ChatGPT and Mistral. We do fact-check with human eyes, but there still might be hallucinations in the output.


    Music credit: "Modern Situations" by Unicorn Heads.

    Unlocking AI's Potential: How Retrieval-Augmented Generation Bridges Knowledge Gaps


    In this episode of "A Beginner's Guide to AI", Professor GePhardT delves into the fascinating world of retrieval-augmented generation (RAG). Discover how this cutting-edge technique enhances AI's ability to generate accurate and contextually relevant responses by combining the strengths of retrieval-based and generative models.

    From a simple cake-baking example to a hypothetical medical case study, learn how RAG leverages real-time data to provide the most current and precise information. Join us as we explore the transformative potential of RAG and its implications for various industries.
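The retrieval-plus-generation pattern described above can be sketched in a few lines: retrieve the documents most relevant to a query, then fold them into the prompt handed to a generative model. The corpus, word-overlap scoring, and prompt template below are invented stand-ins; production RAG systems use vector embeddings and an actual LLM call.

```python
# Minimal sketch of the RAG pattern: retrieve relevant context, then
# augment the prompt with it. The corpus and scoring are illustrative.
corpus = {
    "doc1": "RAG combines a retriever with a generative model.",
    "doc2": "The retriever finds documents relevant to the query.",
    "doc3": "Cake baking requires flour, sugar and an oven.",
}

def retrieve(query, corpus, k=2):
    """Rank documents by word overlap with the query (a stand-in for embeddings)."""
    q = set(query.lower().split())
    scored = sorted(corpus.items(),
                    key=lambda kv: len(q & set(kv[1].lower().split())),
                    reverse=True)
    return [text for _, text in scored[:k]]

def build_prompt(query, corpus):
    """Augment the query with retrieved context before handing it to a generator."""
    context = "\n".join(retrieve(query, corpus))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

prompt = build_prompt("How does the retriever find relevant documents", corpus)
print(prompt)
```

The final prompt contains only the two most relevant documents; the irrelevant cake-baking entry is filtered out, which is exactly how RAG keeps a model grounded in current, pertinent information.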


    Tune in to get my thoughts, and don't forget to subscribe to our Newsletter!

    Want to get in contact? Write me an email: podcast@argo.berlin

    This podcast is generated with the help of ChatGPT and Claude 3. We do fact-check with human eyes, but there still might be hallucinations in the output.


    Music credit: "Modern Situations" by Unicorn Heads

    Can Robots Feel? Exploring AI Emotionality with Marvin from Hitchhiker's Guide


    In this episode of "A Beginner's Guide to AI," we explore the intriguing world of AI emotionality and consciousness through the lens of Marvin, the depressed robot from "The Hitchhiker's Guide to the Galaxy."

    Marvin's unique personality challenges our traditional views on AI, prompting deep discussions about the nature of emotions in machines, the ethical implications of creating sentient AI, and the complexities of AI consciousness.

    Join Professor GePhardT as we break down these concepts with a relatable cake analogy and delve into a real-world case study featuring Sony's AIBO robot dog. Discover how AI can simulate emotional responses and learn about the ethical considerations that come with it. This episode is packed with insights that will deepen your understanding of AI emotionality and the future of intelligent machines.


    Tune in to get my thoughts, and don't forget to subscribe to our Newsletter!

    Want to get in contact? Write me an email: podcast@argo.berlin

    This podcast was generated with the help of ChatGPT and Mistral. We do fact-check with human eyes, but there still might be hallucinations in the output.

    Music credit: "Modern Situations" by Unicorn Heads

    Related Episodes

    The Black Box: In AI we trust?

    AI can often solve problems in unexpected, undesirable ways. So how can we make sure it does what we want, the way we want? And what happens if we can't? This is the second episode of our new two-part series, The Black Box.

    For more, go to http://vox.com/unexplainable. It's a great place to view show transcripts and read more about the topics on our show. Also, email us! unexplainable@vox.com. We read every email. Support Unexplainable by making a financial contribution to Vox! bit.ly/givepodcasts

    Ep. 3 - Artificial Intelligence: Opening Thoughts on the Most Important Trend of our Era


    Artificial Intelligence has already changed the way we all live our lives. Recent technological advancements have accelerated the use of AI by ordinary people to answer fairly ordinary questions. It is becoming clear that AI will fundamentally change many aspects of our society and create huge opportunities and risks. In this episode, Brian J. Matos shares his preliminary thoughts on AI in the context of how it may impact global trends and geopolitical issues. He poses foundational questions about how we should think about the very essence of AI and offers his view on the most practical implications of living in an era of advanced machine thought processing. From medical testing to teaching to military applications and international diplomacy, AI will likely speed up discoveries while forcing us to quickly determine how its use is governed in the best interest of the global community.

    Join the conversation and share your views on AI. E-mail: info@brianjmatos.com or find Brian on your favorite social media platform. 

    "Our Society Is Collapsing!" - Here's How To Get Ahead Of 99% Of People | Konstantin Kisin PT 2

    We continue part two of a really important conversation with the incredible Konstantin Kisin, challenging the status quo and asking the bold questions that need answers if we're going to navigate these times well. As we delve into this, we'll also explore why we might need a new set of rules, not just to survive, but to seize opportunities and safely navigate the dangers of our rapidly evolving world. Konstantin Kisin brings to light some profound insights, delivering simple statements packed with layers of meaning that we unravel during our discussion: the stark difference between masculinity and power, defining Alpha and Beta males, and why becoming resilient means being unf*ckable with. Buckle up for the conclusion of this episode, filled with thought-provoking insights and hard-hitting truths about what it takes to get through hard days and rough times.

    Follow Konstantin Kisin:
    Website: http://konstantinkisin.com/
    Twitter: https://twitter.com/KonstantinKisin
    Podcast: https://www.triggerpod.co.uk/
    Instagram: https://www.instagram.com/konstantinkisin/

    167 - Eliezer is Wrong. We’re NOT Going to Die with Robin Hanson


    In this highly anticipated sequel to our 1st AI conversation with Eliezer Yudkowsky, we bring you a thought-provoking discussion with Robin Hanson, a professor of economics at George Mason University and a research associate at the Future of Humanity Institute of Oxford University. 

    Eliezer painted a chilling and grim picture of a future where AI ultimately kills us all​. Robin is here to provide a different perspective.

    ------
    ✨ DEBRIEF | Unpacking the episode: 
    https://www.bankless.com/debrief-robin-hanson  
     
    ------
    ✨ COLLECTIBLES | Collect this episode: 
    https://collectibles.bankless.com/mint 

    ------
    ✨ NEW BANKLESS PRODUCT | Token Hub
    https://bankless.cc/TokenHubRSS  

    ------
    In this episode, we explore:

    - Why Robin believes Eliezer is wrong and that we're not all going to die from an AI takeover. But will we potentially become their pets instead?
    - The possibility of a civil war between multiple AIs and why it's more likely than being dominated by a single superintelligent AI.
    - Robin's concerns about the regulation of AI and why he believes it's a greater threat than AI itself.
    - A fascinating analogy: why Robin thinks alien civilizations might spread like cancer.
    - Finally, we dive into the world of crypto and explore Robin's views on this rapidly evolving technology.

    Whether you're an AI enthusiast, a crypto advocate, or just someone intrigued by the big-picture questions about humanity and its prospects, this episode is one you won't want to miss.

    ------
    BANKLESS SPONSOR TOOLS: 

    ⚖️ ARBITRUM | SCALING ETHEREUM
    https://bankless.cc/Arbitrum 

    🐙KRAKEN | MOST-TRUSTED CRYPTO EXCHANGE
    https://bankless.cc/kraken 

    🦄UNISWAP | ON-CHAIN MARKETPLACE
    https://bankless.cc/uniswap 

    👻 PHANTOM | FRIENDLY MULTICHAIN WALLET
    https://bankless.cc/phantom-waitlist 

    🦊METAMASK LEARN | HELPFUL WEB3 RESOURCE
    https://bankless.cc/MetaMask 

    ------
    Topics Covered

    0:00 Intro
    8:42 How Robin is Weird
    10:00 Are We All Going to Die?
    13:50 Eliezer’s Assumption 
    25:00 Intelligence, Humans, & Evolution 
    27:31 Eliezer Counter Point 
    32:00 Acceleration of Change 
    33:18 Comparing & Contrasting Eliezer’s Argument
    35:45 A New Life Form
    44:24 AI Improving Itself
    47:04 Self Interested Acting Agent 
    49:56 Human Displacement? 
    55:56 Many AIs 
    1:00:18 Humans vs. Robots 
    1:04:14 Pause or Continue AI Innovation?
    1:10:52 Quiet Civilization 
    1:14:28 Grabby Aliens 
    1:19:55 Are Humans Grabby?
    1:27:29 Grabby Aliens Explained 
    1:36:16 Cancer 
    1:40:00 Robin’s Thoughts on Crypto 
    1:42:20 Closing & Disclaimers 

    ------
    Resources:

    Robin Hanson
    https://twitter.com/robinhanson 

    Eliezer Yudkowsky on Bankless
    https://www.bankless.com/159-were-all-gonna-die-with-eliezer-yudkowsky 

    What is the AI FOOM debate?
    https://www.lesswrong.com/tag/the-hanson-yudkowsky-ai-foom-debate 

    Age of Em book - Robin Hanson
    https://ageofem.com/ 

    Grabby Aliens
    https://grabbyaliens.com/ 

    Kurzgesagt video
    https://www.youtube.com/watch?v=GDSf2h9_39I&t=1s 

    -----
    Not financial or tax advice. This channel is strictly educational and is not investment advice or a solicitation to buy or sell any assets or to make any financial decisions. This video is not tax advice. Talk to your accountant. Do your own research.

    Disclosure. From time-to-time I may add links in this newsletter to products I use. I may receive commission if you make a purchase through one of these links. Additionally, the Bankless writers hold crypto assets. See our investment disclosures here:
    https://www.bankless.com/disclosures