
    #332 — Can We Contain Artificial Intelligence?

    August 28, 2023

    Podcast Summary

    • The Urgent Need to Address AI Challenges: Mustafa Suleyman, CEO of Inflection AI, emphasizes the importance of understanding AI risks and opportunities, drawing on his experience in the field. He highlights the need for open dialogue and action to navigate potential challenges like labor disruption, a misinformation apocalypse, and regulatory capture.

      The key takeaway from this conversation between Sam Harris and Mustafa Suleyman is the urgency of addressing the challenges posed by rapidly advancing technologies, particularly AI, and Suleyman's unique perspective on the issue given his background as a pioneer in the field. Suleyman, the CEO of Inflection AI and author of "The Coming Wave," shares his concerns about the risks and opportunities of AI, drawing on his experience co-founding DeepMind and leading AI product management and policy at Google. He emphasizes the importance of understanding the nature of intelligence, productivity growth, and labor disruption, as well as the potential for digital watermarks, regulatory capture, and the looming possibility of a misinformation apocalypse. Throughout the conversation, Suleyman highlights the need for open dialogue and action to navigate what he calls the 21st century's greatest dilemma.

    • From conflict resolution to AI safety: A co-founder of DeepMind, inspired by his experiences in conflict resolution, left his consultancy to help found a company focused on safe and ethical artificial general intelligence.

      Suleyman's experiences in conflict resolution led him to recognize the importance of the technological revolution happening in his lifetime. He left his consultancy to co-found DeepMind with Demis Hassabis and Shane Legg, who shared his passion for science, technology, and making a positive impact on the world. DeepMind's mission was to build safe and ethical artificial general intelligence (AGI). His background in conflict resolution also shaped his view of intelligence as the ability to perform well across a wide range of environments, with an emphasis on generality. DeepMind was one of the first companies to focus on AGI, and AI safety and risk became a significant aspect of Suleyman's work. He met Harris around 2015 at a conference on AI safety, where they discovered shared interests in AI and its potential impact on society.

    • DeepMind's Early Breakthroughs in AI: DeepMind's focus on deep learning and reinforcement learning led to groundbreaking advancements in AI, attracting top talent and resources, and putting AI back on the map for practical applications.

      The DeepMind team made crucial early bets on deep learning and on combining deep learning with reinforcement learning, which led to significant advancements in AI. Prior to DeepMind, there was a period of skepticism about the potential of AI due to limited progress. DeepMind's breakthroughs, such as the Atari DQN agent, which learned to play Atari games at human-level performance, caught the attention of Larry Page, leading to Google's acquisition of the company in 2014. The acquisition provided DeepMind with the resources to continue its research and development, allowing it to remain a leading player in the AI field. DeepMind's early focus on deep learning and reinforcement learning also attracted top talent, including future OpenAI co-founder Ilya Sutskever, who was a consultant for the company. Overall, DeepMind's achievements helped put AI back on the map and demonstrated its potential for practical applications.

    • Google and DeepMind's Collaboration: Combining Scale and Research. Google's collaboration with DeepMind, driven by the complexity of AI challenges, resulted in breakthroughs like AlphaZero, showcasing the generality and scalability of AI ideas with more compute. The 2023 merger of Google Brain and DeepMind further solidified this partnership, bringing all of Google's major AI research efforts under one roof.

      The collaboration between Google and DeepMind, which deepened with DeepMind's impressive achievements in gaming AI like AlphaGo and AlphaZero, was driven by the immense complexity of certain AI challenges, such as Go, with its roughly 10^170 possible board configurations. Google's scale and resources allowed multiple large-scale efforts to run in parallel, consolidating open-ended research under Google DeepMind and more focused applied research under Google Research. DeepMind's breakthroughs, including self-learning algorithms like AlphaZero, demonstrated the generality and scalability of AI ideas given more compute. The 2023 merger of Google Brain and DeepMind further solidified this collaboration, bringing all of Google's major AI research efforts under one roof. These advancements represent a significant shift in the AI field, making the potential of AI increasingly difficult to ignore.

    • Self-playing algorithms discover new knowledge in complex domains: Recent AI advancements like AlphaZero and AlphaFold have shown that self-playing algorithms can surpass human expertise in complex domains, discovering new strategies and knowledge.

      Recent advancements in artificial intelligence, specifically AlphaZero and AlphaFold, have shown that such systems can discover new knowledge and strategies in complex domains, surpassing human expertise. These methods, which combine deep reinforcement learning and neural networks, can be parallelized and scaled up on conventional computing infrastructure. AlphaZero surpassed the capabilities of AlphaGo after just one day of self-play, and produced moves in Go that human experts initially judged to be mistakes but later recognized as brilliant discoveries. AlphaFold, for its part, tackled the long-standing challenge of protein folding, beginning as a hackathon experiment and eventually leading to the public release of predicted structures for some 200 million proteins. These breakthroughs demonstrate the potential of such methods to expand human knowledge and capability.
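      To make the self-play idea concrete, here is a minimal, illustrative sketch in Python. It is not DeepMind's actual algorithm: the "game" and "policies" are toy stand-ins, and real systems like AlphaZero pair this loop with deep neural networks and Monte Carlo tree search. What it does show is the gating step at the heart of self-play, where a challenger replaces the incumbent only if it wins an evaluation match against it.

```python
import random

def make_policy(skill):
    """A toy 'policy': samples a move quality around its skill level."""
    return lambda: random.gauss(skill, 1.0)

def play_game(policy_a, policy_b):
    """Toy zero-sum game: the higher sampled move quality wins.
    Returns +1 if policy_a wins, -1 if policy_b wins, 0 on a draw."""
    a, b = policy_a(), policy_b()
    return (a > b) - (a < b)

def self_play_improve(generations=10, games_per_eval=200):
    incumbent_skill = 0.0
    for gen in range(generations):
        # Propose a perturbed challenger; in the real algorithm this would be
        # a network updated by gradient descent on fresh self-play data.
        challenger_skill = incumbent_skill + random.gauss(0.2, 0.1)
        incumbent = make_policy(incumbent_skill)
        challenger = make_policy(challenger_skill)
        # Evaluate the challenger against the incumbent over many games.
        score = sum(play_game(challenger, incumbent)
                    for _ in range(games_per_eval))
        if score > 0:  # challenger wins the match and becomes the new incumbent
            incumbent_skill = challenger_skill
        print(f"generation {gen}: incumbent skill = {incumbent_skill:.2f}")

self_play_improve()
```

      No human games are consulted at any point: all improvement comes from the system playing against its own latest version, which is why such methods can eventually move beyond established human strategies.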

    • Exploring the Risks of Advanced Technologies: Advanced technologies like AlphaFold and large language models offer unprecedented benefits but also pose significant risks, requiring careful alignment and ethical considerations to ensure their safe and beneficial use for all.

      Advancements such as AlphaFold's protein-structure predictions and large language models have a massively compressive effect, achieving in a fraction of the time what once took millions of hours of human effort. However, they also carry significant risks, including misinformation, alignment failures, and unintended consequences. Suleyman, who has been concerned about these risks since DeepMind's founding in 2010, emphasizes the need to build and align these technologies safely and ethically for the benefit of everyone. Despite the concerns, the incentives to continue developing them are strong, and a moratorium is not a viable option; instead, the risks must be addressed as innovation continues. His book explores these issues in greater depth.

    • Focusing on practical risks of Artificial Capable Intelligence: Mustafa Suleyman advocates for addressing the near-term risks of ACI, such as mass misinformation and power amplification, through a modern Turing test and a proactive approach.

      While the concept of superintelligent AI and a potential intelligence explosion has captured the imagination of thinkers like Nick Bostrom and Eliezer Yudkowsky, Suleyman argues that we should focus more on the near-term, practical risks of artificial capable intelligence (ACI): mass misinformation and the amplification of power. These technologies are becoming smaller, cheaper, and more capable, and could lead to chaos if left unchecked. Suleyman proposes a modern Turing test that evaluates what an AI can actually accomplish rather than merely how convincingly it can converse, and encourages a proactive approach to addressing the risks. He is optimistic about the potential of technology to create value and reduce suffering, but acknowledges that it comes with risks and that we must consciously attend to the downsides. The "coming wave" of his book's title consists of general-purpose technologies, like electricity, that enable other technologies, spread far and wide, and get steadily cheaper. Overall, his message is one of caution and of the need for checks and balances as we develop and deploy artificial capable intelligence.

    • Technological Revolution with Exponential Growth: This revolution could lead to unprecedented productivity, meritocracy, and cultural, political, and economic changes, but also potential labor disruption.

      We are on the brink of a technological revolution in which intelligence and life itself are subject to exponential growth, leading to widespread access to advanced tools and resources. This could result in unprecedented productivity and meritocracy, but also labor disruption. Suleyman argues that this differs from previous technological advances because we are dealing with a technology that could truly replace human intelligence. It's important to consider the implications of this shift, as it could lead to significant cultural, political, and economic changes. The challenge will be to navigate these changes and ensure that everyone benefits from the advancement.

    • The Impact of AI on White-Collar Jobs: AI's advancement temporarily augments human intelligence, but we must consider long-term implications and potential downsides, and have open conversations about managing the consequences effectively.

      The advancement of AI and automation no longer comes as a surprise, and it is now primarily targeting higher-cognitive, white-collar jobs. While some argue that this will lead to more wealth creation and new opportunities, others are skeptical. The trajectory of AI development suggests that it is only temporarily augmenting human intelligence, and we need to consider the long-term implications of powerful, cheap, and widely proliferated systems. The incentives for nations, scientists, and businesses to explore and invent are strong, but we must also address the potential downsides. It's essential to have open conversations about these issues rather than being averse to pessimistic perspectives. The consequences of AI's spread are massive, and we need to think about how to manage them effectively.

    • Predicting and preparing for the future of technology: It's essential to anticipate technology's future, work towards containment, and ensure open-source development for accountability and safety.

      It's crucial to make predictions about the future of technology, even if they might be wrong, and work towards mitigation and adaptation. The concept of containment, which is the ability to keep technologies within human control, is essential to prevent catastrophic outcomes. However, it seems the digital genie is already out of the bottle, and powerful models are being developed and used in the wild. It's not too late, though. The more these models are open-source and scrutinized, the better we can hold them accountable and ensure they remain safe. Sam Altman's philosophy of open development aligns with this approach, allowing for learning and progress as we get closer to building something that requires safety measures. We must remain humble about the practical realities of technology's emergence.

    • The idea of hiding a powerful AI is outdated: AI capabilities are rapidly becoming open source, raising questions about accountability and the exponential increase in compute and model size.

      The idea of creating a powerful AI and keeping it hidden in a box is naive and outdated. Once an idea or technology is invented, it spreads rapidly, especially in our digitized world. Models comparable to GPT-3, which was cutting-edge just a few years ago, are now available to the public at a fraction of the cost. This trend is expected to continue, and the capabilities of today's frontier models will likely be available as open source within the next few years. This raises important questions about how we hold accountable those who develop these mega-models, whether open source or closed. The growth in compute is also striking: the compute used to train frontier models has increased roughly tenfold each year for the last ten years, far outpacing Moore's Law. These developments bring great potential, but also demand serious consideration of their implications and of how to ensure they are used responsibly.
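      For a sense of scale, here is a back-of-the-envelope comparison of the episode's tenfold-per-year figure against Moore's Law, taking the conventional doubling period of roughly two years; the numbers are illustrative, not measurements:

```python
# Compare the claimed growth in AI training compute (about 10x per year)
# with Moore's Law (a doubling roughly every 2 years) over one decade.
years = 10
ai_compute_growth = 10 ** years        # 10x per year, compounded for a decade
moores_law_growth = 2 ** (years / 2)   # one doubling every ~2 years

print(f"AI training compute: ~{ai_compute_growth:.0e}x over {years} years")
print(f"Moore's Law:         ~{moores_law_growth:.0f}x over {years} years")
# ~1e10x versus ~32x: a gap of more than eight orders of magnitude.
```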

    Recent Episodes from Making Sense with Sam Harris

    #372 — Life & Work

    Sam Harris speaks with George Saunders about his creative process. They discuss George’s involvement with Buddhism, the importance of kindness, psychedelics, writing as a practice, the work of Raymond Carver, the problem of social media, our current political moment, the role of fame in American culture, Wendell Berry, fiction as a way of exploring good and evil, The Death of Ivan Ilyich, missed opportunities in ordinary life, what it means to be a more loving person, his article “The Incredible Buddha Boy,” the prison of reputation, Tolstoy, and other topics.

    If the Making Sense podcast logo in your player is BLACK, you can SUBSCRIBE to gain access to all full-length episodes at samharris.org/subscribe.


    Learning how to train your mind is the single greatest investment you can make in life. That’s why Sam Harris created the Waking Up app. From rational mindfulness practice to lessons on some of life’s most important topics, join Sam as he demystifies the practice of meditation and explores the theory behind it.

    #371 — What the Hell Is Happening?

    Sam Harris speaks to Bill Maher about the state of the world. They discuss the aftermath of October 7th, the cowardice and confusion of many celebrities, gender apartheid, the failures of the Biden campaign, Bill’s relationship to his audience, the differences between the left and right, Megyn Kelly, loss of confidence in the media, expectations for the 2024 election, the security concerns of old-school Republicans, the prospect of a second Trump term, totalitarian regimes, functioning under medical uncertainty, Bill’s plan to stop doing stand-up (maybe), looking back on his career, his experience of fame, Jerry Seinfeld, and other topics.


    #370 — Gender Apartheid and the Future of Iran

    In today’s housekeeping, Sam explains his digital business model. He and Yasmine Mohammed (co-host) then speak with Masih Alinejad about gender apartheid in Iran. They discuss the Iranian revolution, the hypocrisy of Western feminists, the morality police and the significance of the hijab, the My Stealthy Freedom campaign, kidnapping and assassination plots against Masih, lack of action from the U.S. government, the effect of sanctions, the cowardice of Western journalists, the difference between the Iranian population and the Arab street, the unique perspective of Persian Jews, Islamism and immigration, the infiltration of universities, and other topics.


    #369 — Escaping Death

    Sam Harris speaks with Sebastian Junger about danger and death. They discuss Sebastian's career as a journalist in war zones, the connection between danger and meaning, his experience of nearly dying from a burst aneurysm in his abdomen, his lingering trauma, the concept of "awe," psychedelics, near-death experiences, atheism, psychic phenomena, consciousness and the brain, and other topics.


    #368 — Freedom & Censorship

    Sam Harris speaks with Greg Lukianoff about free speech and cancel culture. They discuss the origins of political correctness, free speech and its boundaries, the bedrock principle of the First Amendment, technology and the marketplace of ideas, epistemic anarchy, social media and cancellation, comparisons to McCarthyism, self-censorship by professors, cancellation from the Left and Right, justified cancellations, the Hunter Biden laptop story, how to deal with Trump in the media, the state of higher education in America, and other topics.


    #366 — Urban Warfare 2.0

    Sam Harris speaks with John Spencer about the reality of urban warfare and Israel's conduct in the war in Gaza. They discuss the nature of the Hamas attacks on October 7th, what was most surprising about the Hamas videos, the difficulty in distinguishing Hamas from the rest of the population, combatants as a reflection of a society's values, how many people have been killed in Gaza, the proportion of combatants and noncombatants, the double standards to which the IDF is held, the worst criticism that can be made of Israel and the IDF, intentions vs results, what is unique about the war in Gaza, Hamas's use of human shields, what it would mean to defeat Hamas, what the IDF has accomplished so far, the destruction of the Gaza tunnel system, the details of underground warfare, the rescue of hostages, how noncombatants become combatants, how difficult it is to interpret videos of combat, what victory would look like, the likely aftermath of the war, war with Hezbollah, Iran's attack on Israel, what to do about Iran, and other topics.


    #365 — Reality Check

    Sam Harris begins by remembering his friendship with Dan Dennett. He then speaks with David Wallace-Wells about the shattering of our information landscape. They discuss the false picture of reality produced during Covid, the success of the vaccines, how various countries fared during the pandemic, our preparation for a future pandemic, how we normalize danger and death, the current global consensus on climate change, the amount of warming we can expect, the consequence of a 2-degree Celsius warming, the effects of air pollution, global vs local considerations, Greta Thunberg and climate catastrophism, growth vs degrowth, market forces, carbon taxes, the consequences of political stagnation, the US national debt, the best way to attack the candidacy of Donald Trump, and other topics.


    #364 — Facts & Values

    Sam Harris revisits the central argument he made in his book, The Moral Landscape, about the reality of moral truth. He discusses the way concepts like “good” and “evil” can be thought about objectively, the primacy of our intuitions of truth and falsity, and the unity of knowledge.


    #363 — Knowledge Work

    Sam Harris speaks with Cal Newport about our use of information technology and the cult of productivity. They discuss the state of social media, the "academic-in-exile effect," free speech and moderation, the effect of the pandemic on knowledge work, slow productivity, the example of Jane Austen, managing up in an organization, defragmenting one's work life, doing fewer things, reasonable deadlines, trading money for time, finding meaning in a post-scarcity world, the anti-work movement, the effects of artificial intelligence on knowledge work, and other topics.


    Related Episodes

    5 Ways AI Could Destroy Humanity
    A reading of "Five ways AI might destroy the world: ‘Everyone on Earth could fall over dead in the same second’" (https://www.theguardian.com/technology/2023/jul/07/five-ways-ai-might-destroy-the-world-everyone-on-earth-could-fall-over-dead-in-the-same-second), featuring Max Tegmark, Eliezer Yudkowsky, Yoshua Bengio, and more. From The AI Breakdown, a podcast covering the most important news and discussions in AI.

    Ep. 1749 - Rogan, Musk and RFK Jr. SLAM ‘The Science’

    Joe Rogan and Elon Musk slam a covid-famous doctor who refuses to debate RFK Jr. on vaccines; Ireland moves to make free speech on trans issues illegal; and tape emerges of Donald Trump backing the trans agenda.



    WARNING By Sam Harris: ChatGPT Could Be The Start Of The End. AI "Could Destroy Us, The Internet And Democracy"
    In this episode Steven sits down with philosopher, neuroscientist, podcast host and author Sam Harris. In 2004, Sam published his first book, ‘The End of Faith’, which stayed on the New York Times bestseller list for 33 weeks and won the PEN/Martha Albrand Award for First Nonfiction. He has gone on to author five New York Times bestselling books published in over 20 languages. In 2009, Sam obtained his Ph.D. in cognitive neuroscience from the University of California, Los Angeles. In 2013, he began the ‘Waking Up’ podcast, which covers subjects from meditation to AI. Sam is also the co-founder and CEO of Project Reason, a nonprofit foundation devoted to spreading scientific knowledge and secular values in society.

    In this conversation Sam and Steven discuss: how to change people's beliefs; why he is not optimistic about AI; how to live an examined life; why you become what you pay attention to; why the mind is all you really have, moment by moment; why AI is not aligned with human wellbeing; how it is too late to turn back the progression of AI; the danger of misinformation; and why we’re going to have to abandon the internet.

    A.I.'s Inner Conflict + Nvidia Joins the Trillion-Dollar Club + Hard Questions

    A few days after a lawyer used ChatGPT to write a brief filled with made-up cases, a group of A.I. experts released a letter warning of the “risk of extinction” from the technology. But will A.I. ever be good enough to pose such a threat?

    Then, FAANG is now MAAAN, with the addition of Nvidia. Here’s how the GPU company became a trillion-dollar behemoth.

    Plus: Kevin, Casey and the New York Times tech reporter Kate Conger answer Hard Questions from listeners.

    Today’s Guest:

    • Kate Conger is a technology reporter in the San Francisco bureau of The New York Times.


    #128 Yoshua Bengio: Dissecting The Extinction Threat of AI

    Yoshua Bengio, the legendary AI expert, joins us for Episode 128 of the Eye on AI podcast. In this episode, we delve into the unnerving question: could the rise of a superhuman AI signal the downfall of humanity as we know it?

    Join us as we embark on an exploration of the existential threat posed by superhuman AI, leaving no stone unturned. We dissect the Future of Life Institute’s role in overseeing large language model development, as well as the sobering warnings issued by the Centre for AI Safety regarding artificial general intelligence. The stakes have never been higher, and we uncover the pressing need for action.

    Prepare to confront the disconcerting notion of society’s gradual disempowerment and an ever-increasing dependency on AI. We shed light on the challenges of extricating ourselves from this intricate web, where pulling the plug on AI seems almost impossible. Brace yourself for a thought-provoking discussion on the potential psychological effects of realizing that our relentless pursuit of AI advancement may inadvertently jeopardize humanity itself.

    In this episode, we dare to imagine a future where deep learning amplifies system-2 capabilities, forcing us to develop countermeasures and regulations to mitigate associated risks.

    We grapple with the possibility of leveraging AI to combat climate change, while treading carefully to prevent catastrophic outcomes.

    But that’s not all. We confront the notion of AI systems acting autonomously, highlighting the critical importance of stringent regulation surrounding their access and usage.

     

    (00:00) Preview

    (00:42) Introduction

    (03:30) Yoshua Bengio's essay on AI extinction  

    (09:45) Use cases for dangerous uses of AI  

    (12:00) Why are AI risks only happening now?

    (17:50) Extinction threat and fear with AI & climate change

    (21:10) Super intelligence and the concerns for humanity

    (15:02) Yoshua Bengio's research in AI safety

    (29:50) Are corporations a form of artificial intelligence?

    (31:15) Extinction scenarios by Yoshua Bengio

    (37:00) AI agency and AI regulation

    (40:15) Who controls AI for the general public?

    (45:11) The AI debate in the world  

     

    Craig Smith Twitter: https://twitter.com/craigss

    Eye on A.I. Twitter: https://twitter.com/EyeOn_AI