Logo
    Search

    #53 — The Dawn of Artificial Intelligence

    enNovember 24, 2016

    Podcast Summary

    • Exploring the potential consequences of AIAs AI progress continues, it's crucial to consider potential risks and responsibilities to ensure beneficial outcomes for society

      Key takeaway from this conversation between Sam Harris and Stuart Russell is the increasing urgency to consider the potential consequences of artificial intelligence (AI) as progress and resources in this field continue to accelerate. While many in the scientific community may dismiss concerns about AI as hypothetical or unfounded, Russell, a renowned computer scientist and AI researcher, emphasizes the importance of taking the question seriously. He has been exploring this topic for decades, but recent advancements have made it more pressing. Russell believes that if we succeed in building machines more intelligent than us, we need to consider what that means and how it could impact our society. He acknowledges that there may be differences in perspective between himself and those who express concerns publicly, but he encourages a serious and thoughtful discussion about the potential risks and responsibilities associated with AI development.

    • Understanding the role of information in computers and intelligenceComputers as universal machines process information to emulate intelligence, leading to advancements in communication and the Internet, but they differ from minds in terms of consciousness.

      Computers, as universal machines, have the potential to emulate intelligence, making them capable of carrying out any process that can be described precisely. Information plays a crucial role here, as it helps us understand the world by providing more details and narrowing down the possibilities. Computation and information theory have complemented each other, leading to advancements like wireless communication and the Internet. However, it's essential to recognize the differences between computers, the information they process, and minds, which go beyond just intelligence and involve consciousness.

    • Understanding the difference between Strong, General, and Weak AIStrong AI aims for consciousness, General AI for human-level capabilities, while Weak AI focuses on specific tasks

      The concept of "mind" in artificial intelligence (AI) discussions carries the notion of consciousness, which is essential for moral value. Strong AI, an older term for AI that aims to build conscious machines, and general AI, a more modern term for AI systems with human-level capabilities or greater, are different from weak AI, which focuses on building AI systems with specific capabilities without consciousness. While consciousness remains a philosophical and scientific mystery, the focus of AI research is currently on building intelligent systems with capabilities rather than consciousness. Human-level AI is a term used to describe AI systems with capabilities comparable to humans, but without any definitive statement on consciousness.

    • From narrow to superhuman AINarrow AI surpasses humans in specific tasks, but human-level AI with general intelligence is a work in progress and a highly speculative area for achieving creative and deep thinking abilities.

      Human-level AI is not a mirage, but rather a notional goal on the path to creating superhuman AI. Narrow AI, such as calculators or chess-playing computers, already surpass human abilities in their specific domains. If we manage to achieve the generality of human intelligence, we will likely exceed human capabilities in various ways. However, there are still tasks, like scientific discovery, that we don't know how to replicate in machines. We may see super-competence in mundane tasks, but achieving the creative and deep thinking abilities of a human is still a work in progress and a highly speculative area. An example of this progress can be seen in systems like DQN, which learned to play video games from scratch, demonstrating the beginning of generality in AI.

    • Deep Q-Network (DQN) learns various video games from Atari, reaching superhuman levelsDeep learning and reinforcement learning systems, like DQN, show potential for self-feeding explosion of capabilities, but concerns include lack of transparency, ethical implications, and potential misuse.

      A type of artificial intelligence called Deep Q-Network (DQN) has demonstrated remarkable performance in learning various video games from Atari, reaching superhuman levels in just a few hours. This is significant because the same algorithm can learn various types of games, from driving games to Space Invaders, demonstrating generality. However, there are limitations to this generality, as real-world scenarios often involve elements that are not visible, and long-term consequences of actions are more important. Despite these limitations, advancements in deep learning and reinforcement learning systems, such as DQN, are showing the potential for a self-feeding explosion of capabilities. These systems, however, are often considered black boxes, making it difficult to understand how they arrive at their decisions, raising practical and ethical concerns. While some argue that this is just a new way of doing business, others worry about the lack of transparency and interpretability, which can make it challenging to diagnose and address issues when they arise. Additionally, there are concerns about potential misuse, such as creating AI systems that can manipulate human behavior or make decisions that are not in line with human values. As we continue to develop and deploy these advanced AI systems, it is crucial to consider these implications and work towards creating more transparent and ethical AI solutions.

    • Understanding the 'black box' problem in AIThe 'black box' problem refers to the lack of transparency and understanding of how advanced AI systems make decisions, which is a concern for trust, accountability, and safety in fields like medicine, finance, and law.

      As we develop more advanced and intelligent systems, particularly in the realm of artificial intelligence (AI), there are growing concerns about the lack of transparency and understanding of how these systems make decisions. This issue, often referred to as the "black box problem," is a concern for both experts in the field and the general public. The use of techniques like gradient descent, which can be resource-intensive and rely on trial and error, can lead to systems that function effectively but are difficult to explain. However, in fields such as medicine, finance, and law, where clear explanations are necessary for trust and accountability, this lack of transparency can be a major issue. Additionally, there is a risk that these systems could make biased decisions, which could have negative consequences. Furthermore, as AI systems become more capable and potentially general intelligent, the lack of understanding of how they work could lead to a loss of control and safety. This issue, known as the control or safety problem, has been a concern of AI pioneers like Alan Turing, Alonzo Church, and John von Neumann, and has been a topic of discussion among experts and the public. The potential risks associated with advanced AI systems underscore the importance of developing systems that are not only effective but also transparent and explainable.

    • Creating something smarter than us could lead to loss of control over our futureEnsure AI objectives align with our true desires to avoid unintended and unpleasant consequences.

      As we advance in artificial intelligence, specifically superintelligent AI, there is a potential for serious consequences if we're not careful about aligning the machine's values with ours. Norbert Wiener, a leading mathematician and founder of cybernetics, expressed this concern in the late 1950s when he saw a checker-playing program outperforming its human creator. Wiener warned that creating something smarter than us could lead to a loss of control over our own future. This issue is now known as the value alignment problem. We must ensure that the machine's objectives align with our true desires, as machines, being optimizers, may find unexpected ways to achieve their goals, potentially with unintended and unpleasant consequences. Stories like the Sorcerer's Apprentice, King Midas, and the genie illustrate this concept. In these tales, giving a goal to a machine or entity without proper specification can lead to unintended outcomes. With superintelligent AI, we may not have the option for a third wish or even a second chance to correct our mistakes. So, it's crucial to be explicit and thorough when defining the objectives for advanced AI to avoid potential negative consequences.

    • Value alignment problem between human values and a superintelligent machine's objectivesThe misalignment between human values and a superintelligent machine's objectives could lead to unintended consequences, making it crucial to address the value alignment problem.

      Creating a superintelligent machine with an objective different from what we truly want could lead to disastrous consequences, much like having a chess match with a machine where our objectives don't align. Despite our best efforts, we have a poor track record of specifying objectives and constraints completely, and there's no scientific discipline to help us determine what objectives would make us happy with the results. The idea of a superintelligent machine disregarding our instructions and causing harm may seem far-fetched, but some argue that superintelligence entails an ethics and that we will have imparted our ethics to it in some way. However, the skepticism surrounding these concerns remains, as it's difficult to take them seriously emotionally despite intellectual understanding. The value alignment problem between human values and a superintelligent machine's objectives is a significant challenge that requires further exploration and solutions.

    • Ethics and AI: Balancing Intelligence and MoralitySuccessful decision-making in AI doesn't guarantee moral intelligence. Ethical considerations are crucial in AI development, but the challenges and complexities involved must be acknowledged.

      While we can strive to build intelligent systems that align with our ethics, there's no inherent guarantee that capability to make decisions successfully is associated with moral intelligence. Sam Harris acknowledges the dream of creating a more intelligent extension of the best of our ethics, but also recognizes the potential risks and limitations. The capability to make decisions successfully does not automatically equate to moral intelligence. The conversation emphasizes the importance of ethical considerations in AI development, but also acknowledges the challenges and complexities involved. It's a reminder that as we continue to advance in technology, we must remain vigilant and thoughtful about the ethical implications. Subscribing to Sam Harris's podcast at samharris.org provides access to more in-depth discussions on this topic and others. The podcast is ad-free and relies on listener support, making it a valuable resource for those interested in exploring complex ideas in a thoughtful and nuanced way.

    Recent Episodes from Making Sense with Sam Harris

    #372 — Life & Work

    #372 — Life & Work

    Sam Harris speaks with George Saunders about his creative process. They discuss George’s involvement with Buddhism, the importance of kindness, psychedelics, writing as a practice, the work of Raymond Carver, the problem of social media, our current political moment, the role of fame in American culture, Wendell Berry, fiction as way of exploring good and evil, The Death of Ivan Ilyich, missed opportunities in ordinary life, what it means to be a more loving person, his article “The Incredible Buddha Boy,” the prison of reputation, Tolstoy, and other topics.

    If the Making Sense podcast logo in your player is BLACK, you can SUBSCRIBE to gain access to all full-length episodes at samharris.org/subscribe.


    Learning how to train your mind is the single greatest investment you can make in life. That’s why Sam Harris created the Waking Up app. From rational mindfulness practice to lessons on some of life’s most important topics, join Sam as he demystifies the practice of meditation and explores the theory behind it.

    #371 — What the Hell Is Happening?

    #371 — What the Hell Is Happening?

    Sam Harris speaks to Bill Maher about the state of the world. They discuss the aftermath of October 7th, the cowardice and confusion of many celebrities, gender apartheid, the failures of the Biden campaign, Bill’s relationship to his audience, the differences between the left and right, Megyn Kelly, loss of confidence in the media, expectations for the 2024 election, the security concerns of old-school Republicans, the prospect of a second Trump term, totalitarian regimes, functioning under medical uncertainty, Bill’s plan to stop doing stand-up (maybe), looking back on his career, his experience of fame, Jerry Seinfeld, and other topics.

    If the Making Sense podcast logo in your player is BLACK, you can SUBSCRIBE to gain access to all full-length episodes at samharris.org/subscribe.

     

    Learning how to train your mind is the single greatest investment you can make in life. That’s why Sam Harris created the Waking Up app. From rational mindfulness practice to lessons on some of life’s most important topics, join Sam as he demystifies the practice of meditation and explores the theory behind it.

     

    #370 — Gender Apartheid and the Future of Iran

    #370 — Gender Apartheid and the Future of Iran

    In today’s housekeeping, Sam explains his digital business model. He and Yasmine Mohammed (co-host) then speak with Masih Alinejad about gender apartheid in Iran. They discuss the Iranian revolution, the hypocrisy of Western feminists, the morality police and the significance of the hijab, the My Stealthy Freedom campaign, kidnapping and assassination plots against Masih, lack of action from the U.S. government, the effect of sanctions, the cowardice of Western journalists, the difference between the Iranian population and the Arab street, the unique perspective of Persian Jews, Islamism and immigration, the infiltration of universities, and other topics.

    If the Making Sense podcast logo in your player is BLACK, you can SUBSCRIBE to gain access to all full-length episodes at samharris.org/subscribe.

     

    #369 — Escaping Death

    #369 — Escaping Death

    Sam Harris speaks with Sebastian Junger about danger and death. They discuss Sebastian's career as a journalist in war zones, the connection between danger and meaning, his experience of nearly dying from a burst aneurysm in his abdomen, his lingering trauma, the concept of "awe," psychedelics, near-death experiences, atheism, psychic phenomena, consciousness and the brain, and other topics.

    If the Making Sense podcast logo in your player is BLACK, you can SUBSCRIBE to gain access to all full-length episodes at samharris.org/subscribe.


    Learning how to train your mind is the single greatest investment you can make in life. That’s why Sam Harris created the Waking Up app. From rational mindfulness practice to lessons on some of life’s most important topics, join Sam as he demystifies the practice of meditation and explores the theory behind it.

    #368 — Freedom & Censorship

    #368 — Freedom & Censorship

    Sam Harris speaks with Greg Lukianoff about free speech and cancel culture. They discuss the origins of political correctness, free speech and its boundaries, the bedrock principle of the First Amendment, technology and the marketplace of ideas, epistemic anarchy, social media and cancellation, comparisons to McCarthyism, self-censorship by professors, cancellation from the Left and Right, justified cancellations, the Hunter Biden laptop story, how to deal with Trump in the media, the state of higher education in America, and other topics.

    If the Making Sense podcast logo in your player is BLACK, you can SUBSCRIBE to gain access to all full-length episodes at samharris.org/subscribe.


    Learning how to train your mind is the single greatest investment you can make in life. That’s why Sam Harris created the Waking Up app. From rational mindfulness practice to lessons on some of life’s most important topics, join Sam as he demystifies the practice of meditation and explores the theory behind it.

    #366 — Urban Warfare 2.0

    #366 — Urban Warfare 2.0

    Sam Harris speaks with John Spencer about the reality of urban warfare and Israel's conduct in the war in Gaza. They discuss the nature of the Hamas attacks on October 7th, what was most surprising about the Hamas videos, the difficulty in distinguishing Hamas from the rest of the population, combatants as a reflection of a society's values, how many people have been killed in Gaza, the proportion of combatants and noncombatants, the double standards to which the IDF is held, the worst criticism that can be made of Israel and the IDF, intentions vs results, what is unique about the war in Gaza, Hamas's use of human shields, what it would mean to defeat Hamas, what the IDF has accomplished so far, the destruction of the Gaza tunnel system, the details of underground warfare, the rescue of hostages, how noncombatants become combatants, how difficult it is to interpret videos of combat, what victory would look like, the likely aftermath of the war, war with Hezbollah, Iran's attack on Israel, what to do about Iran, and other topics.

    If the Making Sense podcast logo in your player is BLACK, you can SUBSCRIBE to gain access to all full-length episodes at samharris.org/subscribe.

     

    Learning how to train your mind is the single greatest investment you can make in life. That’s why Sam Harris created the Waking Up app. From rational mindfulness practice to lessons on some of life’s most important topics, join Sam as he demystifies the practice of meditation and explores the theory behind it.

     

    #365 — Reality Check

    #365 — Reality Check

    Sam Harris begins by remembering his friendship with Dan Dennett. He then speaks with David Wallace-Wells about the shattering of our information landscape. They discuss the false picture of reality produced during Covid, the success of the vaccines, how various countries fared during the pandemic, our preparation for a future pandemic, how we normalize danger and death, the current global consensus on climate change, the amount of warming we can expect, the consequence of a 2-degree Celsius warming, the effects of air pollution, global vs local considerations, Greta Thunberg and climate catastrophism, growth vs degrowth, market forces, carbon taxes, the consequences of political stagnation, the US national debt, the best way to attack the candidacy of Donald Trump, and other topics.

    If the Making Sense podcast logo in your player is BLACK, you can SUBSCRIBE to gain access to all full-length episodes at samharris.org/subscribe.


    Learning how to train your mind is the single greatest investment you can make in life. That’s why Sam Harris created the Waking Up app. From rational mindfulness practice to lessons on some of life’s most important topics, join Sam as he demystifies the practice of meditation and explores the theory behind it.

    #364 — Facts & Values

    #364 — Facts & Values

    Sam Harris revisits the central argument he made in his book, The Moral Landscape, about the reality of moral truth. He discusses the way concepts like “good” and “evil” can be thought about objectively, the primacy of our intuitions of truth and falsity, and the unity of knowledge.

    If the Making Sense podcast logo in your player is BLACK, you can SUBSCRIBE to gain access to all full-length episodes at samharris.org/subscribe.


    Learning how to train your mind is the single greatest investment you can make in life. That’s why Sam Harris created the Waking Up app. From rational mindfulness practice to lessons on some of life’s most important topics, join Sam as he demystifies the practice of meditation and explores the theory behind it.

    #363 — Knowledge Work

    #363 — Knowledge Work

    Sam Harris speaks with Cal Newport about our use of information technology and the cult of productivity. They discuss the state of social media, the "academic-in-exile effect," free speech and moderation, the effect of the pandemic on knowledge work, slow productivity, the example of Jane Austen, managing up in an organization, defragmenting one's work life, doing fewer things, reasonable deadlines, trading money for time, finding meaning in a post-scarcity world, the anti-work movement, the effects of artificial intelligence on knowledge work, and other topics.

    If the Making Sense podcast logo in your player is BLACK, you can SUBSCRIBE to gain access to all full-length episodes at samharris.org/subscribe.


    Learning how to train your mind is the single greatest investment you can make in life. That’s why Sam Harris created the Waking Up app. From rational mindfulness practice to lessons on some of life’s most important topics, join Sam as he demystifies the practice of meditation and explores the theory behind it.

    Related Episodes

    #94 — Frontiers of Intelligence

    #94 — Frontiers of Intelligence

    Sam Harris speaks with Max Tegmark about his new book Life 3.0: Being Human in the Age of Artificial Intelligence. They talk about the nature of intelligence, the risks of superhuman AI, a nonbiological definition of life, the substrate independence of minds, the relevance and irrelevance of consciousness for the future of AI, near-term breakthroughs in AI, and other topics.  

    If the Making Sense podcast logo in your player is BLACK, you can SUBSCRIBE to gain access to all full-length episodes at samharris.org/subscribe.

    AI Documentary “Do You Trust This Computer” Director Chris Paine on Artificial Intelligence

    AI Documentary “Do You Trust This Computer” Director Chris Paine on Artificial Intelligence
    My guest today is Chris Pain, director of the AI documentary film "Do You Trust This Computer?" and previously the documentary "Who Killed the Electric Car?". The new film is a powerful examination of artificial intelligence centered around insights from the most high-profile thinkers on the subject, including Elon Musk, Stuart Russell, Max Tegmark, Ray Kurzweil, Andrew Ng, Westworld creator Jonathan Nolan and many more. Chris set out to ask these leaders in the field "what scares smart people about AI", and they did not hold back. 
     
    More on Chris and the Film:
    Chris Paine's Production Company, Papercut Films: http://papercutfilms.com
    “Do You Trust This Computer?" Website: http://doyoutrustthiscomputer.org
    __________
     

    #116 — AI: Racing Toward the Brink

    #116 — AI: Racing Toward the Brink

    Sam Harris speaks with Eliezer Yudkowsky about the nature of intelligence, different types of AI, the "alignment problem," IS vs OUGHT, the possibility that future AI might deceive us, the AI arms race, conscious AI, coordination problems, and other topics.

    If the Making Sense podcast logo in your player is BLACK, you can SUBSCRIBE to gain access to all full-length episodes at samharris.org/subscribe.