
    superintelligent ai

    Explore "superintelligent ai" with insightful episodes like "Coming soon: Superintelligent AI", "Regulatory Capture? Meta AI Chief Accuses OpenAI, Anthropic of Stoking Fears", "Forget Alignment, Here's Why Every AI Needs an Individual 'Soul'", "The Most Frightening Article I’ve Ever Read (Ep 1988)", and "Making Sense of Artificial Intelligence | Episode 1 of The Essential Sam Harris", from podcasts like "Behind the Money", "The AI Breakdown: Daily Artificial Intelligence News and Discussions", "The Dan Bongino Show", and "Making Sense with Sam Harris" — and more!

    Episodes (8)

    Coming soon: Superintelligent AI


    In a new series of Tech Tonic, FT journalists Madhumita Murgia and John Thornhill look at the concerns around the rise of artificial intelligence. Will superintelligent AI bring existential risk, or a new renaissance? Would it be ethical to build conscious AI? How intelligent are these machines anyway? The new season of Tech Tonic from the Financial Times drops in mid-November.


    Presented by Madhumita Murgia and John Thornhill. The senior producer is Edwin Lane and the producer is Josh Gabert-Doyon. Executive produced by Manuela Saragosa. Sound design by Breen Turner and Samantha Giovinco. Original music by Metaphor Music. The FT’s head of audio is Cheryl Brumley.



    Hosted on Acast. See acast.com/privacy for more information.


    Regulatory Capture? Meta AI Chief Accuses OpenAI, Anthropic of Stoking Fears

    The war of words between AI safety advocates on one side and AI open-source advocates on the other has grown increasingly contentious. This week, Meta's Chief AI Scientist Yann LeCun accused the leaders of Google DeepMind, OpenAI, and Anthropic of stoking fears around AI in order to provoke a regulatory response that would benefit their businesses and hurt open-source competitors.

    Today's sponsors: Listen to the chart-topping podcast 'web3 with a16z crypto' wherever you get your podcasts or here: https://link.chtbl.com/xz5kFVEK?sid=AIBreakdown

    Interested in the opportunity mentioned in today's show? jobs@breakdown.network

    ABOUT THE AI BREAKDOWN
    The AI Breakdown helps you understand the most important news and discussions in AI.
    Subscribe to The AI Breakdown newsletter: https://theaibreakdown.beehiiv.com/subscribe
    Subscribe to The AI Breakdown on YouTube: https://www.youtube.com/@TheAIBreakdown
    Join the community: bit.ly/aibreakdown
    Learn more: http://breakdown.network/

    Forget Alignment, Here's Why Every AI Needs an Individual "Soul"

    A reading of David Brin's "Give Every AI a Soul - or Else": https://www.wired.com/story/give-every-ai-a-soul-or-else/

    The Most Frightening Article I’ve Ever Read (Ep 1988)

    In this episode, I address the single most disturbing article I’ve ever read, on the ominous threat of out-of-control artificial intelligence. The threat is here.

    News picks:
    The article about the dangers of AI that people are talking about.
    An artificial intelligence program plots the destruction of humankind.
    More information surfaces about the FBI spying scandal on Christians.
    San Francisco Whole Foods closes only a year after opening.
    An important piece about the parallel economy and the Second Amendment.

    Copyright Bongino Inc. All rights reserved. Learn more about your ad choices. Visit podcastchoices.com/adchoices

    Making Sense of Artificial Intelligence | Episode 1 of The Essential Sam Harris


    Filmmaker Jay Shapiro has produced a new series of audio documentaries, exploring the major topics that Sam has focused on over the course of his career.

    Each episode weaves together original analysis, critical perspective, and novel thought experiments with some of the most compelling exchanges from the Making Sense archive. Whether you are new to a particular topic, or think you have your mind made up about it, we think you’ll find this series fascinating.

    In this episode, we explore the landscape of Artificial Intelligence. We’ll listen in on Sam’s conversation with decision theorist and artificial-intelligence researcher Eliezer Yudkowsky, as we consider the potential dangers of AI – including the control problem and the value-alignment problem – as well as the concepts of Artificial General Intelligence, Narrow Artificial Intelligence, and Artificial Super Intelligence.

    We’ll then be introduced to philosopher Nick Bostrom’s “Genies, Sovereigns, Oracles, and Tools,” as physicist Max Tegmark outlines just how careful we need to be as we travel down the AI path. Computer scientist Stuart Russell will then dig deeper into the value-alignment problem and explain its importance.
     
    We’ll hear from former Google CEO Eric Schmidt about the geopolitical realities of AI terrorism and weaponization. We’ll then touch on the topic of consciousness as Sam and psychologist Paul Bloom turn the conversation to the ethical and psychological complexities of living alongside humanlike AI. Psychologist Alison Gopnik then reframes the general concept of intelligence to help us wonder whether the kinds of systems we’re building using “deep learning” are really marching us towards our superintelligent overlords.
     
    Finally, physicist David Deutsch will argue that many value-alignment fears about AI are based on a fundamental misunderstanding about how knowledge actually grows in this universe.
     

    From the Vault: The Great Basilisk


    Behold the Great Basilisk, the crowned monster whose mere glance can kill a mortal and reduce wilderness to desert ash. Medieval bestiaries attest to its might, but today some futurists dread its name as an all-powerful, malicious artificial intelligence. Will it trap all those who oppose it in a digital prison of eternal torment? In this episode of Stuff to Blow Your Mind, Robert Lamb and Joe McCormick consider the horror of Roko’s Basilisk. (Originally published 10/9/2018)

    Learn more about your ad-choices at https://www.iheartpodcastnetwork.com

    See omnystudio.com/listener for privacy information.

    Future Consequences

    Original broadcast date: September 15, 2017. From data collection to gene editing to AI, what we once considered science fiction is now becoming reality. This hour, TED speakers explore the future consequences of our present actions. Guests include designer Anab Jain, futurist Juan Enriquez, biologist Paul Knoepfler, and neuroscientist and philosopher Sam Harris.

    Learn more about sponsor message choices: podcastchoices.com/adchoices

    NPR Privacy Policy

    Machine God: Artificial Superintelligence


    Voltaire famously postulated that if God did not exist, it would be necessary to invent him. As humanity approaches the technological singularity, this statement takes on new meaning. In this episode of Stuff to Blow Your Mind, Robert and Joe contemplate the nature of gods and what it would mean to create an artificial superintelligence.
