
    AI Is Moving Fast. We Need Laws that Will Too.

    September 13, 2024

    Podcast Summary

    • Accountability in Tech: New liability laws for tech companies are essential to hold them accountable for harmful products, especially AI, and protect public interest over profits.

      Technology companies need new laws to hold them accountable for their products, especially as AI spreads. Current regulations often let these companies avoid responsibility for harmful impacts. Shifting the focus toward liability for defective products, as in traditional industries, would give the public stronger protections. That shift is crucial for fostering responsible innovation and for grappling with the complexities of modern technology. A legal framework that promotes safety from the ground up is needed to address addiction, misinformation, and the other challenges posed by the tech industry, and to ensure companies prioritize societal well-being over profit alone.

    • AI Liability: AI and social media are complex products that can cause harm. New laws are needed to clarify liability and protect consumers, as current courts may not adapt quickly enough to new technology.

      Artificial intelligence (AI) and social media are complex products that can harm people and businesses, and courts may struggle to adapt to the rapid pace of technological change. That is why new federal laws are proposed to provide clarity. Historically, product liability law helped ensure trust in products, whether or not they took tangible form. For example, if a customer receives incorrect information from a chatbot, the responsible company may face liability for its technology's mistakes. Updating these laws is crucial to protect consumers and to encourage businesses to innovate safely. Addressing these issues proactively helps ensure that individuals and businesses are treated fairly and that accountability is maintained in a rapidly evolving digital landscape.

    • Accountability in AI: AI developers should be accountable for misuse of their tools, like deep fakes, to encourage safer design practices, similar to how tobacco companies were held responsible for secondhand smoke harm.

      Emerging technologies, especially AI, carry risks of misuse and scams, like deep fakes that can cause serious financial harm. Holding developers responsible would encourage safer designs, much as cigarette companies were eventually held liable for harming non-smokers with secondhand smoke. Shifting accountability in this way would push developers to build more secure products, ultimately protecting users from harm. Like a hammer, an AI tool can be misused to cause injury; companies must take steps to minimize that risk, and should face liability if people are harmed when they do not. This approach would drive innovation toward safer technology and benefit society overall.

    • AI Accountability: A duty of care for AI companies promotes innovation for safety while holding them accountable for harm, balancing technological growth with social responsibility.

      Implementing a duty of care for AI developers encourages them to innovate solutions for safety and accountability. By focusing on the responsibilities of these companies instead of restricting their freedom, we can create better products that safeguard society. This approach promotes collaboration with regulators and ensures that firms consider the potential harm caused by their technologies, as they would be financially accountable for any damage. As a result, AI companies would be motivated to provide adequate warnings and protections, leading to a healthier interaction with their products. Ultimately, this framework aims to balance innovation with social responsibility, fostering an environment where developers strive to create safer outcomes.

    • Accountability Shift: Empowering legal teams in tech companies to prioritize safety can drive real change, moving safety from a PR stunt to a central business strategy, especially as technology rapidly advances beyond current laws.

      As technology evolves rapidly, laws often lag behind, leaving new risks and harms unregulated. This gap requires tech companies to act responsibly on their own initiative. Empowering company attorneys to enforce safety standards means the looming threat of lawsuits can drive meaningful change, making safety integral to business strategy rather than merely a public relations concern, as it is too often treated today. A strong legal framework instills real accountability in tech firms, ensuring they prioritize safe and responsible practices. Businesses would then invest in genuine safety measures, stabilizing their operations for the long run and truly safeguarding society from technology's adverse impacts.

    • AI Accountability: New AI legislation aims to hold developers accountable for serious risks, but clarity in legal responsibilities is essential. A federal liability policy is proposed, with efforts underway to secure bipartisan support for effective AI regulation.

      Over the years, companies have often avoided recalls, opting instead to settle lawsuits, but now, with the widespread use of AI, there’s growing concern about potential legal liabilities. New legislation, like California's SB1047, focuses on serious risks from AI, particularly mass harm to infrastructure. However, it does not clearly define the responsibilities of AI developers. For effective change, we need clear legal standards that allow individuals to fight back against companies. A federal liability policy has been proposed to encourage responsible AI use, and the next steps involve drafting legislation and finding bipartisan sponsors. There’s hope that bipartisan support will come from a shared desire for fairness and the urgent need to address these issues, especially as concerns about technology's impact continue to grow.

    • AI Liability Impact: Liability standards for AI will promote safer practices and boost responsible adoption of AI technologies by businesses.

      Implementing liability standards for AI could greatly influence how AI technologies are developed and used. Companies would be motivated to prioritize safety and responsible deployment of AI, rather than just racing to launch products. This change could lead to better innovation, enhance public trust, and encourage businesses to adopt AI technologies more confidently, resulting in the U.S. setting a global example.

    • AI Liability Framework: A new liability framework for AI aims to address current and future harms, allowing for better regulations and collaboration among tech companies to ensure safer technology use.

      Introducing a liability framework for AI can help manage present and future harms caused by technology. This approach allows us to pause and rethink existing laws, giving policymakers the chance to create effective regulations. Although this won't cover every AI risk, it's a crucial first step toward managing specific injuries and fostering collaboration among major tech companies to find solutions that benefit everyone.

    Recent Episodes from Your Undivided Attention

    AI Is Moving Fast. We Need Laws that Will Too.


    AI is moving fast. And as companies race to roll out newer, more capable models with little regard for safety, the downstream risks of those models become harder and harder to counter. On this week’s episode of Your Undivided Attention, CHT’s policy director Casey Mock comes on the show to discuss a new legal framework to incentivize better AI, one that holds AI companies liable for the harms of their products.

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

    RECOMMENDED MEDIA

    The CHT Framework for Incentivizing Responsible AI Development

    Further Reading on Air Canada’s Chatbot Fiasco 

    Further Reading on the Elon Musk Deep Fake Scams 

    The Full Text of SB1047, California’s AI Regulation Bill 

    Further reading on SB1047 

    RECOMMENDED YUA EPISODES

    Former OpenAI Engineer William Saunders on Silence, Safety, and the Right to Warn

    Can We Govern AI? with Marietje Schaake

    A First Step Toward AI Regulation with Tom Wheeler

    Correction: Casey incorrectly stated the year that the US banned child labor as 1937. It was banned in 1938.

    Esther Perel on Artificial Intimacy (rerun)


    [This episode originally aired on August 17, 2023] For all the talk about AI, we rarely hear about how it will change our relationships. As we swipe to find love and consult chatbot therapists, acclaimed psychotherapist and relationship expert Esther Perel warns that another harmful “AI” is on the rise, Artificial Intimacy, and that it is depriving us of real connection. Tristan and Esther discuss how depending on algorithms can fuel alienation, and then imagine how we might design technology to strengthen our social bonds.

    RECOMMENDED MEDIA 

    Mating in Captivity by Esther Perel

    Esther's debut work on the intricacies behind modern relationships, and the dichotomy of domesticity and sexual desire

    The State of Affairs by Esther Perel

    Esther takes a look at modern relationships through the lens of infidelity

    Where Should We Begin? with Esther Perel

    Listen in as real couples in search of help bare the raw and profound details of their stories

    How’s Work? with Esther Perel

    Esther’s podcast that focuses on the hard conversations we're afraid to have at work 

    Lars and the Real Girl (2007)

    A young man strikes up an unconventional relationship with a doll he finds on the internet

    Her (2013)

    In a near future, a lonely writer develops an unlikely relationship with an operating system designed to meet his every need

    RECOMMENDED YUA EPISODES

    Big Food, Big Tech and Big AI with Michael Moss

    The AI Dilemma

    The Three Rules of Humane Tech

    Digital Democracy is Within Reach with Audrey Tang

     

    CORRECTION: Esther refers to the 2007 film Lars and the Real Doll. The title of the film is Lars and the Real Girl.
     

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

    Tech's Big Money Campaign is Getting Pushback with Margaret O'Mara and Brody Mullins


    Today, the tech industry is the second-biggest lobbying power in Washington, DC, but that wasn’t true as recently as ten years ago. How did we get to this moment? And where could we be going next? On this episode of Your Undivided Attention, Tristan and Daniel sit down with historian Margaret O’Mara and journalist Brody Mullins to discuss how Silicon Valley has changed the nature of American lobbying.

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

    RECOMMENDED MEDIA

    The Wolves of K Street: The Secret History of How Big Money Took Over Big Government - Brody’s book on the history of lobbying.

    The Code: Silicon Valley and the Remaking of America - Margaret’s book on the historical relationship between Silicon Valley and Capitol Hill

    More information on the Google antitrust ruling

    More Information on KOSPA

    More information on the SOPA/PIPA internet blackout

    Detailed breakdown of Internet lobbying from Open Secrets

     

    RECOMMENDED YUA EPISODES

    U.S. Senators Grilled Social Media CEOs. Will Anything Change?

    Can We Govern AI? with Marietje Schaake
    The Race to Cooperation with David Sloan Wilson

     

    CORRECTION: Brody Mullins refers to AT&T as having a “hundred million dollar” lobbying budget in 2006 and 2007. While we couldn’t verify the size of their budget for lobbying, their actual lobbying spend was much less than this: $27.4m in 2006 and $16.5m in 2007, according to OpenSecrets.

     

    The views expressed by guests appearing on Center for Humane Technology’s podcast, Your Undivided Attention, are their own, and do not necessarily reflect the views of CHT. CHT does not support or oppose any candidate or party for election to public office.

     

    This Moment in AI: How We Got Here and Where We’re Going


    It’s been a year and a half since Tristan and Aza laid out their vision and concerns for the future of artificial intelligence in The AI Dilemma. In this Spotlight episode, the guys discuss what’s happened since then, as funding, research, and public interest in AI have exploded, and where we could be headed next. Plus, some major updates on social media reform, including the passage of the Kids Online Safety and Privacy Act in the Senate.

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

     

    RECOMMENDED MEDIA

    The AI Dilemma: Tristan and Aza’s talk on the catastrophic risks posed by AI.

    Info Sheet on KOSPA: More information on KOSPA from FairPlay.

    Situational Awareness by Leopold Aschenbrenner: A widely cited blog from a former OpenAI employee, predicting the rapid arrival of AGI.

    AI for Good: More information on the AI for Good summit that was held earlier this year in Geneva. 

    Using AlphaFold in the Fight Against Plastic Pollution: More information on Google’s use of AlphaFold to create an enzyme to break down plastics. 

    Swiss Call For Trust and Transparency in AI: More information on the initiatives mentioned by Katharina Frey.

     

    RECOMMENDED YUA EPISODES

    War is a Laboratory for AI with Paul Scharre

    Jonathan Haidt On How to Solve the Teen Mental Health Crisis

    Can We Govern AI? with Marietje Schaake 

    The Three Rules of Humane Tech

    The AI Dilemma

     

    Clarification: Swiss diplomat Nina Frey’s full name is Katharina Frey.

     

     

    Your Undivided Attention
    August 12, 2024

    Decoding Our DNA: How AI Supercharges Medical Breakthroughs and Biological Threats with Kevin Esvelt


    AI has been a powerful accelerant for biological research, rapidly opening up new frontiers in medicine and public health. But that progress can also make it easier for bad actors to manufacture new biological threats. In this episode, Tristan and Daniel sit down with biologist Kevin Esvelt to discuss why AI has been such a boon for biologists and how we can safeguard society against the threats that AIxBio poses.

    RECOMMENDED MEDIA

    Sculpting Evolution: Information on Esvelt’s lab at MIT.

    SecureDNA: Esvelt’s free platform to provide safeguards for DNA synthesis.

    The Framework for Nucleic Acid Synthesis Screening: The Biden admin’s suggested guidelines for DNA synthesis regulation.

    Senate Hearing on Regulating AI Technology: C-SPAN footage of Dario Amodei’s testimony to Congress.

    The AlphaFold Protein Structure Database

    RECOMMENDED YUA EPISODES

    U.S. Senators Grilled Social Media CEOs. Will Anything Change?

    Big Food, Big Tech and Big AI with Michael Moss

    The AI Dilemma

    Clarification: President Biden’s executive order only applies to labs that receive funding from the federal government, not state governments.

    How to Think About AI Consciousness With Anil Seth


    Will AI ever start to think by itself? If it did, how would we know, and what would it mean?

    In this episode, Dr. Anil Seth and Aza discuss the science, ethics, and incentives of artificial consciousness. Seth is Professor of Cognitive and Computational Neuroscience at the University of Sussex and the author of Being You: A New Science of Consciousness.

    RECOMMENDED MEDIA

    Frankenstein by Mary Shelley

    A free, plain-text version of Shelley’s classic of gothic literature.

    OpenAI’s GPT4o Demo

    A video from OpenAI demonstrating GPT4o’s remarkable ability to mimic human sentience.

    You Can Have the Blue Pill or the Red Pill, and We’re Out of Blue Pills

    The NYT op-ed from last year by Tristan, Aza, and Yuval Noah Harari outlining the AI dilemma. 

    What It’s Like to Be a Bat

    Thomas Nagel’s essay on the nature of consciousness.

    Are You Living in a Computer Simulation?

    Philosopher Nick Bostrom’s essay on the simulation hypothesis.

    Anthropic’s Golden Gate Claude

    A blog post about Anthropic’s recent discovery of millions of distinct concepts within their LLM, a major development in the field of AI interpretability.

    RECOMMENDED YUA EPISODES

    Esther Perel on Artificial Intimacy

    Talking With Animals... Using AI

    Synthetic Humanity: AI & What’s At Stake

    Why Are Migrants Becoming AI Test Subjects? With Petra Molnar


    Climate change, political instability, hunger. These are just some of the forces behind an unprecedented refugee crisis that’s expected to include over a billion people by 2050. In response to this growing crisis, wealthy governments like the US and the EU are employing novel AI and surveillance technologies to slow the influx of migrants at their borders. But will this rollout stop at the border?

    In this episode, Tristan and Aza sit down with Petra Molnar to discuss how borders have become a proving ground for the sharpest edges of technology, and especially AI. Petra is an immigration lawyer and co-creator of the Migration and Technology Monitor. Her new book is “The Walls Have Eyes: Surviving Migration in the Age of Artificial Intelligence.”

    RECOMMENDED MEDIA

    The Walls Have Eyes: Surviving Migration in the Age of Artificial Intelligence

    Petra’s newly published book on the rollout of high-risk tech at the border.

    Bots at the Gate

    A report co-authored by Petra about Canada’s use of AI technology in their immigration process.

    Technological Testing Grounds

    A report authored by Petra about the use of experimental technology in EU border enforcement.

    Startup Pitched Tasing Migrants from Drones, Video Reveals

    An article from The Intercept, containing the demo for Brinc’s taser drone pilot program.

    The UNHCR

    Information about the global refugee crisis from the UN.

    RECOMMENDED YUA EPISODES

    War is a Laboratory for AI with Paul Scharre

    No One is Immune to AI Harms with Dr. Joy Buolamwini

    Can We Govern AI? With Marietje Schaake

    CLARIFICATION: The iBorderCtrl project referenced in this episode was a pilot project that was discontinued in 2019.

    Former OpenAI Engineer William Saunders on Silence, Safety, and the Right to Warn


    This week, a group of current and former employees from OpenAI and Google DeepMind penned an open letter accusing the industry’s leading companies of prioritizing profits over safety. This comes after a spate of high-profile departures from OpenAI, including co-founder Ilya Sutskever and senior researcher Jan Leike, as well as reports that OpenAI has gone to great lengths to silence would-be whistleblowers.

    The writers of the open letter argue that researchers have a “right to warn” the public about AI risks and lay out a series of principles that would protect that right. In this episode, we sit down with one of those writers: William Saunders, who left his job as a research engineer at OpenAI in February. William is now breaking the silence on what he saw at OpenAI that compelled him to leave the company and to put his name to this letter.

    RECOMMENDED MEDIA 

    The Right to Warn Open Letter 

    My Perspective On "A Right to Warn about Advanced Artificial Intelligence": A follow-up from William about the letter

     Leaked OpenAI documents reveal aggressive tactics toward former employees: An investigation by Vox into OpenAI’s policy of non-disparagement.

    RECOMMENDED YUA EPISODES

    1. A First Step Toward AI Regulation with Tom Wheeler 
    2. Spotlight on AI: What Would It Take For This to Go Well? 
    3. Big Food, Big Tech and Big AI with Michael Moss 
    4. Can We Govern AI? with Marietje Schaake

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

    War is a Laboratory for AI with Paul Scharre


    Right now, militaries around the globe are investing heavily in AI weapons and drones. From Ukraine to Gaza, weapons systems with increasing levels of autonomy are being used to kill people and destroy infrastructure, and the development of fully autonomous weapons shows little sign of slowing down. What does this mean for the future of warfare? What safeguards can we put up around these systems? And is this runaway trend toward autonomous warfare inevitable, or will nations come together and choose a different path? In this episode, Tristan and Daniel sit down with Paul Scharre to try to answer some of these questions. Paul is a former Army Ranger, the author of two books on autonomous weapons, and he helped the Department of Defense write much of its policy on the use of AI in weaponry.

    RECOMMENDED MEDIA

    Four Battlegrounds: Power in the Age of Artificial Intelligence: Paul’s book on the future of AI in war, which came out in 2023.

    Army of None: Autonomous Weapons and the Future of War: Paul’s 2018 book documenting and predicting the rise of autonomous and semi-autonomous weapons as part of modern warfare.

    The Perilous Coming Age of AI Warfare: How to Limit the Threat of Autonomous Warfare: Paul’s article in Foreign Affairs based on his recent trip to the battlefield in Ukraine.

    The night the world almost almost ended: A BBC documentary about Stanislav Petrov’s decision not to start nuclear war.

    AlphaDogfight Trials Final Event: The full simulated dogfight between an AI and human pilot. The AI pilot swept, 5-0.

    RECOMMENDED YUA EPISODES

    1. The AI ‘Race’: China vs. the US with Jeffrey Ding and Karen Hao
    2. Can We Govern AI? with Marietje Schaake
    3. Big Food, Big Tech and Big AI with Michael Moss
    4. The Invisible Cyber-War with Nicole Perlroth

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

    AI and Jobs: How to Make AI Work With Us, Not Against Us With Daron Acemoglu


    Tech companies say that AI will lead to massive economic productivity gains. But as we know from the first digital revolution, that’s not what happened. Can we do better this time around?

    RECOMMENDED MEDIA

    Power and Progress by Daron Acemoglu and Simon Johnson: Professor Acemoglu co-authored a bold reinterpretation of economics and history that will fundamentally change how you see the world

    Can we Have Pro-Worker AI? Professor Acemoglu co-authored this paper about redirecting AI development onto the human-complementary path

    Rethinking Capitalism: In Conversation with Daron Acemoglu - The Wheeler Institute for Business and Development hosted Professor Acemoglu to examine how technology affects the distribution and growth of resources while being shaped by economic and social incentives

    RECOMMENDED YUA EPISODES

    1. The Three Rules of Humane Tech
    2. The Tech We Need for 21st Century Democracy
    3. Can We Govern AI?
    4. An Alternative to Silicon Valley Unicorns

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_
