
    Podcast Summary

    • Executive Order on AI: Addressing Multifaceted Challenges. The Biden administration's executive order on AI covers privacy, bias, national security, and job automation, mandating data sharing, enhancing life sciences standards, and launching a talent search.

      The US government, under President Biden, has taken a significant step towards addressing the multifaceted challenges posed by AI through a 111-page executive order. This order covers various aspects of AI's impact on society, including privacy, bias, national security, and job automation. Key components include mandating that companies share testing data and notify the government when training new large-scale AI models, enhancing gene synthesis standards in the life sciences, and launching an AI talent search. The order signals a first step towards addressing each issue, marking the end of the beginning in the ongoing effort to mitigate the potential harms and harness the benefits of AI.

    • Executive order on AI regulation: A step forward but not a complete solution. The recent executive order on AI regulation is a significant step forward, but it lacks enforceable measures and requires congressional action for effective governance in the digital age.

      The recent executive order on AI regulation signed by President Biden is a significant step forward, but it's not a complete solution. Tom Wheeler, a tech industry expert and former FCC chairman, praised the leadership shown by the administration but emphasized the need for enforceable regulations, which can only be achieved through congressional action. The executive order covers various aspects of AI regulation, including the use of the Defense Production Act for government involvement in advanced AI models. However, the majority of the order consists of guidance rather than enforceable measures. To effectively address the challenges posed by AI and other digital technologies, there is a need to rethink and update our governance structures to better match the 21st century.

    • Executive order targets AI risks through government oversight. The recent executive order requires certain companies to report AI training processes and vulnerabilities, invokes the Defense Production Act for national security, and uses federal funding conditions to enforce new practices, addressing potential risks in various industries.

      The recent executive order on AI development requires certain companies building advanced AI models to inform the government about their training processes and to share vulnerability findings. This is a step towards government oversight, although it falls short of more extensive testing and licensing proposals. The Defense Production Act is being invoked to address national security implications, signaling both the value of AI and its potential risks. Additionally, federal funding conditions are being used to enforce these new practices, which could impact industries from biotech to long-term AI applications. Together, these measures confront the "hydra" of risks associated with AI that the executive order aims to address.

    • Biden administration's executive order on AI: A significant step forward. The order sets new standards for AI research and emphasizes the government's role as both regulator and consumer, but the industry's contradictory stance on regulation remains a complex dynamic.

      The recent executive order on AI issued by the Biden administration is a significant step forward in addressing the potential risks posed by artificial intelligence, particularly at the intersection of AI and biology. The order reflects the government's role as both a regulator and the largest consumer, and it sets new standards for genetic synthesis screening and tighter controls on materials used in AI research. However, relying solely on government procurement or funding has its limitations, as it only reaches those who are being funded. The order also highlights the complex dynamic between the tech industry and regulation: some companies publicly endorse regulation while privately opposing it. This dynamic, reminiscent of a classic moment in Casablanca, showcases the industry's contradictory stance on regulation. To a former regulator like Wheeler, this dynamic is not surprising, and it underscores the need for an open and nuanced conversation about the specifics of regulation and its potential impact on innovation.

    • Balancing Industry Expertise and Government Regulation for Net Neutrality and AI. Striking a balance between industry expertise and government regulation is crucial for net neutrality and AI, ensuring the public interest is protected while allowing technology to evolve rapidly.

      Net neutrality is a complex issue with various interpretations, and defining it requires careful consideration of the public interest. The ongoing development of AI adds another layer of complexity, necessitating a significant hiring spree and the creation of an AI Council in the White House. However, some argue that a new federal agency with rule-making authority is needed to address the realities of a digital economy and society. It's crucial to strike a balance between industry expertise and government regulation so that the public interest is protected even as technology continues to evolve rapidly. The argument that only the companies themselves can understand and regulate AI, an echo of the early days of digital platforms, is not a valid excuse: history shows that governments have successfully regulated complex technologies before.

    • Balancing progress and protection in AI regulation. Governments need to adopt digital management techniques like transparency, agility, and risk-based assessments to effectively regulate AI and strike a balance between progress and protection.

      While innovators drive progress and rule-breaking is essential for advancement, there comes a time when regulations are necessary to protect individual rights and the public interest. The history of industrialization shows that rigid, micromanaging regulatory agencies, modeled on industrial-era corporate management practices, are not effective in the digital age. Instead, governments need to adopt digital management techniques such as transparency, agility, and risk-based assessments. The agility of government must match the agility of the technology being governed, or government risks losing control. Moving towards a more agile government with respect to AI means regulatory agencies that prioritize flexibility and adaptability in a rapidly evolving technological landscape. Innovators and regulators must work together to strike a balance between progress and protection.

    • Adapting to the digital age. As the digital world evolves, it's essential to keep laws and institutions up-to-date and adaptable to new challenges, while also recognizing the potential risks that come with rapid innovation.

      As our world becomes increasingly digital, the need to adapt and protect against new threats is more important than ever. Just as in the industrial age, when society faced new challenges and required innovative solutions, we must now approach digital challenges with the same level of commitment and creativity. Quoting Thomas Jefferson, "laws and institutions must go hand in hand with the progress of the human mind," and as new discoveries are made and circumstances change, we must also evolve our institutions and laws to keep pace. However, attempting to match the agility of tech companies in government can also lead to potential failures and political risks. The digital world presents us with "wicked problems," which are complex and continually evolving, and require ongoing innovation and adaptation in our oversight and solutions. It's crucial that we remain committed to finding new and effective ways to address these challenges, while also recognizing the potential risks and challenges that come with rapid innovation.

    • Making mistakes is inevitable, but learning from them is crucial for solving wicked problems. Despite the risks and complexities of wicked problems like climate change and AI development, it's important to keep trying and learning from mistakes to find solutions. Churchill's leadership during WWII and Taiwan's use of AI for governance are examples of making progress despite imperfections.

      Despite the complexity and potential risks associated with wicked problems like climate change and AI development, it is crucial not to give up or become paralyzed by fear of making mistakes. Instead, we should strive to find solutions and approaches, even imperfect ones, and be prepared to learn from and adapt to the consequences. Winston Churchill's leadership during World War II is an example of making mistakes along the way while ultimately arriving at a positive outcome. In the context of AI governance, upgrading governance itself using AI tools, as Audrey Tang is doing in Taiwan, could be a promising trailhead towards hope. By matching the agility of technology with the agility of governance, we can better navigate the rapidly evolving landscape of AI and mitigate potential harms. Ultimately, the end of the beginning is an opportunity to learn, adapt, and make progress towards effective solutions to wicked problems.

    • Embracing Technological Changes in Governance. The executive order's call for an AI officer in every agency is a step towards a more technologically advanced and humane future, but challenges and resistance are expected. The Center for Humane Technology is committed to helping catalyze this change.

      As technology continues to evolve and shape the way we govern, it's crucial for governments to adapt and embrace these changes. The executive order calling for an AI officer in every agency is a step in the right direction, but it's important to remember that this is just the beginning. There will be challenges and resistance, but we must continue to innovate and make progress. As the Spanish poet Antonio Machado said, "Paths are made by walking." We need to start taking steps towards a more technologically advanced and humane future, even if we don't have all the answers yet. The Center for Humane Technology is committed to helping catalyze this change, and we encourage everyone to join us on this journey towards a more humane and technologically advanced world.

    Recent Episodes from Your Undivided Attention

    Why Are Migrants Becoming AI Test Subjects? With Petra Molnar

    Climate change, political instability, hunger. These are just some of the forces behind an unprecedented refugee crisis that’s expected to include over a billion people by 2050. In response to this growing crisis, wealthy governments like the US and the EU are employing novel AI and surveillance technologies to slow the influx of migrants at their borders. But will this rollout stop at the border?

    In this episode, Tristan and Aza sit down with Petra Molnar to discuss how borders have become a proving ground for the sharpest edges of technology, and especially AI. Petra is an immigration lawyer and co-creator of the Migration and Technology Monitor. Her new book is “The Walls Have Eyes: Surviving Migration in the Age of Artificial Intelligence.”

    RECOMMENDED MEDIA

    The Walls Have Eyes: Surviving Migration in the Age of Artificial Intelligence

    Petra’s newly published book on the rollout of high-risk tech at the border.

    Bots at the Gate

    A report co-authored by Petra about Canada’s use of AI technology in its immigration process.

    Technological Testing Grounds

    A report authored by Petra about the use of experimental technology in EU border enforcement.

    Startup Pitched Tasing Migrants from Drones, Video Reveals

    An article from The Intercept, containing the demo for Brinc’s taser drone pilot program.

    The UNHCR

    Information about the global refugee crisis from the UN.

    RECOMMENDED YUA EPISODES

    War is a Laboratory for AI with Paul Scharre

    No One is Immune to AI Harms with Dr. Joy Buolamwini

    Can We Govern AI? With Marietje Schaake

    CLARIFICATION:

    The iBorderCtrl project referenced in this episode was a pilot project that was discontinued in 2019.

    Former OpenAI Engineer William Saunders on Silence, Safety, and the Right to Warn

    This week, a group of current and former employees from OpenAI and Google DeepMind penned an open letter accusing the industry’s leading companies of prioritizing profits over safety. This comes after a spate of high-profile departures from OpenAI, including co-founder Ilya Sutskever and senior researcher Jan Leike, as well as reports that OpenAI has gone to great lengths to silence would-be whistleblowers.

    The writers of the open letter argue that researchers have a “right to warn” the public about AI risks, and they lay out a series of principles that would protect that right. In this episode, we sit down with one of those writers: William Saunders, who left his job as a research engineer at OpenAI in February. William is now breaking the silence on what he saw at OpenAI that compelled him to leave the company and to put his name to this letter.

    RECOMMENDED MEDIA 

    The Right to Warn Open Letter 

    My Perspective On "A Right to Warn about Advanced Artificial Intelligence": A follow-up from William about the letter

     Leaked OpenAI documents reveal aggressive tactics toward former employees: An investigation by Vox into OpenAI’s policy of non-disparagement.

    RECOMMENDED YUA EPISODES

    1. A First Step Toward AI Regulation with Tom Wheeler 
    2. Spotlight on AI: What Would It Take For This to Go Well? 
    3. Big Food, Big Tech and Big AI with Michael Moss 
    4. Can We Govern AI? with Marietje Schaake

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

    War is a Laboratory for AI with Paul Scharre

    Right now, militaries around the globe are investing heavily in AI weapons and drones. From Ukraine to Gaza, weapons systems with increasing levels of autonomy are being used to kill people and destroy infrastructure, and the development of fully autonomous weapons shows few signs of slowing down. What does this mean for the future of warfare? What safeguards can we put up around these systems? And is this runaway trend toward autonomous warfare inevitable, or will nations come together and choose a different path? In this episode, Tristan and Daniel sit down with Paul Scharre to try to answer some of these questions. Paul is a former Army Ranger, the author of two books on autonomous weapons, and he helped the Department of Defense write much of its policy on the use of AI in weaponry.

    RECOMMENDED MEDIA

    Four Battlegrounds: Power in the Age of Artificial Intelligence: Paul’s book on the future of AI in war, which came out in 2023.

    Army of None: Autonomous Weapons and the Future of War: Paul’s 2018 book documenting and predicting the rise of autonomous and semi-autonomous weapons as part of modern warfare.

    The Perilous Coming Age of AI Warfare: How to Limit the Threat of Autonomous Warfare: Paul’s article in Foreign Affairs based on his recent trip to the battlefield in Ukraine.

    The night the world almost ended: A BBC documentary about Stanislav Petrov’s decision not to start nuclear war.

    AlphaDogfight Trials Final Event: The full simulated dogfight between an AI and human pilot. The AI pilot swept, 5-0.

    RECOMMENDED YUA EPISODES

    1. The AI ‘Race’: China vs. the US with Jeffrey Ding and Karen Hao
    2. Can We Govern AI? with Marietje Schaake
    3. Big Food, Big Tech and Big AI with Michael Moss
    4. The Invisible Cyber-War with Nicole Perlroth

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

    AI and Jobs: How to Make AI Work With Us, Not Against Us With Daron Acemoglu

    Tech companies say that AI will lead to massive economic productivity gains. But as we know from the first digital revolution, that’s not what happened. Can we do better this time around?

    RECOMMENDED MEDIA

    Power and Progress by Daron Acemoglu and Simon Johnson

    Professor Acemoglu co-authored a bold reinterpretation of economics and history that will fundamentally change how you see the world

    Can we Have Pro-Worker AI?

    Professor Acemoglu co-authored this paper about redirecting AI development onto the human-complementary path

    Rethinking Capitalism: In Conversation with Daron Acemoglu

    The Wheeler Institute for Business and Development hosted Professor Acemoglu to examine how technology affects the distribution and growth of resources while being shaped by economic and social incentives

    RECOMMENDED YUA EPISODES

    1. The Three Rules of Humane Tech
    2. The Tech We Need for 21st Century Democracy
    3. Can We Govern AI?
    4. An Alternative to Silicon Valley Unicorns

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

    Jonathan Haidt On How to Solve the Teen Mental Health Crisis

    Suicides. Self-harm. Depression and anxiety. The toll of a social media-addicted, phone-based childhood has never been more stark. It can be easy for teens, parents and schools to feel like they’re trapped by it all. But in this conversation with Tristan Harris, author and social psychologist Jonathan Haidt makes the case that the conditions that led to today’s teenage mental health crisis can be turned around – with specific, achievable actions we all can take starting today.

    This episode was recorded live at the San Francisco Commonwealth Club.  

    Correction: Tristan mentions that 40 Attorneys General have filed a lawsuit against Meta for allegedly fostering addiction among children and teens through their products. However, the actual number is 42 Attorneys General who are taking legal action against Meta.

    Clarification: Jonathan refers to the Wait Until 8th pledge. By signing the pledge, a parent promises not to give their child a smartphone until at least the end of 8th grade. The pledge becomes active once at least ten other families from their child’s grade pledge the same.

    Chips Are the Future of AI. They’re Also Incredibly Vulnerable. With Chris Miller

    Beneath the race to train and release more powerful AI models lies another race: a race by companies and nation-states to secure the hardware needed to win AI supremacy.

    Correction: The latest available Nvidia chip is the Hopper H100 GPU, which has 80 billion transistors. Since the first commercially available chip had four transistors, the Hopper actually has 20 billion times that number. Nvidia recently announced the Blackwell, which boasts 208 billion transistors - but it won’t ship until later this year.
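    As a back-of-the-envelope check on that multiple, using only the figures stated in the correction above:

    \[ \frac{80 \times 10^{9}\ \text{transistors (H100)}}{4\ \text{transistors (first commercial chip)}} = 2 \times 10^{10} = 20\ \text{billion} \]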

    RECOMMENDED MEDIA 

    Chip War: The Fight For the World’s Most Critical Technology by Chris Miller

    To make sense of the current state of politics, economics, and technology, we must first understand the vital role played by chips

    Gordon Moore Biography & Facts

    Gordon Moore, the Intel co-founder behind Moore's Law, passed away in March of 2023

    AI’s most popular chipmaker Nvidia is trying to use AI to design chips faster

    Nvidia's GPUs are in high demand - and the company is using AI to accelerate chip production

    RECOMMENDED YUA EPISODES

    Future-proofing Democracy In the Age of AI with Audrey Tang

    How Will AI Affect the 2024 Elections? with Renee DiResta and Carl Miller

    The AI ‘Race’: China vs. the US with Jeffrey Ding and Karen Hao

    Protecting Our Freedom of Thought with Nita Farahany

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

    Future-proofing Democracy In the Age of AI with Audrey Tang

    What does a functioning democracy look like in the age of artificial intelligence? Could AI even be used to help a democracy flourish? Just in time for election season, Taiwan’s Minister of Digital Affairs Audrey Tang returns to the podcast to discuss healthy information ecosystems, resilience to cyberattacks, how to “prebunk” deepfakes, and more. 

    RECOMMENDED MEDIA 

    Testing Theories of American Politics: Elites, Interest Groups, and Average Citizens by Martin Gilens and Benjamin I. Page

    This academic paper addresses tough questions for Americans: Who governs? Who really rules? 

    Recursive Public

    Recursive Public is an experiment in identifying areas of consensus and disagreement among the international AI community, policymakers, and the general public on key questions of governance

    A Strong Democracy is a Digital Democracy

    Audrey Tang’s 2019 op-ed for The New York Times

    The Frontiers of Digital Democracy

    Nathan Gardels interviews Audrey Tang in Noema

    RECOMMENDED YUA EPISODES 

    Digital Democracy is Within Reach with Audrey Tang

    The Tech We Need for 21st Century Democracy with Divya Siddarth

    How Will AI Affect the 2024 Elections? with Renee DiResta and Carl Miller

    The AI Dilemma

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

    U.S. Senators Grilled Social Media CEOs. Will Anything Change?

    Was it political progress, or just political theater? The recent Senate hearing with social media CEOs led to astonishing moments, including Mark Zuckerberg’s public apology to families who lost children following abuse on social media. Our panel of experts, including Facebook whistleblower Frances Haugen, untangles the explosive hearing and offers a look ahead. How will this hearing impact protocol within these social media companies? How will it impact legislation? In short: will anything change?

    Clarification: Julie says that shortly after the hearing, Meta’s stock price had the biggest increase of any company in the stock market’s history. It was the biggest one-day gain by any company in Wall Street history.

    Correction: Frances says it takes Snap three or four minutes to take down exploitative content. In Snap's most recent transparency report, they list six minutes as the median turnaround time to remove exploitative content.

    RECOMMENDED MEDIA 

    Get Media Savvy

    Founded by Julie Scelfo, Get Media Savvy is a non-profit initiative working to establish a healthy media environment for kids and families

    The Power of One by Frances Haugen

    The inside story of Frances Haugen’s quest to bring transparency and accountability to Big Tech

    RECOMMENDED YUA EPISODES

    Real Social Media Solutions, Now with Frances Haugen

    A Conversation with Facebook Whistleblower Frances Haugen

    Are the Kids Alright?

    Social Media Victims Lawyer Up with Laura Marquez-Garrett

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

    Taylor Swift is Not Alone: The Deepfake Nightmare Sweeping the Internet

    Over the past year, a tsunami of apps that digitally strip the clothes off real people has hit the market. Now anyone can create fake non-consensual sexual images in just a few clicks. With cases proliferating in high schools, guest presenter Laurie Segall talks to legal scholar Mary Anne Franks about the AI-enabled rise in deepfake porn and what we can do about it.

    Correction: Laurie refers to the app 'Clothes Off.' It’s actually named Clothoff. There are many clothes remover apps in this category.

    RECOMMENDED MEDIA 

    Revenge Porn: The Cyberwar Against Women

    In a five-part digital series, Laurie Segall uncovers a disturbing internet trend: the rise of revenge porn

    The Cult of the Constitution

    In this provocative book, Mary Anne Franks examines the thin line between constitutional fidelity and constitutional fundamentalism

    Fake Explicit Taylor Swift Images Swamp Social Media

    Calls to protect women and crack down on the platforms and technology that spread such images have been reignited

    RECOMMENDED YUA EPISODES 

    No One is Immune to AI Harms

    Esther Perel on Artificial Intimacy

    Social Media Victims Lawyer Up

    The AI Dilemma

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

    Can Myth Teach Us Anything About the Race to Build Artificial General Intelligence? With Josh Schrei

    We usually talk about tech in terms of economics or policy, but the casual language tech leaders often use to describe AI — summoning an inanimate force with the powers of code — sounds more... magical. So, what can myth and magic teach us about the AI race? Josh Schrei, mythologist and host of The Emerald podcast, says that foundational cultural tales like "The Sorcerer's Apprentice" or Prometheus teach us the importance of initiation, responsibility, human knowledge, and care. He argues these stories and myths can guide ethical tech development by reminding us what it is to be human.

    Correction: Josh says the first telling of "The Sorcerer’s Apprentice" myth dates back to ancient Egypt, but it actually dates back to ancient Greece.

    RECOMMENDED MEDIA 

    The Emerald podcast

    The Emerald explores the human experience through a vibrant lens of myth, story, and imagination

    Embodied Ethics in The Age of AI

    A five-part course with The Emerald podcast’s Josh Schrei and School of Wise Innovation’s Andrew Dunn

    Nature Nurture: Children Can Become Stewards of Our Delicate Planet

    A U.S. Department of the Interior study found that the average American kid can identify hundreds of corporate logos but not plants and animals

    The New Fire

    AI is revolutionizing the world - here's how democracies can come out on top. This upcoming book was authored by an architect of President Biden's AI executive order

    RECOMMENDED YUA EPISODES 

    How Will AI Affect the 2024 Elections?

    The AI Dilemma

    The Three Rules of Humane Tech

    AI Myths and Misconceptions

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

    Related Episodes

    Spotlight: The Three Rules of Humane Tech

    In our previous episode, we shared a presentation Tristan and Aza recently delivered to a group of influential technologists about the race happening in AI. In that talk, they introduced the Three Rules of Humane Technology. In this Spotlight episode, we’re taking a moment to explore these three rules more deeply in order to clarify what it means to be a responsible technologist in the age of AI.

    Correction: Aza mentions infinite scroll being in the pockets of 5 billion people, implying that there are 5 billion smartphone users worldwide. The number of smartphone users worldwide is actually 6.8 billion now.

    RECOMMENDED MEDIA 

    We Think in 3D. Social Media Should, Too

    Tristan Harris writes about a simple visual experiment that demonstrates the power of one’s point of view

    Let’s Think About Slowing Down AI

    Katja Grace’s piece about how to avert doom by not building the doom machine

    If We Don’t Master AI, It Will Master Us

    In this New York Times opinion piece, Yuval Harari, Tristan Harris and Aza Raskin call upon world leaders to respond to this moment at the level of challenge it presents

    RECOMMENDED YUA EPISODES 

    The AI Dilemma

    Synthetic humanity: AI & What’s At Stake

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

    Ethical, Responsible, and Trustworthy AI

    In this episode of the Data Masters podcast, Anthony Deighton interviews Dr. Eva-Marie Muller-Stuler, a partner in Data and Analytics at Ernst & Young. They discuss how the advent of advanced large language models (LLMs) like ChatGPT, Bard, and others requires new thinking about ethical, responsible, and trustworthy AI.

    TikTok’s Uncertain Future with Jacob Ward

    TikTok is one of the fastest-growing social media platforms in the world and now has over a billion users worldwide. But its future in the United States remains in limbo. The Biden administration, citing national security concerns, has demanded that the Chinese-owned company be sold or face a federal ban. Montana lawmakers have already passed legislation banning the platform on personal devices, sending the bill to the governor. A lot of questions remain about the feasibility of statewide and federal bans, and about why, exactly, U.S. policymakers view this platform, which started as a lip-syncing app, as such a threat. Jacob Ward is the NBC News technology correspondent and the author of “In The Loop: How Technology Is Creating a World Without Choices and How to Fight Back.” He joins WITHpod to discuss what’s driven the app’s exponential growth, the company’s lack of transparency in the past, the case for and against it, what could be ahead on the regulatory front, and more.

    The Promise and Peril of Open Source AI with Elizabeth Seger and Jeffrey Ladish

    As AI development races forward, a fierce debate has emerged over open source AI models. So what does it mean to open-source AI? Are we opening Pandora’s box of catastrophic risks? Or is open-sourcing AI the only way we can democratize its benefits and dilute the power of big tech? 

    Correction: When discussing the large language model Bloom, Elizabeth said it functions in 26 different languages. Bloom is actually able to generate text in 46 natural languages and 13 programming languages - and more are in the works.

    RECOMMENDED MEDIA 

    Open-Sourcing Highly Capable Foundation Models

    This report, co-authored by Elizabeth Seger, attempts to clarify open-source terminology and to offer a thorough analysis of risks and benefits from open-sourcing AI

    BadLlama: cheaply removing safety fine-tuning from Llama 2-Chat 13B

    This paper, co-authored by Jeffrey Ladish, demonstrates that it’s possible to effectively undo the safety fine-tuning from Llama 2-Chat 13B with less than $200 while retaining its general capabilities

    Centre for the Governance of AI

    Supports governments, technology companies, and other key institutions by producing relevant research and guidance around how to respond to the challenges posed by AI

    AI: Futures and Responsibility (AI:FAR)

    Aims to shape the long-term impacts of AI in ways that are safe and beneficial for humanity

    Palisade Research

    Studies the offensive capabilities of AI systems today to better understand the risk of losing control to AI systems forever

    RECOMMENDED YUA EPISODES

    A First Step Toward AI Regulation with Tom Wheeler

    No One is Immune to AI Harms with Dr. Joy Buolamwini

    Mustafa Suleyman Says We Need to Contain AI. How Do We Do It?

    The AI Dilemma

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_