
    This Moment in AI: How We Got Here and Where We’re Going

    August 12, 2024
    What are the challenges posed by rapid AI advancement?
    How does 'The AI Dilemma' relate to social media?
    What is the significance of transformer technology in AI?
    What concerns exist about the emergence of Artificial General Intelligence?
    How can society responsibly integrate advancing AI technology?

    Podcast Summary

    • AI societal challenges: The rapid advancement of AI technology poses significant challenges to society, and the race to roll out AI systems quickly can lead to unsafe and harmful outcomes, necessitating careful consideration and regulation.

      The rapid advancement of AI technology poses significant challenges to society, as competitive forces drive companies to roll out AI systems as quickly as possible, often before society is prepared to handle the consequences. This was a key theme of "The AI Dilemma," a viral video released by Tristan Harris and Aza Raskin of the Center for Humane Technology in March 2023. They argued that, as with social media, the race to onboard as many people as possible encourages shortcuts that lead to unsafe and harmful outcomes. The development of the transformer architecture in 2017, which lets AI treat text, images, audio, and other data as one universal language of sequences, has accelerated progress and made it possible for AI to create or describe almost anything. That acceleration, however, also brings the risk of societal overwhelm and an inability to keep up with the pace of change. The hosts also discussed their travels and conversations around the world regarding AI for Good and social media reform, particularly the passage of legislation on kids' online safety. Overall, the conversation highlighted the need for careful consideration and regulation as AI continues to evolve and reshape society.

    • AI Development and AGI: Rapid advancements in AI technology, specifically AGI, are leading to intense competition among companies to develop larger and more complex models, with significant investments and expected impacts on the economy and workforce.

      The development of advanced AI, specifically the pursuit of Artificial General Intelligence (AGI), is progressing rapidly, with significant investments driving increased capabilities. Companies are competing to train ever-larger and more complex models, with rumored training runs costing from hundreds of millions to tens of billions of dollars. The Bay Area tech community is deeply engaged in this pursuit, and some believe that continued scaling alone will produce AGI capable of replacing human beings in a wide range of economic tasks, though there is ongoing debate about whether additional breakthroughs are necessary or imminent. Many in this community expect AGI to emerge sooner rather than later, making it crucial to prepare for its potential impacts. The conversations in the Bay Area reflect a sense of urgency and excitement about the future of AI.

    • AI Ethics: As AI capabilities advance rapidly, it's crucial to address ethical concerns and set the right incentives before AI becomes deeply entrenched in society, ensuring ethical considerations are prioritized in continued research and investment.

      We are at a critical juncture in the development of Artificial Intelligence (AI), and it's essential to address potential ethical concerns and set the right incentives before AI becomes deeply entrenched in our society. The raw capabilities of AI are advancing at an unprecedented rate, even if the integration of AI into our daily lives takes longer. Some skepticism about the impact of AI is warranted, given the hype surrounding the technology and its slow diffusion into the economy, but those raw capabilities should not be underestimated: progress is moving faster than many realize. The development of AI also faces open questions about the availability of data. Even as debate continues over how much usable data remains on the open internet, companies are racing for proprietary data sets and exploring multimodal models that can process images, audio, and other types of data beyond text. These open questions highlight the need for continued research and investment in AI while ensuring ethical considerations are prioritized.

    • AI-generated culture: AI-generated content is increasingly dominating human-made content in the attention economy of social media, raising concerns about humans losing control.

      While there are concerns about the use of synthetic data in AI models and the potential for a downward-spiral effect, creating synthetic data specifically for benchmarking and improving model performance is a different issue. The deeper worry is the increasing prevalence of synthetic content in our culture, particularly in the attention economy of social media: as AI-generated content becomes more engaging and out-competes human-made content, humans risk losing control over the dominant form of culture. This is not a claim that AI-generated content is superior in value to human-made content, but rather that it can more effectively play the attention-economy game set up by social media platforms. This is already happening, and the hosts call it a terrifying prospect, even though humans will still have artisanal art and offline spaces free from AI-generated content. The hosts closed the segment by turning to a recent AI conference, where experts discussed the latest advancements and potential implications of AI technology, a topic picked up in the next takeaway.

    • AI funding imbalance: The gap between funding for advancing AI and ensuring its safety and security is significant, estimated to be around 1,000 to 1 or even 2,000 to 1, and there's a need for more open conversations about the incentives and potential harms of AI.

      A key lesson from the UN AI for Good conference in Geneva is the significant imbalance in funding between advancing the power of AI and ensuring its safety and security. Stuart Russell, a renowned AI expert, emphasized this gap, which he estimates to be around 1,000 to 1, or even 2,000 to 1. This contrasts with other industries, such as nuclear power, where extensive safety measures are in place. The conference highlighted the importance of acknowledging both the benefits and risks of AI, yet there was a tendency to focus only on the opportunities, leaving potential harms largely unexamined. This "half-lighting" approach was evident in various panels, including discussions of open source. Tristan and Aza emphasized the need for a more balanced perspective, since failing to acknowledge the risks could lead to unexpected negative consequences. Many attendees appreciated this perspective, underscoring the need for more open conversations about the incentives and potential harms of AI. It was also inspiring to meet individuals and organizations, such as the head of IKEA's responsible AI innovation and the Cuban minister, who have integrated the AI Dilemma into their policies.

    • AI regulation discussions: Individual actions inspired by thought-provoking podcasts can lead to meaningful collaborations and initiatives in addressing complex global issues like AI regulation.

      Thought-provoking discussions, like those presented in podcasts, can lead to meaningful actions and collaborations in addressing complex global issues, such as AI regulation. The Swiss diplomat Katharina Frey, inspired by a podcast episode on AI ethics, initiated the Swiss Call for Trust and Transparency in AI, which has since led to a virtual network for AI research and resource pooling. This is a powerful reminder that individual actions, no matter how small, can contribute to larger movements and positive change. Good intentions with technology are essential, but it's equally important to consider the incentives driving technology adoption and design. The organization's work on AI builds upon its earlier efforts to address the harms of social media, highlighting the continuity and interconnectedness of these challenges.

    • Technology's negative consequences: While celebrating AI's potential benefits, it's important to address potential harms like shortened attention spans, division, and weakening of the information commons. Policymaking and investment can help mitigate these issues and build a stronger technological future.

      While we celebrate the good things that technology, particularly AI, can bring, it's crucial to consider the potential negative consequences and work to mitigate them before they undermine society. The excitement around social media 15 years ago was similar to the excitement around AI today, but the incentives underlying these technologies have the potential to create systemic harm: shortened attention spans, division, and a weakening of the information commons. The challenge is to ensure that technology, and AI specifically, is rolled out in a way that strengthens societies rather than weakens them, by identifying the societal fragilities that new technology may expose and working to address them. This is not an anti-technology or anti-AI stance, but a call to action to build a taller and more impressive technological future in a responsible way: rather than pulling blocks from the bottom of the tower to build on top, we should find alternative ways to build that tower. Policymaking and investment in areas like federal liability for AI are potential solutions to these issues.

    • AI and social media safety governance: Urgent investment is needed for AI and social media platform safety governance, but the decision-making process and responsibility are unclear, potentially leading to prioritization of cost savings over safety.

      There is an urgent need for significant investment in governing and ensuring the safety of AI and social media platforms. Currently, only a small percentage of the vast budgets dedicated to making AI more capable is spent on safety measures and governance. The decision-making process for safety spending is unclear, with no consensus on whether it should be a federal, international, or corporate responsibility. Without binding regulations, companies may prioritize cost savings over safety, leading to a dangerous race to the bottom. However, there have been some encouraging developments, including the Surgeon General's call for a warning label on social media and the passage of the Kids Online Safety and Privacy Act in the United States Senate. These steps, driven by advocacy from parents and organizations, represent important progress in establishing new social norms and regulations for the tech industry.

    • Online safety for children: The pressing need for policies and solutions to ensure children's safety online, despite advancements in technology and efforts to address cyberbullying, addiction, and other risks.

      The issue of online safety, particularly for children, is a pressing concern that requires immediate attention. As technology continues to advance, the risks of cyberbullying, addiction, and even death become more prevalent. The story of Kristin Bride, whose son died by suicide after anonymous cyberbullying on the messaging app YOLO, serves as a tragic reminder of the real-life consequences of these issues; despite YOLO's stated commitments to address cyberbullying, much more needs to be done. The passage of the Kids Online Safety and Privacy Act is a step in the right direction, but it's not enough. We must continue to advocate for policies and solutions that prioritize the well-being of children online. The misaligned incentives of social media, including the use of AI, have not been solved, and we must be vigilant in ensuring that technology is used in a way that is safe, ethical, and humane for all users.

    Recent Episodes from Your Undivided Attention

    AI Is Moving Fast. We Need Laws that Will Too.


    AI is moving fast. And as companies race to roll out newer, more capable models–with little regard for safety–the downstream risks of those models become harder and harder to counter. On this week’s episode of Your Undivided Attention, CHT’s policy director Casey Mock comes on the show to discuss a new legal framework to incentivize better AI, one that holds AI companies liable for the harms of their products.

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

    RECOMMENDED MEDIA

    The CHT Framework for Incentivizing Responsible AI Development

    Further Reading on Air Canada’s Chatbot Fiasco 

    Further Reading on the Elon Musk Deep Fake Scams 

    The Full Text of SB1047, California’s AI Regulation Bill 

    Further reading on SB1047 

    RECOMMENDED YUA EPISODES

    Former OpenAI Engineer William Saunders on Silence, Safety, and the Right to Warn

    Can We Govern AI? with Marietje Schaake

    A First Step Toward AI Regulation with Tom Wheeler

    Correction: Casey incorrectly stated the year that the US banned child labor as 1937. It was banned in 1938.

    Esther Perel on Artificial Intimacy (rerun)


    [This episode originally aired on August 17, 2023] For all the talk about AI, we rarely hear about how it will change our relationships. As we swipe to find love and consult chatbot therapists, acclaimed psychotherapist and relationship expert Esther Perel warns that there’s another harmful “AI” on the rise — Artificial Intimacy — one that is depriving us of real connection. Tristan and Esther discuss how depending on algorithms can fuel alienation, and then imagine how we might design technology to strengthen our social bonds.

    RECOMMENDED MEDIA 

    Mating in Captivity by Esther Perel

    Esther's debut work on the intricacies behind modern relationships, and the dichotomy of domesticity and sexual desire

    The State of Affairs by Esther Perel

    Esther takes a look at modern relationships through the lens of infidelity

    Where Should We Begin? with Esther Perel

    Listen in as real couples in search of help bare the raw and profound details of their stories

    How’s Work? with Esther Perel

    Esther’s podcast that focuses on the hard conversations we're afraid to have at work 

    Lars and the Real Girl (2007)

    A young man strikes up an unconventional relationship with a doll he finds on the internet

    Her (2013)

    In a near future, a lonely writer develops an unlikely relationship with an operating system designed to meet his every need

    RECOMMENDED YUA EPISODES

    Big Food, Big Tech and Big AI with Michael Moss

    The AI Dilemma

    The Three Rules of Humane Tech

    Digital Democracy is Within Reach with Audrey Tang

     

    CORRECTION: Esther refers to the 2007 film Lars and the Real Doll. The title of the film is Lars and the Real Girl.
     

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

    Tech's Big Money Campaign is Getting Pushback with Margaret O'Mara and Brody Mullins


    Today, the tech industry is the second-biggest lobbying power in Washington, DC, but that wasn’t true as recently as ten years ago. How did we get to this moment? And where could we be going next? On this episode of Your Undivided Attention, Tristan and Daniel sit down with historian Margaret O’Mara and journalist Brody Mullins to discuss how Silicon Valley has changed the nature of American lobbying.

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

    RECOMMENDED MEDIA

    The Wolves of K Street: The Secret History of How Big Money Took Over Big Government - Brody’s book on the history of lobbying.

    The Code: Silicon Valley and the Remaking of America - Margaret’s book on the historical relationship between Silicon Valley and Capitol Hill.

    More information on the Google antitrust ruling

    More information on KOSPA

    More information on the SOPA/PIPA internet blackout

    Detailed breakdown of Internet lobbying from Open Secrets

     

    RECOMMENDED YUA EPISODES

    U.S. Senators Grilled Social Media CEOs. Will Anything Change?

    Can We Govern AI? with Marietje Schaake
    The Race to Cooperation with David Sloan Wilson

     

    CORRECTION: Brody Mullins refers to AT&T as having a “hundred million dollar” lobbying budget in 2006 and 2007. While we couldn’t verify the size of their budget for lobbying, their actual lobbying spend was much less than this: $27.4m in 2006 and $16.5m in 2007, according to OpenSecrets.

     

    The views expressed by guests appearing on Center for Humane Technology’s podcast, Your Undivided Attention, are their own, and do not necessarily reflect the views of CHT. CHT does not support or oppose any candidate or party for election to public office.

     

    This Moment in AI: How We Got Here and Where We’re Going


    It’s been a year and a half since Tristan and Aza laid out their vision and concerns for the future of artificial intelligence in The AI Dilemma. In this Spotlight episode, the guys discuss what’s happened since then–as funding, research, and public interest in AI have exploded–and where we could be headed next. Plus, some major updates on social media reform, including the passage of the Kids Online Safety and Privacy Act in the Senate.

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

     

    RECOMMENDED MEDIA

    The AI Dilemma: Tristan and Aza’s talk on the catastrophic risks posed by AI.

    Info Sheet on KOSPA: More information on KOSPA from FairPlay.

    Situational Awareness by Leopold Aschenbrenner: A widely cited blog from a former OpenAI employee, predicting the rapid arrival of AGI.

    AI for Good: More information on the AI for Good summit that was held earlier this year in Geneva. 

    Using AlphaFold in the Fight Against Plastic Pollution: More information on Google’s use of AlphaFold to create an enzyme to break down plastics. 

    Swiss Call For Trust and Transparency in AI: More information on the initiatives mentioned by Katharina Frey.

     

    RECOMMENDED YUA EPISODES

    War is a Laboratory for AI with Paul Scharre

    Jonathan Haidt On How to Solve the Teen Mental Health Crisis

    Can We Govern AI? with Marietje Schaake 

    The Three Rules of Humane Tech

    The AI Dilemma

     

    Clarification: Swiss diplomat Nina Frey’s full name is Katharina Frey.

     

     

    Your Undivided Attention
    August 12, 2024

    Decoding Our DNA: How AI Supercharges Medical Breakthroughs and Biological Threats with Kevin Esvelt


    AI has been a powerful accelerant for biological research, rapidly opening up new frontiers in medicine and public health. But that progress can also make it easier for bad actors to manufacture new biological threats. In this episode, Tristan and Daniel sit down with biologist Kevin Esvelt to discuss why AI has been such a boon for biologists and how we can safeguard society against the threats that AIxBio poses.

    RECOMMENDED MEDIA

    Sculpting Evolution: Information on Esvelt’s lab at MIT.

    SecureDNA: Esvelt’s free platform to provide safeguards for DNA synthesis.

    The Framework for Nucleic Acid Synthesis Screening: The Biden admin’s suggested guidelines for DNA synthesis regulation.

    Senate Hearing on Regulating AI Technology: C-SPAN footage of Dario Amodei’s testimony to Congress.

    The AlphaFold Protein Structure Database

    RECOMMENDED YUA EPISODES

    U.S. Senators Grilled Social Media CEOs. Will Anything Change?

    Big Food, Big Tech and Big AI with Michael Moss

    The AI Dilemma

    Clarification: President Biden’s executive order only applies to labs that receive funding from the federal government, not state governments.

    How to Think About AI Consciousness With Anil Seth


    Will AI ever start to think by itself? If it did, how would we know, and what would it mean?

    In this episode, Dr. Anil Seth and Aza discuss the science, ethics, and incentives of artificial consciousness. Seth is Professor of Cognitive and Computational Neuroscience at the University of Sussex and the author of Being You: A New Science of Consciousness.

    RECOMMENDED MEDIA

    Frankenstein by Mary Shelley

    A free, plain-text version of Shelley’s classic of gothic literature.

    OpenAI’s GPT-4o Demo

    A video from OpenAI demonstrating GPT-4o’s remarkable ability to mimic human sentience.

    You Can Have the Blue Pill or the Red Pill, and We’re Out of Blue Pills

    The NYT op-ed from last year by Tristan, Aza, and Yuval Noah Harari outlining the AI dilemma. 

    What It’s Like to Be a Bat

    Thomas Nagel’s essay on the nature of consciousness.

    Are You Living in a Computer Simulation?

    Philosopher Nick Bostrom’s essay on the simulation hypothesis.

    Anthropic’s Golden Gate Claude

    A blog post about Anthropic’s recent discovery of millions of distinct concepts within their LLM, a major development in the field of AI interpretability.

    RECOMMENDED YUA EPISODES

    Esther Perel on Artificial Intimacy

    Talking With Animals... Using AI

    Synthetic Humanity: AI & What’s At Stake

    Why Are Migrants Becoming AI Test Subjects? With Petra Molnar


    Climate change, political instability, hunger. These are just some of the forces behind an unprecedented refugee crisis that’s expected to include over a billion people by 2050. In response to this growing crisis, wealthy governments like the US and the EU are employing novel AI and surveillance technologies to slow the influx of migrants at their borders. But will this rollout stop at the border?

    In this episode, Tristan and Aza sit down with Petra Molnar to discuss how borders have become a proving ground for the sharpest edges of technology, and especially AI. Petra is an immigration lawyer and co-creator of the Migration and Technology Monitor. Her new book is “The Walls Have Eyes: Surviving Migration in the Age of Artificial Intelligence.”

    RECOMMENDED MEDIA

    The Walls Have Eyes: Surviving Migration in the Age of Artificial Intelligence

    Petra’s newly published book on the rollout of high-risk tech at the border.

    Bots at the Gate

    A report co-authored by Petra about Canada’s use of AI technology in their immigration process.

    Technological Testing Grounds

    A report authored by Petra about the use of experimental technology in EU border enforcement.

    Startup Pitched Tasing Migrants from Drones, Video Reveals

    An article from The Intercept, containing the demo for Brinc’s taser drone pilot program.

    The UNHCR

    Information about the global refugee crisis from the UN.

    RECOMMENDED YUA EPISODES

    War is a Laboratory for AI with Paul Scharre

    No One is Immune to AI Harms with Dr. Joy Buolamwini

    Can We Govern AI? With Marietje Schaake

    CLARIFICATION: The iBorderCtrl project referenced in this episode was a pilot project that was discontinued in 2019.

    Former OpenAI Engineer William Saunders on Silence, Safety, and the Right to Warn


    This week, a group of current and former employees from OpenAI and Google DeepMind penned an open letter accusing the industry’s leading companies of prioritizing profits over safety. This comes after a spate of high-profile departures from OpenAI, including co-founder Ilya Sutskever and senior researcher Jan Leike, as well as reports that OpenAI has gone to great lengths to silence would-be whistleblowers.

    The writers of the open letter argue that researchers have a “right to warn” the public about AI risks and laid out a series of principles that would protect that right. In this episode, we sit down with one of those writers: William Saunders, who left his job as a research engineer at OpenAI in February. William is now breaking the silence on what he saw at OpenAI that compelled him to leave the company and to put his name to this letter. 

    RECOMMENDED MEDIA 

    The Right to Warn Open Letter 

    My Perspective On "A Right to Warn about Advanced Artificial Intelligence": A follow-up from William about the letter

    Leaked OpenAI documents reveal aggressive tactics toward former employees: An investigation by Vox into OpenAI’s policy of non-disparagement.

    RECOMMENDED YUA EPISODES

    1. A First Step Toward AI Regulation with Tom Wheeler 
    2. Spotlight on AI: What Would It Take For This to Go Well? 
    3. Big Food, Big Tech and Big AI with Michael Moss 
    4. Can We Govern AI? with Marietje Schaake

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

    War is a Laboratory for AI with Paul Scharre


    Right now, militaries around the globe are investing heavily in the use of AI weapons and drones. From Ukraine to Gaza, weapons systems with increasing levels of autonomy are being used to kill people and destroy infrastructure, and the development of fully autonomous weapons shows little sign of slowing down. What does this mean for the future of warfare? What safeguards can we put up around these systems? And is this runaway trend toward autonomous warfare inevitable, or will nations come together and choose a different path? In this episode, Tristan and Daniel sit down with Paul Scharre to try to answer some of these questions. Paul is a former Army Ranger, the author of two books on autonomous weapons, and he helped the Department of Defense write much of its policy on the use of AI in weaponry.

    RECOMMENDED MEDIA

    Four Battlegrounds: Power in the Age of Artificial Intelligence: Paul’s book on the future of AI in war, which came out in 2023.

    Army of None: Autonomous Weapons and the Future of War: Paul’s 2018 book documenting and predicting the rise of autonomous and semi-autonomous weapons as part of modern warfare.

    The Perilous Coming Age of AI Warfare: How to Limit the Threat of Autonomous Warfare: Paul’s article in Foreign Affairs based on his recent trip to the battlefield in Ukraine.

    The night the world almost ended: A BBC documentary about Stanislav Petrov’s decision not to start nuclear war.

    AlphaDogfight Trials Final Event: The full simulated dogfight between an AI and human pilot. The AI pilot swept, 5-0.

    RECOMMENDED YUA EPISODES

    1. The AI ‘Race’: China vs. the US with Jeffrey Ding and Karen Hao
    2. Can We Govern AI? with Marietje Schaake
    3. Big Food, Big Tech and Big AI with Michael Moss
    4. The Invisible Cyber-War with Nicole Perlroth

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

    AI and Jobs: How to Make AI Work With Us, Not Against Us With Daron Acemoglu


    Tech companies say that AI will lead to massive economic productivity gains. But as we know from the first digital revolution, that’s not what happened. Can we do better this time around?

    RECOMMENDED MEDIA

    Power and Progress by Daron Acemoglu and Simon Johnson - Professor Acemoglu co-authored a bold reinterpretation of economics and history that will fundamentally change how you see the world.

    Can We Have Pro-Worker AI? - Professor Acemoglu co-authored this paper about redirecting AI development onto the human-complementary path.

    Rethinking Capitalism: In Conversation with Daron Acemoglu - The Wheeler Institute for Business and Development hosted Professor Acemoglu to examine how technology affects the distribution and growth of resources while being shaped by economic and social incentives.

    RECOMMENDED YUA EPISODES

    1. The Three Rules of Humane Tech
    2. The Tech We Need for 21st Century Democracy
    3. Can We Govern AI?
    4. An Alternative to Silicon Valley Unicorns

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_
