
    Podcast Summary

    • Addressing tech issues requires collective action, especially from tech companies: Individual targeting practices continue to spread and to profit tech companies; understanding and addressing these issues is essential to changing the system.

      The issues the Center for Humane Technology addresses, such as election integrity, social isolation, and the toxification of the information environment, require collective action from everyone, especially those within technology companies. The podcast, though growing rapidly, is just a small part of the solution. The methods of persuasion demonstrated by Cambridge Analytica, like targeting individuals with customized messages, are now accessible to any candidate with a Facebook account. These practices will continue to spread regardless of data security, and Facebook, as the arms dealer, profits from their deployment. It's crucial for everyone to understand and address these issues in order to change the system.

    • Revolutionizing Political Engagement with Data-Driven Social Media: The Obama campaign's use of social media for one-to-one interaction led to exponential growth in political engagement, and AI systems are now taking it further with real-time diarization and the matching of sales calls to successful strategies.

      Data-driven targeted messaging through social media revolutionized political engagement and outreach during the Obama campaign. Before then, campaigns primarily used email for fundraising and speeches for reaching voters. The Obama campaign's use of social media led to exponential growth in engagement, with thousands or tens of thousands of people attending rallies instead of just tens or hundreds. This was achieved through one-to-one interaction, such as real-time text Q&A during debate watch parties. AI systems are now taking this a step further, with real-time diarization and the matching of live sales calls to previously successful strategies, creating an asymmetric power advantage for the salesperson. This technology, already in use in B2B sales, is likely to be adopted by political campaigns in the future. Overall, data and targeted messaging have significantly transformed the way campaigns reach and engage voters.
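      To make the call-matching mechanic concrete, here is a minimal, hypothetical sketch: TF-IDF similarity against a small library of invented "winning" lines stands in for what production systems do with live speech-to-text, diarization, and learned embeddings.

```python
# Hedged sketch of the "match this call to successful strategies" step.
# The snippet library and the utterance are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

winning_snippets = [  # hypothetical lines from calls that previously closed
    "what would it take for this to solve your team's biggest problem",
    "customers in your industry saw results within the first month",
    "let's schedule a follow-up with your decision makers this week",
]

vectorizer = TfidfVectorizer().fit(winning_snippets)
library = vectorizer.transform(winning_snippets)

def suggest_line(live_utterance: str) -> str:
    """Return the historically successful line most similar to what was just said."""
    scores = cosine_similarity(vectorizer.transform([live_utterance]), library)
    return winning_snippets[scores.argmax()]

print(suggest_line("we're not sure this fits our budget this quarter"))
```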

    • Using Data to Expand Political Reach: Advanced data tools enabled political campaigns to proactively find new audiences and expand their reach by identifying individuals with behavioral patterns and interests similar to those of their existing supporters.

      The use of data and targeting in political campaigns has evolved significantly over the years. In 2010, campaigns relied on manual methods to target their messages to supporters based on known interests and past engagement. This was a reactive approach: campaigns could only reach out to those who had already shown support. With the development of tools like lookalike modeling and the Friends API, however, campaigns gained the ability to proactively find new audiences and expand their reach. These tools allowed campaigns to identify individuals with behavioral patterns and interests similar to those of their existing supporters, effectively finding "doppelgangers" in the crowd. The power of these tools grew during the 2012 elections, when over 40,000 developers were given access to the personal data of most people on the platform through the Friends API. The exponential growth in advertising tools over the following years enabled campaigns to target audiences by various categories, including race and religion. The key difference between the 2012 Obama campaign and the campaigns of 2016 was the intention of the messaging: the 2012 campaign focused on speaking to supporters it already had, while the 2016 campaigns used these advanced tools to find new supporters and shape their perceptions.
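      A toy sketch of the lookalike idea, with synthetic feature vectors standing in for the behavioral data described above: score everyone by similarity to the average existing supporter and keep the closest matches.

```python
# Hedged sketch of lookalike modeling: find the non-supporters whose feature
# vectors most resemble the "average" known supporter. All data is synthetic.
import numpy as np

rng = np.random.default_rng(0)
supporters = rng.random((500, 20))     # known supporters' behavior features
candidates = rng.random((10_000, 20))  # everyone else

centroid = supporters.mean(axis=0)

# Cosine similarity of each candidate to the supporter centroid.
sims = candidates @ centroid / (
    np.linalg.norm(candidates, axis=1) * np.linalg.norm(centroid)
)

lookalikes = np.argsort(sims)[::-1][:100]  # the 100 closest "doppelgangers"
print(lookalikes[:10])
```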

    • Negative campaigning and voter suppression tactics were more advanced and pervasive in the 2016 elections: Negative messaging became the focus of entire organizations, leaving the democratic process vulnerable. Tech companies have the power to mitigate the damage with transformative decisions.

      The use of negative messaging, counter-campaigning, and voter suppression tactics in political campaigns, particularly during the 2016 U.S. election and Brexit, was significantly more advanced and pervasive than in the 2012 election. Obama's campaign had a strict policy against negative messaging, and all messages it pushed out were positive and encouraging. By 2016, however, negative messaging was the focus of entire organizations, leading to a lack of accountability and a vulnerable democratic process. The lack of enforcement of election laws on technology platforms allowed widespread abuse, with consequences that could sway the outcome of elections. Companies like Facebook and Twitter have the power to make transformative decisions, such as banning political advertising or implementing blackout periods before elections, to mitigate the effects of unregulated, algorithmically optimized toxic speech.

    • Tech companies fear acknowledging the potential dangers and instability of their ad systems: They hesitate to halt political ads because doing so would imply systemic problems, and individuals underestimate their own susceptibility to persuasion, with lasting consequences for public perception and behavior.

      Tech companies' reluctance to turn off political ads around critical moments, even briefly, stems from a fear of admitting the potential danger and instability of their advertising systems: acknowledging the exponential number of targeted ads and the machines behind them could imply a fundamental problem with the entire system. Furthermore, individuals tend to underestimate their own susceptibility to persuasion, making it hard to believe that messaging campaigns can change their biases or behaviors. The impact of persuasive messaging can be measured by observing changes in people's activities and searches after exposure to specific ads, and over time the biases shaped by such campaigns can influence behavior, making it crucial for companies to consider the consequences of their advertising practices. The "Defeat Crooked Hillary" campaign, which originated in Trump's phrasing and was developed by Cambridge Analytica, demonstrates how cognitive binding through negative phrases can significantly shift public perception of, and support for, political candidates.

    • Exploiting Biases in Political Campaigns: Political campaigns can manipulate people's biases, particularly fear, to influence their perceptions and behaviors, as demonstrated during the 2016 US elections with messaging targeted by personality traits.

      Our biases, particularly in the context of political campaigns, can be exploited in profound ways to influence people's perceptions and behaviors. The use of fear-based messaging, for instance, can be particularly effective for individuals high in neuroticism, a personality trait characterized by emotional instability and susceptibility to fear. This was demonstrated during the 2016 US elections when the Crooked Hillary campaign targeted neurotics with fear-based messaging, while using hopeful and assertive messages for other personality types. This tactic, which can be likened to relentlessly hammering on a person's perceived weakness, raises concerns about the ethical implications of such manipulation in our democracies. The fact that the same speaker can deliver two completely contradictory messages to different audiences is a concerning sign of potential untrustworthiness and sociopathic behavior.

    • Automated sociopathy in digital advertising leads to societal incoherence: Digital advertising algorithms prioritize content that generates clicks, often inflammatory or false, leading to societal confusion and division.

      The mass infrastructure for automated sociopathy in digital advertising, as described in the discussion, leads to societal incoherence and an inability to agree on truths, because contradictory messages are constantly delivered to different audiences. Microtargeting, or human targeting as it should be called, uses supercomputers and vast amounts of data to find the right brains to target, selling the bullets to the highest bidder. The algorithms prioritize content that generates more clicks, which is often inflammatory or fear-based, producing a surge of false or harmful information. The example given was Facebook's attempt to automate trending topics, which resulted in the promotion of fake news articles. With millions of advertisers and trillions of ad combinations running daily, the machines have no way to determine whether the ads are true, good, or helpful to society, creating an unmanageable problem that breeds societal confusion and division.
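      A minimal illustration of that last point, with invented items and numbers: an engagement-maximizing ranker simply has no input for truth.

```python
# Hedged sketch: the ranking objective below optimizes clicks only, so truth
# never enters the sort key. Ads and scores are invented for illustration.
from dataclasses import dataclass

@dataclass
class Ad:
    text: str
    predicted_clicks: float  # what the system optimizes
    is_true: bool            # invisible to the objective below

inventory = [
    Ad("Shocking claim about a candidate!", predicted_clicks=0.09, is_true=False),
    Ad("Dry but accurate policy explainer", predicted_clicks=0.01, is_true=True),
]

ranked = sorted(inventory, key=lambda ad: ad.predicted_clicks, reverse=True)
print(ranked[0].text)  # the inflammatory falsehood wins every time
```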

    • Understanding the limitations of AI in making moral decisions: AI can't understand context, morality, or societal impact, so creating trustworthy moral decision-making capabilities is crucial to ensuring the safe and ethical use of AI in politics and other areas.

      While machines can process vast amounts of data and make decisions based on patterns, they lack the ability to understand context, morality, or societal impact. The push for automation and machine decision-making in areas like politics and children's health comes down to profitability, but if we deny machines the capacity to make critically important decisions, we limit their use. The challenge is to create trustworthy trending, or moral decision-making, capabilities for AI, which could serve as a "unit test" for its ability to approximate good human decision-making. The Cambridge Analytica scandal, which involved manipulating political campaigns in over 50 countries, illustrates the potential dangers of unchecked machine decision-making in politics. It's crucial to recognize these limitations and work toward creating safe and ethical AI systems.
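      The "unit test" framing can be made literal. In this hedged sketch, the ranker and the fixture are invented; the point is that a click-maximizing stand-in, like the trending systems described above, fails a human-judgment test.

```python
# Hedged sketch of a "unit test" for machine judgment. Both the ranker and
# the fixture are hypothetical.
def select_trending(items):
    return max(items, key=lambda item: item["clicks"])  # clicks only, no vetting

def test_trending_rejects_known_fabrication():
    items = [
        {"title": "Fabricated celebrity death hoax", "clicks": 9000, "vetted": False},
        {"title": "Verified local election results", "clicks": 500, "vetted": True},
    ]
    # This assertion fails for the click-maximizing ranker, which is exactly
    # the gap described: the objective contains no notion of trustworthiness.
    assert select_trending(items)["vetted"], "ranker promoted an unvetted story"
```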

    • Manipulating voter behavior through psychological operations research: By identifying people's motivations, cultural backgrounds, and levers of persuasion, psychological operations research can create targeted campaigns that significantly influence voter behavior at scale.

      Psychological operations (psyops) research, used by companies like Cambridge Analytica, can significantly influence voter behavior on a large scale. This research identifies people's motivations, cultural backgrounds, and levers of persuasion in order to create targeted campaigns. In Trinidad and Tobago, for instance, a youth movement called "Do So" was constructed to discourage voting, particularly among persuadable youth. The movement spread through memes, videos, graffiti, and demonstrations, leading to a significant decrease in youth voter turnout. The tactic had an asymmetric effect, however: Indo-Trinidadian youth, who were culturally inclined to vote with their families, still went to the polls, ensuring the winning party's victory. This example illustrates how psychological manipulation can spill into the real world, with significant political consequences. Additionally, the SCL Group, Cambridge Analytica's parent company, has been involved in destabilizing governments and installing corrupt leaders in countries like Indonesia. The unifying marker of these manipulative movements is a precise, rapid message that quickly turns into protests, often spreading faster than organic movements.

    • The power of memes and organic movements: Memes and organic movements can drive rapid mobilization and widespread impact, but ethical considerations are crucial to prevent manipulation and respect individuals' values.

      The power of memes and organic movements can lead to rapid mobilization and widespread impact, but ethical considerations are crucial to ensure respect for individuals' values and prevent manipulation. The example of Trinidad and Tobago's crossing hands meme illustrates this, as it spread virally and inspired various forms of expression without external influence. However, the conversation around ethical persuasion raises concerns about the potential for persuaders to impose their values on the persuadees, leading to a loss of individual autonomy. This issue is particularly relevant in the context of social media platforms, where the lack of legal and regulatory frameworks and the absence of consistent enforcement of content standards create opportunities for abuse. The situation is further complicated by the fact that political figures, including those with a history of spreading disinformation, are not held to the same standards as individuals. As we look ahead to 2020, it is essential that technologists and policymakers address these challenges and find ways to promote ethical persuasion and protect individuals from harmful content.

    • Balancing Free Speech and Productive Political Discourse on Social Media: Platforms must address the misuse and manipulation of political messaging while ensuring equal opportunities for users, which will require significant changes in laws, regulation, education, and technology. Growing reliance on AI raises privacy and manipulation concerns, demanding responsibility from platforms and individuals alike.

      Social media platforms are at a crossroads, with Facebook and Twitter currently allowing unfettered political messaging while the potential for misuse and manipulation is a significant concern. A potential solution suggested was the implementation of a "mass fairness doctrine," where each platform would provide equal speaking opportunities for users, ensuring political discourse remains productive and free from manipulation. However, this requires significant changes in laws, regulations, education, and technology. Another concerning issue is the increasing reliance on AI for communication and information, which could lead to privacy concerns and the potential for manipulation by those with malicious intentions. It's crucial for social media platforms to take responsibility for constructing the social world they've created and to introduce measures that promote fairness, transparency, and consent. Additionally, individuals must prioritize owning their data and receiving proper education to navigate the digital landscape effectively.

    • Manipulating messages with machine learning: Machine learning can learn and mimic individual writing styles to create persuasive, targeted messages, with potentially divisive social consequences.

      Machine learning technology, including style transfer for text, has the potential to create personalized messages that can manipulate individuals, leading to a socially divisive and unsustainable business model. This technology, which can learn and mimic an individual's writing style, can be used to create targeted messages that are uniquely persuasive to each person. This is akin to Iago in Shakespeare's Othello, who strategically gossips to manipulate people and create distrust. If left unchecked, this technology could divide society by spreading misinformation and making it impossible for individuals to compare notes and realize the artificial nature of the divide. Therefore, it is crucial to be aware of this technology and its potential dangers, and to advocate for transparency and ethical use of such advanced AI capabilities.
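      As a toy illustration of imitation-from-examples, here is a word-bigram Markov model fit on an invented writing sample. Modern systems use large language models, but the mechanic of learning someone's style from their own text is the same in spirit.

```python
# Hedged sketch of "learning a writing style": a word-bigram Markov chain
# fit on a hypothetical sample of one person's text.
import random
from collections import defaultdict

sample = ("honestly I just think we need to be careful "
          "honestly I think we need facts not fear")
words = sample.split()

chain = defaultdict(list)
for prev, nxt in zip(words, words[1:]):
    chain[prev].append(nxt)  # record which word follows which

def mimic(seed: str, length: int = 8) -> str:
    out = [seed]
    for _ in range(length):
        followers = chain.get(out[-1])
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

print(mimic("honestly"))  # text that echoes the sample's cadence
```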

    Recent Episodes from Your Undivided Attention

    Why Are Migrants Becoming AI Test Subjects? With Petra Molnar

    Climate change, political instability, hunger. These are just some of the forces behind an unprecedented refugee crisis that’s expected to include over a billion people by 2050. In response to this growing crisis, wealthy governments like the US and the EU are employing novel AI and surveillance technologies to slow the influx of migrants at their borders. But will this rollout stop at the border?

    In this episode, Tristan and Aza sit down with Petra Molnar to discuss how borders have become a proving ground for the sharpest edges of technology, and especially AI. Petra is an immigration lawyer and co-creator of the Migration and Technology Monitor. Her new book is “The Walls Have Eyes: Surviving Migration in the Age of Artificial Intelligence.”

    RECOMMENDED MEDIA

    The Walls Have Eyes: Surviving Migration in the Age of Artificial Intelligence

    Petra’s newly published book on the rollout of high risk tech at the border.

    Bots at the Gate

    A report co-authored by Petra about Canada’s use of AI technology in their immigration process.

    Technological Testing Grounds

    A report authored by Petra about the use of experimental technology in EU border enforcement.

    Startup Pitched Tasing Migrants from Drones, Video Reveals

    An article from The Intercept, containing the demo for Brinc’s taser drone pilot program.

    The UNHCR

    Information about the global refugee crisis from the UN.

    RECOMMENDED YUA EPISODES

    War is a Laboratory for AI with Paul Scharre

    No One is Immune to AI Harms with Dr. Joy Buolamwini

    Can We Govern AI? With Marietje Schaake

    CLARIFICATION:

    The iBorderCtrl project referenced in this episode was a pilot project that was discontinued in 2019.

    Former OpenAI Engineer William Saunders on Silence, Safety, and the Right to Warn

    This week, a group of current and former employees from OpenAI and Google DeepMind penned an open letter accusing the industry's leading companies of prioritizing profits over safety. This comes after a spate of high-profile departures from OpenAI, including co-founder Ilya Sutskever and senior researcher Jan Leike, as well as reports that OpenAI has gone to great lengths to silence would-be whistleblowers.

    The writers of the open letter argue that researchers have a “right to warn” the public about AI risks and laid out a series of principles that would protect that right. In this episode, we sit down with one of those writers: William Saunders, who left his job as a research engineer at OpenAI in February. William is now breaking the silence on what he saw at OpenAI that compelled him to leave the company and to put his name to this letter. 

    RECOMMENDED MEDIA 

    The Right to Warn Open Letter 

    My Perspective On "A Right to Warn about Advanced Artificial Intelligence": A follow-up from William about the letter

     Leaked OpenAI documents reveal aggressive tactics toward former employees: An investigation by Vox into OpenAI’s policy of non-disparagement.

    RECOMMENDED YUA EPISODES

    1. A First Step Toward AI Regulation with Tom Wheeler 
    2. Spotlight on AI: What Would It Take For This to Go Well? 
    3. Big Food, Big Tech and Big AI with Michael Moss 
    4. Can We Govern AI? with Marietje Schaake

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

    War is a Laboratory for AI with Paul Scharre

    Right now, militaries around the globe are investing heavily in the use of AI weapons and drones. From Ukraine to Gaza, weapons systems with increasing levels of autonomy are being used to kill people and destroy infrastructure, and the development of fully autonomous weapons shows little sign of slowing down. What does this mean for the future of warfare? What safeguards can we put up around these systems? And is this runaway trend toward autonomous warfare inevitable, or will nations come together and choose a different path? In this episode, Tristan and Daniel sit down with Paul Scharre to try to answer some of these questions. Paul is a former Army Ranger, the author of two books on autonomous weapons, and he helped the Department of Defense write much of its policy on the use of AI in weaponry.

    RECOMMENDED MEDIA

    Four Battlegrounds: Power in the Age of Artificial Intelligence: Paul’s book on the future of AI in war, which came out in 2023.

    Army of None: Autonomous Weapons and the Future of War: Paul’s 2018 book documenting and predicting the rise of autonomous and semi-autonomous weapons as part of modern warfare.

    The Perilous Coming Age of AI Warfare: How to Limit the Threat of Autonomous Warfare: Paul’s article in Foreign Affairs based on his recent trip to the battlefield in Ukraine.

    The night the world almost almost ended: A BBC documentary about Stanislav Petrov’s decision not to start nuclear war.

    AlphaDogfight Trials Final Event: The full simulated dogfight between an AI and human pilot. The AI pilot swept, 5-0.

    RECOMMENDED YUA EPISODES

    1. The AI ‘Race’: China vs. the US with Jeffrey Ding and Karen Hao
    2. Can We Govern AI? with Marietje Schaake
    3. Big Food, Big Tech and Big AI with Michael Moss
    4. The Invisible Cyber-War with Nicole Perlroth


    AI and Jobs: How to Make AI Work With Us, Not Against Us With Daron Acemoglu

    Tech companies say that AI will lead to massive economic productivity gains. But as we know from the first digital revolution, that’s not what happened. Can we do better this time around?

    RECOMMENDED MEDIA

    Power and Progress by Daron Acemoglu and Simon Johnson: Professor Acemoglu co-authored a bold reinterpretation of economics and history that will fundamentally change how you see the world

    Can we Have Pro-Worker AI? Professor Acemoglu co-authored this paper about redirecting AI development onto the human-complementary path

    Rethinking Capitalism: In Conversation with Daron Acemoglu: The Wheeler Institute for Business and Development hosted Professor Acemoglu to examine how technology affects the distribution and growth of resources while being shaped by economic and social incentives

    RECOMMENDED YUA EPISODES

    1. The Three Rules of Humane Tech
    2. The Tech We Need for 21st Century Democracy
    3. Can We Govern AI?
    4. An Alternative to Silicon Valley Unicorns


    Jonathan Haidt On How to Solve the Teen Mental Health Crisis

    Suicides. Self harm. Depression and anxiety. The toll of a social media-addicted, phone-based childhood has never been more stark. It can be easy for teens, parents and schools to feel like they’re trapped by it all. But in this conversation with Tristan Harris, author and social psychologist Jonathan Haidt makes the case that the conditions that led to today’s teenage mental health crisis can be turned around – with specific, achievable actions we all can take starting today.

    This episode was recorded live at the San Francisco Commonwealth Club.  

    Correction: Tristan mentions that 40 Attorneys General have filed a lawsuit against Meta for allegedly fostering addiction among children and teens through their products. However, the actual number is 42 Attorneys General who are taking legal action against Meta.

    Clarification: Jonathan refers to the Wait Until 8th pledge. By signing the pledge, a parent  promises not to give their child a smartphone until at least the end of 8th grade. The pledge becomes active once at least ten other families from their child’s grade pledge the same.

    Chips Are the Future of AI. They’re Also Incredibly Vulnerable. With Chris Miller

    Beneath the race to train and release more powerful AI models lies another race: a race by companies and nation-states to secure the hardware to make sure they win AI supremacy. 

    Correction: The latest available Nvidia chip is the Hopper H100 GPU, which has 80 billion transistors. Since the first commercially available chip had four transistors, the Hopper actually has 20 billion times that number. Nvidia recently announced the Blackwell, which boasts 208 billion transistors - but it won’t ship until later this year.

    RECOMMENDED MEDIA 

    Chip War: The Fight For the World’s Most Critical Technology by Chris Miller

    To make sense of the current state of politics, economics, and technology, we must first understand the vital role played by chips

    Gordon Moore Biography & Facts

    Gordon Moore, the Intel co-founder behind Moore's Law, passed away in March of 2023

    AI’s most popular chipmaker Nvidia is trying to use AI to design chips faster

    Nvidia's GPUs are in high demand - and the company is using AI to accelerate chip production

    RECOMMENDED YUA EPISODES

    Future-proofing Democracy In the Age of AI with Audrey Tang

    How Will AI Affect the 2024 Elections? with Renee DiResta and Carl Miller

    The AI ‘Race’: China vs. the US with Jeffrey Ding and Karen Hao

    Protecting Our Freedom of Thought with Nita Farahany


    Future-proofing Democracy In the Age of AI with Audrey Tang

    What does a functioning democracy look like in the age of artificial intelligence? Could AI even be used to help a democracy flourish? Just in time for election season, Taiwan’s Minister of Digital Affairs Audrey Tang returns to the podcast to discuss healthy information ecosystems, resilience to cyberattacks, how to “prebunk” deepfakes, and more. 

    RECOMMENDED MEDIA 

    Testing Theories of American Politics: Elites, Interest Groups, and Average Citizens by Martin Gilens and Benjamin I. Page

    This academic paper addresses tough questions for Americans: Who governs? Who really rules? 

    Recursive Public

    Recursive Public is an experiment in identifying areas of consensus and disagreement among the international AI community, policymakers, and the general public on key questions of governance

    A Strong Democracy is a Digital Democracy

    Audrey Tang’s 2019 op-ed for The New York Times

    The Frontiers of Digital Democracy

    Nathan Gardels interviews Audrey Tang in Noema

    RECOMMENDED YUA EPISODES 

    Digital Democracy is Within Reach with Audrey Tang

    The Tech We Need for 21st Century Democracy with Divya Siddarth

    How Will AI Affect the 2024 Elections? with Renee DiResta and Carl Miller

    The AI Dilemma


    U.S. Senators Grilled Social Media CEOs. Will Anything Change?

    Was it political progress, or just political theater? The recent Senate hearing with social media CEOs led to astonishing moments — including Mark Zuckerberg’s public apology to families who lost children following social media abuse. Our panel of experts, including Facebook whistleblower Frances Haugen, untangles the explosive hearing, and offers a look ahead, as well. How will this hearing impact protocol within these social media companies? How will it impact legislation? In short: will anything change?

    Clarification: Julie says that shortly after the hearing, Meta’s stock price had the biggest increase of any company in the stock market’s history. It was the biggest one-day gain by any company in Wall Street history.

    Correction: Frances says it takes Snap three or four minutes to take down exploitative content. In Snap's most recent transparency report, they list six minutes as the median turnaround time to remove exploitative content.

    RECOMMENDED MEDIA 

    Get Media Savvy

    Founded by Julie Scelfo, Get Media Savvy is a non-profit initiative working to establish a healthy media environment for kids and families

    The Power of One by Frances Haugen

    The inside story of Frances Haugen's quest to bring transparency and accountability to Big Tech

    RECOMMENDED YUA EPISODES

    Real Social Media Solutions, Now with Frances Haugen

    A Conversation with Facebook Whistleblower Frances Haugen

    Are the Kids Alright?

    Social Media Victims Lawyer Up with Laura Marquez-Garrett


    Taylor Swift is Not Alone: The Deepfake Nightmare Sweeping the Internet

    Over the past year, a tsunami of apps that digitally strip the clothes off real people has hit the market. Now anyone can create fake non-consensual sexual images in just a few clicks. With cases proliferating in high schools, guest presenter Laurie Segall talks to legal scholar Mary Anne Franks about the AI-enabled rise in deepfake porn and what we can do about it.

    Correction: Laurie refers to the app 'Clothes Off.' It’s actually named Clothoff. There are many clothes remover apps in this category.

    RECOMMENDED MEDIA 

    Revenge Porn: The Cyberwar Against Women

    In a five-part digital series, Laurie Segall uncovers a disturbing internet trend: the rise of revenge porn

    The Cult of the Constitution

    In this provocative book, Mary Anne Franks examines the thin line between constitutional fidelity and constitutional fundamentalism

    Fake Explicit Taylor Swift Images Swamp Social Media

    Calls to protect women and crack down on the platforms and technology that spread such images have been reignited

    RECOMMENDED YUA EPISODES 

    No One is Immune to AI Harms

    Esther Perel on Artificial Intimacy

    Social Media Victims Lawyer Up

    The AI Dilemma


    Can Myth Teach Us Anything About the Race to Build Artificial General Intelligence? With Josh Schrei

    We usually talk about tech in terms of economics or policy, but the casual language tech leaders often use to describe AI — summoning an inanimate force with the powers of code — sounds more... magical. So, what can myth and magic teach us about the AI race? Josh Schrei, mythologist and host of The Emerald podcast,  says that foundational cultural tales like "The Sorcerer's Apprentice" or Prometheus teach us the importance of initiation, responsibility, human knowledge, and care.  He argues these stories and myths can guide ethical tech development by reminding us what it is to be human. 

    Correction: Josh says the first telling of "The Sorcerer’s Apprentice" myth dates back to ancient Egypt, but it actually dates back to ancient Greece.

    RECOMMENDED MEDIA 

    The Emerald podcast

    The Emerald explores the human experience through a vibrant lens of myth, story, and imagination

    Embodied Ethics in The Age of AI

    A five-part course with The Emerald podcast’s Josh Schrei and School of Wise Innovation’s Andrew Dunn

    Nature Nurture: Children Can Become Stewards of Our Delicate Planet

    A U.S. Department of the Interior study found that the average American kid can identify hundreds of corporate logos but not plants and animals

    The New Fire

    AI is revolutionizing the world - here's how democracies can come out on top. This upcoming book was authored by an architect of President Biden's AI executive order

    RECOMMENDED YUA EPISODES 

    How Will AI Affect the 2024 Elections?

    The AI Dilemma

    The Three Rules of Humane Tech

    AI Myths and Misconceptions

     


    Related Episodes

    A Conversation with Facebook Whistleblower Frances Haugen

    We are now in social media's Big Tobacco moment. And that’s largely thanks to the courage of one woman: Frances Haugen.

    Frances is a specialist in algorithmic product management. She worked at Google, Pinterest, and Yelp before joining Facebook — first as a Product Manager on Civic Misinformation, and then on the Counter-Espionage team. But what she saw at Facebook was that the company consistently and knowingly prioritized profits over public safety. So Frances made the courageous decision to blow the whistle — which resulted in the biggest disclosure in the history of Facebook, and in the history of social media.

    In this special interview, co-hosts Tristan and Aza go behind the headlines with Frances herself. We go deeper into the problems she exposed, discuss potential solutions, and explore her motivations — along with why she fundamentally believes change is possible. We also announce an exciting campaign being launched by the Center for Humane Technology — to use this window of opportunity to make Facebook safer.

    Stranger than Fiction — with Claire Wardle

    How can tech companies help flatten the curve? First and foremost, they must address the lethal misinformation and disinformation circulating on their platforms. The problem goes much deeper than fake news, according to Claire Wardle, co-founder and executive director of First Draft. She studies the gray zones of information warfare, where bad actors mix facts with falsehoods, news with gossip, and sincerity with satire. “Most of this stuff isn't fake and most of this stuff isn't news,” Claire argues. If these subtler forms of misinformation go unaddressed, tech companies may not only fail to flatten the curve — they could raise it higher. 


    Disinformation Then and Now — with Camille François

    Disinformation researchers have been fighting two battles over the last decade: one to combat and contain harmful information, and one to convince the world that these manipulations have an offline impact that requires complex, nuanced solutions. Camille François, Chief Information Officer at the cybersecurity company Graphika and an affiliate of the Harvard Berkman Klein Center for Internet & Society, believes that our common understanding of the problem has recently reached a new level. In this interview, she catalogues the key changes she observed between studying Russian interference in the 2016 U.S. election and helping convene and operate the Election Integrity Partnership watchdog group before, during and after the 2020 election. “I'm optimistic, because I think that things that have taken quite a long time to land are finally landing, and because I think that we do have a diverse set of expertise at the table,” she says. Camille and Tristan Harris dissect the challenges and talk about the path forward to a healthy information ecosystem.

    Mr. Harris Goes to Washington

    What difference does a few hours of Congressional testimony make? Tristan takes us behind the scenes of his January 8th testimony to the Energy and Commerce Committee on disinformation in the digital age. With just minutes to answer each lawmaker's questions, he speaks with Committee members about the immense challenge posed by the urgency and complexity of humane technology issues. Tristan returned hopeful; though it sometimes feels like Groundhog Day, each trip to DC reveals evolving conversations, advancing legislation, deeper understanding, and stronger coalitions.