
    Podcast Summary

    • Impact of Disinformation on Elections and Activism: Disinformation on social media can negatively impact elections and activism, leading to increased support for far-right candidates and hindering the ability to organize for causes such as climate change, human rights, and vaccinations. Addressing disinformation is crucial for creating a safer and more informed online environment.

      Disinformation on social media can significantly impact elections and activism, as seen in Brazil's 2018 elections. Activists like Fadi Quran of Avaaz have warned about the detrimental effects of disinformation on their ability to organize around causes including climate change, human rights, and vaccinations. Although most Brazilians initially said they would not vote for the far-right candidate Bolsonaro, the social media environment became increasingly toxic, leading more people to support him. The reach of disinformation and its harms are global, and while some progress has been made in addressing these issues, more action is still needed from those with the power to reduce harm and ensure safety online. The interview with Fadi provides valuable insights into the impact of social media on activism and the importance of addressing disinformation to create a better world.

    • The Dangerous Consequences of False Information on Social Media: False information on social media can incite violence and radicalize individuals, leading to dire consequences for both individuals and humanity as a whole.

      Social media platforms like Facebook can have dangerous consequences when used to spread false information and incite violence. This was evident in several cases discussed, including the Brazilian elections, where 89% of Bolsonaro voters believed one of the top ten false stories about his opponent. The issue is not limited to specific communities or regions; it can also affect individuals who become radicalized or socially isolated. The consequences can be dire, not only for individuals but for humanity as a whole, making it harder to find intelligent solutions to complex issues such as climate change. The existential threat comes both from the direct harm caused to individuals and from the potential for radicalization, which can lead to decisions that threaten the future of humanity, such as the destruction of the Amazon rainforest. It's crucial that we understand the impact of social media on information dissemination and take steps to curb the spread of false information and promote factual, nuanced discussion.

    • Malicious use of social media in Brazil's elections: Social media disinformation campaigns, reaching over 12 million views, influenced Brazil's elections by shifting public opinion and contributing to Bolsonaro's win. Strategic use of multiple platforms allowed false narratives to spread, leading to harmful policies.

      Social media platforms, when used maliciously, can significantly influence elections and shape public opinion. In the case of Brazil, malicious actors used coordinated disinformation campaigns on multiple platforms, including Facebook, YouTube, and WhatsApp, reaching over 12 million views. These false narratives, believed by 89% of Bolsonaro voters in a survey, helped shift the election narrative and contributed to his win. The actors used different platforms strategically, testing and spreading disinformation on Twitter, saturating Facebook groups, and using WhatsApp for mass spamming. This vicious loop allowed for the rise of a far-right leader like Bolsonaro, who then implemented policies harmful to the environment, women's rights, and indigenous communities. The vulnerability to such disinformation campaigns is not limited to certain regions; all societies are at risk, especially as younger generations increasingly rely on social media for news. It's crucial to address this issue collectively and rebuild trust in institutions and media to counteract the negative impacts of toxic social media.

    • Brazil's digital dark age: Misinformation and lack of trust in media sources. The spread of fake news and radicalization through social media in Brazil can lead to violence and a breakdown of trust in society, exacerbated by addictive platforms and recommendation algorithms. Addressing this issue requires promoting media literacy, fact-checking, and trust in reliable sources of information.

      The proliferation of misinformation and lack of trust in traditional media sources in Brazil, particularly in rural areas, has led to a dangerous situation where people turn to social media for news. This can result in the spread of fake news and radicalization, which can have serious consequences, including violence and a breakdown of trust in society as a whole. The addictive nature of social media platforms and their recommendation algorithms can further exacerbate this problem, leading people to engage with more and more extreme content. This issue is not unique to Brazil and is becoming increasingly prevalent in other countries, including the US. The consequences of this digital dark age can be far-reaching, from the inability to have sophisticated conversations and take powerful positive actions, to the spread of diseases and other real-world harms. It is crucial that we address this issue and find ways to promote media literacy, fact-checking, and trust in reliable sources of information.

    • The Complexity of Addressing Misinformation and Hate Online: Social media's impact on bullying and violence is more complex than assumed, with victims often targeted both online and offline, and communities without technology being especially vulnerable. Tech companies need to bridge the gap between their understanding and the reality to create a safer online space for all.

      While social media platforms like Facebook and Twitter have teams to address misinformation and hateful content, the issue is much more complex. Communities without access to technology or social media are often targeted online and cannot respond or report the harmful content. This creates a cycle of bullying and violence, with victims being attacked both online and offline. The assumption that social media provides a level playing field for free expression is misleading, as the privilege of being online is skewed towards bullies. The environment is increasingly favoring authoritarian powers through various means, including fake accounts and political advertising. When presenting these issues to tech companies, there's a disconnect between their understanding of their platforms' impact and the reality. However, it's important to remember that most employees at these companies are not inherently bad. The goal is to bridge this gap and work towards creating a more equitable and safe online space for all.

    • Stories of harassment and harm caused by disinformation: Companies need to prioritize correcting records, detoxing algorithms, and creating humane interfaces to combat disinformation and heal the world through human connection and authentic communication.

      Human connection and the healing power of authentic communication are under attack due to disinformation and toxic online environments. Victims of disinformation, including Lenny Pozner, whose son was killed at Sandy Hook, Trevor Noah, Jessikka Aro, and Ethan Lindenberger, shared their stories of the harassment and harm caused by false information spread on social media platforms. These stories moved executives, designers, and engineers at companies like Twitter, Facebook, Google, and YouTube, leading to more stringent action against disinformation. However, the decisions to address these issues are not being made by all employees, and executives are often hesitant out of fear of political repercussions or antitrust action. To combat this, it's essential to bring these stories to the right people, including CEOs and policymakers, and encourage companies to prioritize correcting records, detoxing algorithms, and creating more humane interfaces. By doing so, we can work toward healing the world through human connection and authentic communication.

    • Bringing those affected by disinformation closer to the decision-making process: Tech companies are committing to correcting disinformation and detoxifying algorithms to prioritize human rights and stop recommending harmful content, aiming to decrease belief in misinformation and positively impact society.

      Tech companies need to bring people most affected by disinformation and hate on their platforms closer to the decision-making process. This will help close the loop between decisions made and their far-reaching impacts. During recent meetings, progress was made, with promises from platforms like Facebook, YouTube, and Twitter to roll out stronger solutions. However, more action is needed, especially with the US 2020 elections approaching. The key asks from these meetings were centered around two goals: correcting the record and detoxifying the algorithm. The former involves alerting users who have been exposed to disinformation and providing them with well-designed corrections. The latter is about redesigning algorithms to prioritize human rights and stop recommending harmful content. These measures could significantly decrease the belief in misinformation and make a positive impact on society.

    • Provide factual alternatives and repeat corrections to counteract misinformation: Platforms must provide accurate information and repeat corrections to combat misinformation's impact on users' memory, tailored to the audience to avoid backfiring.

      While platforms like YouTube have a responsibility to protect users from harmful content, the solution goes beyond just removing the content or stopping recommendations. Instead, it's crucial to provide factual alternatives and repeat corrections more often than the misinformation to counteract the impact on users' automatic memory. Additionally, corrections should be tailored to the audience to avoid backfiring. Ultimately, the challenge lies in understanding human behavior and designing systems that foster good and decent interactions, rather than allowing unregulated systems to be dominated by bad actors. The power of human connection and decency is essential, but it requires intentional design to overcome the negative effects of unchecked social media.

    • Consequences of Social Media on Society and the Next Generation: Social media's impact on society and the next generation includes decreased critical thinking, polarization, extremism, and a degradation of democracy. Tech companies' business models contribute to false consciousness. We need collective action to reverse this trend and create a future where children can develop sovereign minds and intergenerational wisdom.

      The current state of social media is having profound and unpredictable consequences on our society, particularly on the next generation. These consequences include decreased critical thinking skills, polarization, extremism, and a degradation of democracy. The technology companies, through their business models, have contributed to this false consciousness that is pervasive among users. To reverse this trend, we need a collective effort to name and reverse the false consciousness created by social media. This could involve notifications to users about the false realities they have been exposed to, and a commitment to creating a future where children are free to develop sovereign minds and intergenerational wisdom. It's on all of us to fight back against this and create a better future for ourselves and future generations.

    • Protecting democratic processes from disinformation: Policymakers must demand accountability from platforms to protect democratic processes from disinformation, starting with the upcoming US elections.

      Moving from the harms of disinformation and online abuse toward mass healing requires a multi-faceted approach. This includes smart, deliberative regulations that create transparency and accountability for platforms, as well as a collective vision of truth and reconciliation. The upcoming US elections present a pivotal moment in which the democratic process can be protected from disinformation and interference. Policymakers and platform executives hold significant power in this fight, and their voices and actions are crucial in defending the fabric of democratic societies. The scale of the disinformation problem is immense, with fake news stories drawing millions of views, surpassing the reach of the registered voting population in the US. To create a future where people go to the polls armed with facts, policymakers must begin speaking up and demanding accountability from the platforms. The EU is moving toward regulating disinformation seriously, and the US needs to follow suit. The stakes are high, and the time for action is now.

    • Protecting Humanity from Tech-Driven Degradation: Political will and motivation from a few key players, particularly the US government, can address the issue of human degradation through technology. Raising awareness and supporting campaigns can help bring about change.

      The issue of human degradation through technology, specifically the extractive business model of automating and manipulating human attention with AI at scale on major platforms, can feel overwhelming and hopeless due to the large number of companies and governments involved. However, unlike climate change, this problem is more tractable as it only requires political will and motivation from a handful of people and governments to make significant changes. The US, being the location of many tech companies and having the most influence, could lead the way in implementing policies to correct the issue and protect humanity. The future of our civilization depends on acknowledging that everyone loses if we continue down this path, and it's not a matter of one political side winning and the other losing. Instead, it's about ensuring the viability of human civilization. To make this change happen, it's crucial to raise awareness and support organizations and campaigns working towards this goal. Each one of us has a role to play in making a difference.

    • The Difference Between Freedom of Speech and Freedom of Reach: Recognizing the distinction between freedom of speech and freedom of reach in technology is crucial for creating a future where tech enhances humanity rather than detracts from it. The Center for Humane Technology's podcast, "Your Undivided Attention," is working to shift the narrative in a positive direction with insights from Fadi Quran and team.

      The key takeaway from this episode is the importance of recognizing the difference between freedom of speech and freedom of reach in the context of technology. Fadi Quran and the team at the Center for Humane Technology have been instrumental in bringing this issue to light and in encouraging a conversation about how technology can enhance humanity rather than detract from it. The team's insights and key talking points have helped shift the narrative in a positive direction. It's crucial that we continue working together to address this problem and create a future where technology serves as an addition to humanity, not a subtraction. The Center for Humane Technology's podcast, "Your Undivided Attention," is made possible by the support of generous lead supporters, including the Omidyar Network, the Gerald Schwartz and Heather Reisman Foundation, the Patrick J. McGovern Foundation, Evolve Foundation, Craig Newmark Philanthropies, and Knight Foundation.

    Recent Episodes from Your Undivided Attention

    Why Are Migrants Becoming AI Test Subjects? With Petra Molnar

    Climate change, political instability, hunger. These are just some of the forces behind an unprecedented refugee crisis that’s expected to include over a billion people by 2050. In response to this growing crisis, wealthy governments like the US and the EU are employing novel AI and surveillance technologies to slow the influx of migrants at their borders. But will this rollout stop at the border?

    In this episode, Tristan and Aza sit down with Petra Molnar to discuss how borders have become a proving ground for the sharpest edges of technology, and especially AI. Petra is an immigration lawyer and co-creator of the Migration and Technology Monitor. Her new book is “The Walls Have Eyes: Surviving Migration in the Age of Artificial Intelligence.”

    RECOMMENDED MEDIA

    The Walls Have Eyes: Surviving Migration in the Age of Artificial Intelligence

    Petra’s newly published book on the rollout of high risk tech at the border.

    Bots at the Gate

    A report co-authored by Petra about Canada’s use of AI technology in their immigration process.

    Technological Testing Grounds

    A report authored by Petra about the use of experimental technology in EU border enforcement.

    Startup Pitched Tasing Migrants from Drones, Video Reveals

    An article from The Intercept, containing the demo for Brinc’s taser drone pilot program.

    The UNHCR

    Information about the global refugee crisis from the UN.

    RECOMMENDED YUA EPISODES

    War is a Laboratory for AI with Paul Scharre

    No One is Immune to AI Harms with Dr. Joy Buolamwini

    Can We Govern AI? With Marietje Schaake

    CLARIFICATION:

    The iBorderCtrl project referenced in this episode was a pilot project that was discontinued in 2019.

    Former OpenAI Engineer William Saunders on Silence, Safety, and the Right to Warn

    This week, a group of current and former employees from OpenAI and Google DeepMind penned an open letter accusing the industry’s leading companies of prioritizing profits over safety. This comes after a spate of high-profile departures from OpenAI, including co-founder Ilya Sutskever and senior researcher Jan Leike, as well as reports that OpenAI has gone to great lengths to silence would-be whistleblowers.

    The writers of the open letter argue that researchers have a “right to warn” the public about AI risks, and they lay out a series of principles that would protect that right. In this episode, we sit down with one of those writers: William Saunders, who left his job as a research engineer at OpenAI in February. William is now breaking the silence on what he saw at OpenAI that compelled him to leave the company and to put his name to this letter.

    RECOMMENDED MEDIA 

    The Right to Warn Open Letter 

    My Perspective On "A Right to Warn about Advanced Artificial Intelligence": A follow-up from William about the letter

    Leaked OpenAI documents reveal aggressive tactics toward former employees: An investigation by Vox into OpenAI’s policy of non-disparagement.

    RECOMMENDED YUA EPISODES

    1. A First Step Toward AI Regulation with Tom Wheeler 
    2. Spotlight on AI: What Would It Take For This to Go Well? 
    3. Big Food, Big Tech and Big AI with Michael Moss 
    4. Can We Govern AI? with Marietje Schaake

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

    War is a Laboratory for AI with Paul Scharre

    Right now, militaries around the globe are investing heavily in AI weapons and drones. From Ukraine to Gaza, weapons systems with increasing levels of autonomy are being used to kill people and destroy infrastructure, and the development of fully autonomous weapons shows little sign of slowing down. What does this mean for the future of warfare? What safeguards can we put up around these systems? And is this runaway trend toward autonomous warfare inevitable, or will nations come together and choose a different path? In this episode, Tristan and Daniel sit down with Paul Scharre to try to answer some of these questions. Paul is a former Army Ranger and the author of two books on autonomous weapons, and he helped the Department of Defense write much of its policy on the use of AI in weaponry.

    RECOMMENDED MEDIA

    Four Battlegrounds: Power in the Age of Artificial Intelligence: Paul’s book on the future of AI in war, which came out in 2023.

    Army of None: Autonomous Weapons and the Future of War: Paul’s 2018 book documenting and predicting the rise of autonomous and semi-autonomous weapons as part of modern warfare.

    The Perilous Coming Age of AI Warfare: How to Limit the Threat of Autonomous Warfare: Paul’s article in Foreign Affairs based on his recent trip to the battlefield in Ukraine.

    The night the world almost ended: A BBC documentary about Stanislav Petrov’s decision not to start nuclear war.

    AlphaDogfight Trials Final Event: The full simulated dogfight between an AI and human pilot. The AI pilot swept, 5-0.

    RECOMMENDED YUA EPISODES

    1. The AI ‘Race’: China vs. the US with Jeffrey Ding and Karen Hao
    2. Can We Govern AI? with Marietje Schaake
    3. Big Food, Big Tech and Big AI with Michael Moss
    4. The Invisible Cyber-War with Nicole Perlroth

    AI and Jobs: How to Make AI Work With Us, Not Against Us With Daron Acemoglu

    Tech companies say that AI will lead to massive economic productivity gains. But as we know from the first digital revolution, that’s not what happened. Can we do better this time around?

    RECOMMENDED MEDIA

    Power and Progress by Daron Acemoglu and Simon Johnson

    Professor Acemoglu co-authored a bold reinterpretation of economics and history that will fundamentally change how you see the world

    Can We Have Pro-Worker AI?

    Professor Acemoglu co-authored this paper about redirecting AI development onto the human-complementary path

    Rethinking Capitalism: In Conversation with Daron Acemoglu

    The Wheeler Institute for Business and Development hosted Professor Acemoglu to examine how technology affects the distribution and growth of resources while being shaped by economic and social incentives

    RECOMMENDED YUA EPISODES

    1. The Three Rules of Humane Tech
    2. The Tech We Need for 21st Century Democracy
    3. Can We Govern AI?
    4. An Alternative to Silicon Valley Unicorns

    Jonathan Haidt On How to Solve the Teen Mental Health Crisis

    Suicides. Self harm. Depression and anxiety. The toll of a social media-addicted, phone-based childhood has never been more stark. It can be easy for teens, parents and schools to feel like they’re trapped by it all. But in this conversation with Tristan Harris, author and social psychologist Jonathan Haidt makes the case that the conditions that led to today’s teenage mental health crisis can be turned around – with specific, achievable actions we all can take starting today.

    This episode was recorded live at the San Francisco Commonwealth Club.  

    Correction: Tristan mentions that 40 Attorneys General have filed a lawsuit against Meta for allegedly fostering addiction among children and teens through their products. However, the actual number is 42 Attorneys General who are taking legal action against Meta.

    Clarification: Jonathan refers to the Wait Until 8th pledge. By signing the pledge, a parent  promises not to give their child a smartphone until at least the end of 8th grade. The pledge becomes active once at least ten other families from their child’s grade pledge the same.

    Chips Are the Future of AI. They’re Also Incredibly Vulnerable. With Chris Miller

    Beneath the race to train and release more powerful AI models lies another race: a race by companies and nation-states to secure the hardware to make sure they win AI supremacy. 

    Correction: The latest available Nvidia chip is the Hopper H100 GPU, which has 80 billion transistors. Since the first commercially available chip had four transistors, the Hopper actually has 20 billion times that number. Nvidia recently announced the Blackwell, which boasts 208 billion transistors - but it won’t ship until later this year.

    RECOMMENDED MEDIA 

    Chip War: The Fight For the World’s Most Critical Technology by Chris Miller

    To make sense of the current state of politics, economics, and technology, we must first understand the vital role played by chips

    Gordon Moore Biography & Facts

    Gordon Moore, the Intel co-founder behind Moore's Law, passed away in March of 2023

    AI’s most popular chipmaker Nvidia is trying to use AI to design chips faster

    Nvidia's GPUs are in high demand - and the company is using AI to accelerate chip production

    RECOMMENDED YUA EPISODES

    Future-proofing Democracy In the Age of AI with Audrey Tang

    How Will AI Affect the 2024 Elections? with Renee DiResta and Carl Miller

    The AI ‘Race’: China vs. the US with Jeffrey Ding and Karen Hao

    Protecting Our Freedom of Thought with Nita Farahany

    Future-proofing Democracy In the Age of AI with Audrey Tang

    What does a functioning democracy look like in the age of artificial intelligence? Could AI even be used to help a democracy flourish? Just in time for election season, Taiwan’s Minister of Digital Affairs Audrey Tang returns to the podcast to discuss healthy information ecosystems, resilience to cyberattacks, how to “prebunk” deepfakes, and more. 

    RECOMMENDED MEDIA 

    Testing Theories of American Politics: Elites, Interest Groups, and Average Citizens by Martin Gilens and Benjamin I. Page

    This academic paper addresses tough questions for Americans: Who governs? Who really rules? 

    Recursive Public

    Recursive Public is an experiment in identifying areas of consensus and disagreement among the international AI community, policymakers, and the general public on key questions of governance

    A Strong Democracy is a Digital Democracy

    Audrey Tang’s 2019 op-ed for The New York Times

    The Frontiers of Digital Democracy

    Nathan Gardels interviews Audrey Tang in Noema

    RECOMMENDED YUA EPISODES 

    Digital Democracy is Within Reach with Audrey Tang

    The Tech We Need for 21st Century Democracy with Divya Siddarth

    How Will AI Affect the 2024 Elections? with Renee DiResta and Carl Miller

    The AI Dilemma

    U.S. Senators Grilled Social Media CEOs. Will Anything Change?

    Was it political progress, or just political theater? The recent Senate hearing with social media CEOs led to astonishing moments — including Mark Zuckerberg’s public apology to families who lost children following social media abuse. Our panel of experts, including Facebook whistleblower Frances Haugen, untangles the explosive hearing, and offers a look ahead, as well. How will this hearing impact protocol within these social media companies? How will it impact legislation? In short: will anything change?

    Clarification: Julie says that shortly after the hearing, Meta’s stock price had the biggest increase of any company in the stock market’s history. It was the biggest one-day gain by any company in Wall Street history.

    Correction: Frances says it takes Snap three or four minutes to take down exploitative content. In Snap's most recent transparency report, they list six minutes as the median turnaround time to remove exploitative content.

    RECOMMENDED MEDIA 

    Get Media Savvy

    Founded by Julie Scelfo, Get Media Savvy is a non-profit initiative working to establish a healthy media environment for kids and families

    The Power of One by Frances Haugen

    The inside story of Frances Haugen’s quest to bring transparency and accountability to Big Tech

    RECOMMENDED YUA EPISODES

    Real Social Media Solutions, Now with Frances Haugen

    A Conversation with Facebook Whistleblower Frances Haugen

    Are the Kids Alright?

    Social Media Victims Lawyer Up with Laura Marquez-Garrett


    Taylor Swift is Not Alone: The Deepfake Nightmare Sweeping the Internet

    Over the past year, a tsunami of apps that digitally strip the clothes off real people has hit the market. Now anyone can create fake non-consensual sexual images in just a few clicks. With cases proliferating in high schools, guest presenter Laurie Segall talks to legal scholar Mary Anne Franks about the AI-enabled rise in deep fake porn and what we can do about it. 

    Correction: Laurie refers to the app 'Clothes Off.' It’s actually named Clothoff. There are many clothes remover apps in this category.

    RECOMMENDED MEDIA 

    Revenge Porn: The Cyberwar Against Women

    In a five-part digital series, Laurie Segall uncovers a disturbing internet trend: the rise of revenge porn

    The Cult of the Constitution

    In this provocative book, Mary Anne Franks examines the thin line between constitutional fidelity and constitutional fundamentalism

    Fake Explicit Taylor Swift Images Swamp Social Media

    Calls to protect women and crack down on the platforms and technology that spread such images have been reignited

    RECOMMENDED YUA EPISODES 

    No One is Immune to AI Harms

    Esther Perel on Artificial Intimacy

    Social Media Victims Lawyer Up

    The AI Dilemma

    Can Myth Teach Us Anything About the Race to Build Artificial General Intelligence? With Josh Schrei

    We usually talk about tech in terms of economics or policy, but the casual language tech leaders often use to describe AI — summoning an inanimate force with the powers of code — sounds more... magical. So, what can myth and magic teach us about the AI race? Josh Schrei, mythologist and host of The Emerald podcast,  says that foundational cultural tales like "The Sorcerer's Apprentice" or Prometheus teach us the importance of initiation, responsibility, human knowledge, and care.  He argues these stories and myths can guide ethical tech development by reminding us what it is to be human. 

    Correction: Josh says the first telling of "The Sorcerer’s Apprentice" myth dates back to ancient Egypt, but it actually dates back to ancient Greece.

    RECOMMENDED MEDIA 

    The Emerald podcast

    The Emerald explores the human experience through a vibrant lens of myth, story, and imagination

    Embodied Ethics in The Age of AI

    A five-part course with The Emerald podcast’s Josh Schrei and School of Wise Innovation’s Andrew Dunn

    Nature Nurture: Children Can Become Stewards of Our Delicate Planet

    A U.S. Department of the Interior study found that the average American kid can identify hundreds of corporate logos but not plants and animals

    The New Fire

    AI is revolutionizing the world; here's how democracies can come out on top. This forthcoming book was written by an architect of President Biden's AI executive order

    RECOMMENDED YUA EPISODES 

    How Will AI Affect the 2024 Elections?

    The AI Dilemma

    The Three Rules of Humane Tech

    AI Myths and Misconceptions


    Related Episodes

    China’s Response to the Israel-Hamas War

    On the morning of October 7, 2023, Hamas launched an unprovoked attack from the Gaza Strip, indiscriminately killing more than 1,400 Israeli and foreign nationals. Over 200 civilians, including women and children, were taken to Gaza as hostages. In response to this attack, as well as subsequent attacks launched from Lebanon and Syria, Israel began an unprecedented bombing campaign in Gaza and targeted Hezbollah and Syrian government military positions. The conflict is unlikely to end soon and may spread.

    While the conflict itself demands global attention, the focus of this podcast is Chinese foreign and security policy. This discussion focuses on China’s response to the war, China’s relations with Palestine and Israel, and the actions that Beijing might take in the coming weeks and months that could help defuse the conflict or cause it to worsen.

    To date, China has not condemned Hamas. Instead, it has criticized what it calls Israel’s disproportionate military response and the “collective punishment of the Gazan people.” Moreover, it has trumpeted its position as an unbiased potential mediator and called for a ceasefire and the implementation of a two-state solution.

    Host Bonnie Glaser is joined by Tuvia Gering, who, like many Israelis, has been activated to defend his country. Gering is a leading expert on China and its relations with the Middle East. In his civilian capacity, he is a researcher at the Diane & Guilford Glazer Foundation’s Israel-China Policy Center at the Institute for National Security Studies in Tel Aviv and a nonresident fellow for the Atlantic Council’s Global China Hub.

     

    Timestamps

    [02:25] China’s Past Relationships with Israel and Palestine

    [03:43] Reaction to the Chinese Response 

    [05:06] China’s Interests in Supporting Palestine

    [09:06] China’s Reaction to the Death of Chinese Citizens

    [10:55] Benefits of a Wider Conflict for China 

    [15:02] Comparisons to the War in Ukraine

    [17:54] China as a Mediator for the War

    [20:55] Antisemitism in Chinese Society

    [25:35] Outcome of the War for China

    Episode 40 The Sinister Reason Local Newspapers Are Disappearing, NFL Quarterback Rankings, & The Loveland Frog Man

    Topics discussed on this episode include going to the cardiologist, budgeting, President Biden's Ministry of Truth, a possible coup in Russia, CNN+ mistakenly sending gift baskets to employees they laid off, the sinister reason local newspapers are disappearing, incendiary comments made by Nick Saban, the NBA playoffs, NFL quarterback rankings, the senate hearing on UFOs, why whale sharks keep running into ships, whether or not demons are real, and the incredible story of the Loveland Frog Man.

    163. Rhino Borked Guy

    Provoked by current events, we've got three political eponyms for turmoiled times. Get ready for explosives, presidential pigs, Supreme Court scrapping, and wronged rhinos.

    Content note: there is some description of torture about halfway through the episode.

    Find out more about this episode and get extra information about the topics therein at theallusionist.org/rhino, where there's also a transcript.

    The Allusionist's online home is theallusionist.org. Stay in touch at twitter.com/allusionistshow, facebook.com/allusionistshow and instagram.com/allusionistshow.

    The Allusionist is produced by me, Helen Zaltzman. The music is by Martin Austwick. Hear Martin’s own songs via palebirdmusic.com.

    Support the show: http://patreon.com/allusionist

    See omnystudio.com/listener for privacy information.

    Human Rights, Social Media, and Myanmar, with Ray Serrato

    Ray Serrato, Social Media Analyst at the United Nations Commission on Human Rights, discusses how social media data is used in the context of human rights violations. Ray breaks down the attacks against the Rohingya minority in Myanmar, and we discuss the role of social media in these attacks. Lastly, we talk about what the closing down of social media APIs means for future human rights work.