
    Podcast Summary

    • The history of attention merchants and their tactics: Understanding the historical context of attention as a commodity can help navigate modern media and technology platforms, highlighting the importance of antitrust laws in the new Gilded Age.

      The race to capture and monetize human attention, as explored in Tim Wu's "The Attention Merchants," has a long history. This can be traced back to the Penny Press era, where newspapers began selling themselves as a commodity by attracting larger audiences and selling them to advertisers. This led to a competition to produce increasingly sensational and outrageous content. Wu's research highlights that while such attention-grabbing tactics have persisted throughout history, there are moments when the public demands a change of course. The current coronavirus pandemic presents just such a crossroads. The theme of Wu's second book, "The Curse of Bigness," emphasizes the importance of antitrust laws in the new Gilded Age to prevent the dominance of a few attention merchants. In essence, understanding the historical context of attention as a commodity can help us navigate the complexities of modern media and technology platforms.

    • The race for attention in journalism led to sensationalism and negative consequences, but ethical guidelines have helped the industry recover: The history of journalism shows that sensationalism and unethical practices can lead to negative consequences, but ethical guidelines help ensure truthful reporting.

      The race for attention in journalism, which began with the discovery of a new business model in the late 19th century, led to a "yellow journalism" era where facts took a backseat to sensationalism. This race to the bottom eventually resulted in negative consequences, such as interference in politics and the spread of false information. However, history shows that journalism has a tendency to recover through the development of ethical guidelines. The swing back from sensationalism to truthful reporting often occurs when the stakes are high and the negative consequences become apparent. An example of this can be seen in the quiz show scandals of the 1950s, where it was discovered that the outcomes of these popular television shows were predetermined. The public's reaction to this revelation led to the establishment of journalistic ethics, which continue to guide the industry today.

    • Media scandals and the need for regulation: Media history shows a pattern of initial optimism, followed by disillusionment and calls for regulation or ethical standards in response to potential misuse and manipulation. Scandals like the Quiz Show Scandal and the use of radio for propaganda by totalitarian regimes led to the creation of public broadcasting and the Fairness Doctrine.

      The history of media, particularly radio and television, shows a pattern of initial infatuation and optimism, followed by disillusionment and a call for regulation or ethical standards, in response to the potential for misuse and manipulation. The Quiz Show Scandal in the 1950s served as a profound wake-up call for the public and led to the creation of public broadcasting as an alternative to commercial media. Similarly, the use of radio for propaganda during the 1930s and 1940s, especially by totalitarian regimes, led to the implementation of the Fairness Doctrine to ensure balanced and fair coverage of controversial issues. These pendulum swings reflect the ongoing tension between the democratic ideal of free and accessible media, and the need for accountability and ethical standards to prevent misuse and manipulation.

    • Historical reactions to ad excess: Giving people control over ad exposure can have unintended consequences, and public regulatory approaches can also be effective in managing ad impact.

      Throughout history, when people were exposed to an abundance of advertising, whether it was on radio, television, or even through posters, the initial reaction was often disgust and a desire for control. At first, sponsorships and innovative devices like the remote control were used to give people some control over their exposure to ads. However, this led to unintended consequences, such as fragmented attention and channel surfing. In Paris, the proliferation of posters led to a similar reaction, resulting in laws restricting where posters could be displayed. These examples show that while giving people control over their exposure to ads can be well-intentioned, it may not always lead to the desired outcome and that public regulatory approaches can also be effective in managing the impact of advertising on people's lives.

    • Balancing commercial interests and the public good in the digital world: Consider the impact of advertising and privacy invasion on relationships, families, and mental health. Set boundaries to protect personal areas for a healthy society.

      We need to consider the balance between commercial interests and the public interest in our digital world, particularly when it comes to advertising and the invasion of privacy. The speaker draws a parallel to zoning laws and the need to protect certain areas for the greater good. The evolution of advertising from posters to TV to social media has allowed for more intrusion into our personal lives and deeper targeting of our emotions. We've accepted this unrestricted mining of human consciousness and attention, but there's a concern about the concentration of this power in a few entities. It's important to consider the impact on our relationships, our families, and our mental health. The speaker is not advocating for a total transcendence of advertising or commerce, but rather for careful zoning and setting boundaries. We need to make hard decisions about what areas of our lives should be off-limits and protect them for the sake of a healthy society.

    • Historical concentration of power in communication channels leads to dangerous political consequences: Monopolies in attention markets can lead to economic suffering, an appetite for strong leaders, and easier government manipulation of public opinion, with historical examples including the Soviet Union and Hitler's rise.

      The concentration of power in attention markets, whether through monopolies or oligopolies, can have dangerous political consequences. Historically, this consolidation of power has been linked to the rise of populism, extremism, and even fascism. This is because monopolies tend to aggregate profits, leading to widespread economic suffering and an appetite for strong leaders who promise to redress the balance. In the context of the attentional economy, the monopolization of communication channels makes it easier for governments to control and manipulate public opinion, creating a dangerous concentration of power. The historical examples of this phenomenon, from the Soviet Union to the rise of Hitler, should serve as a warning. Therefore, efforts to break up tech companies or regulate their power should not be dismissed as mere competition concerns, but rather as a necessary step to prevent the dangerous consolidation of power.

    • Rebranding Antitrust Regulations as Anti-Monopoly or Anti-Private Power Control: Antitrust regulations should be renamed to reflect their true purpose of preventing monopolies and private power control, with a focus on potential risks to society and economic burdens on the middle class.

      Antitrust regulations should be rebranded as "anti-monopoly" or "anti-private power control" to better reflect their true purpose of preventing dangerous concentrations of power. This conversation extended to the tech industry, specifically Facebook's political advertising policy, which allows false political ads while rejecting non-political ones. The concern is that Facebook may continue to allow these ads to maintain good relations with the government, even though it may not be primarily for financial gain. The discussion also touched upon other industries, such as broadband, which can also create monopolies and significantly impact household budgets. Overall, the conversation emphasized the importance of addressing monopolies and their potential risks to society, including geopolitical dangers and economic burdens on the middle class.

    • Tech companies' decisions on political ads impact the presidency and public discourse: Tech companies should consider ethical codes and regulations for political ads to maintain neutrality and credibility, or remove themselves from the political advertising game.

      The decisions made by tech companies, specifically regarding political advertising, can significantly impact the presidency and public discourse. The lack of oversight and ethical codes in tech is concerning, especially when it comes to unchecked political ads that can be used for manipulation. Employees of these tech companies, such as Facebook, have the power to advocate for ethical standards and regulations. The comparison to professions like journalism, law, and medicine, which have their own ethical codes and are regulated, highlights the need for tech to follow suit. The continued running of political ads, especially those that are potentially manipulative or misleading, puts these companies in a position of constant political pressure and undermines their credibility as neutral platforms. To effectively address this issue, tech companies should consider removing themselves from the political advertising game altogether or implementing strict ethical codes and fact-checking measures. The historical evolution of journalism provides a roadmap for how tech can move towards a publicly-interested internet.

    • Tech Prioritizing Profits Over People's Interests: Former Google design ethicist Tristan Harris advocates for a new generation of tech tools that prioritize connecting people in helpful ways without ulterior motives, achieved through a paid or public model, better-designed tools, stronger regulations, and more industry focus on people's needs.

      Our current technology, particularly social media, may be negatively impacting society and children's development by prioritizing profits over people's interests. Tristan Harris, a former Google design ethicist, suggests we need a new generation of tech tools that prioritize connecting people in helpful ways without ulterior motives. He believes this could be achieved through a paid or public model, similar to PBS or NPR. Harris also advocates for better-designed tools and stronger regulations, such as antitrust investigations and breaking up tech companies, to create a more humane and balanced internet. Ultimately, Harris calls for a shift in the tech industry's priorities to serve people's needs and improve relationships, rather than maximizing profits and user engagement at all costs. This change is not only important for the future of technology but also for the future of humanity and civilization.

    • Unintended Consequences of Targeted Advertising and Data Collection: The tech industry's focus on targeted advertising and data collection had negative consequences for privacy, journalistic integrity, and the ecosystem, leading to a shift towards subscription-based or ad-free models and a renewed emphasis on building tools that make people's lives better.

      The tech industry's focus on targeted advertising and data collection, driven by the dominance of companies like Facebook and Google, had unintended negative consequences for the ecosystem, privacy, and journalistic integrity. This led to a shift towards subscription-based or ad-free models and a renewed emphasis on building tools that make people's lives better. The public's growing awareness of these issues, fueled by advocacy groups like the Center for Humane Technology, may mark a turning point in the tech industry's evolution, leading to a more ethical and human-centered approach.

    • A more humane and ethically responsible internet: In the next five years, we may see a shift towards a more responsible culture for software engineers, limited commercialization on the internet, and the integration of ethics into computer science education.

      The future of the internet lies in making it more public, humane, and ethically responsible. This can be achieved through a combination of legislation, education, and cultural shifts. In five years, we may see a new culture of responsibility for software engineers, with universities integrating ethics into computer science programs. Socially and politically, we may witness the creation of "sacred precincts" on the internet, where commercialization is limited. This could include social media blackouts during elections or even designated days of digital detox. While it may seem impossible to add more time off in our already busy lives, the idea of weekends as a humane technology shows that such changes are not only possible but valuable. The Center for Humane Technology, with the support of its generous donors, is working towards this vision of a more humane internet.

    Recent Episodes from Your Undivided Attention

    Why Are Migrants Becoming AI Test Subjects? With Petra Molnar


    Climate change, political instability, hunger. These are just some of the forces behind an unprecedented refugee crisis that’s expected to include over a billion people by 2050. In response to this growing crisis, wealthy governments like the US and the EU are employing novel AI and surveillance technologies to slow the influx of migrants at their borders. But will this rollout stop at the border?

    In this episode, Tristan and Aza sit down with Petra Molnar to discuss how borders have become a proving ground for the sharpest edges of technology, and especially AI. Petra is an immigration lawyer and co-creator of the Migration and Technology Monitor. Her new book is “The Walls Have Eyes: Surviving Migration in the Age of Artificial Intelligence.”

    RECOMMENDED MEDIA

    The Walls Have Eyes: Surviving Migration in the Age of Artificial Intelligence

    Petra’s newly published book on the rollout of high risk tech at the border.

    Bots at the Gate

    A report co-authored by Petra about Canada’s use of AI technology in their immigration process.

    Technological Testing Grounds

    A report authored by Petra about the use of experimental technology in EU border enforcement.

    Startup Pitched Tasing Migrants from Drones, Video Reveals

    An article from The Intercept, containing the demo for Brinc’s taser drone pilot program.

    The UNHCR

    Information about the global refugee crisis from the UN.

    RECOMMENDED YUA EPISODES

    War is a Laboratory for AI with Paul Scharre

    No One is Immune to AI Harms with Dr. Joy Buolamwini

    Can We Govern AI? With Marietje Schaake

    Clarification: The iBorderCtrl project referenced in this episode was a pilot project that was discontinued in 2019.

    Former OpenAI Engineer William Saunders on Silence, Safety, and the Right to Warn


    This week, a group of current and former employees from OpenAI and Google DeepMind penned an open letter accusing the industry’s leading companies of prioritizing profits over safety. This comes after a spate of high-profile departures from OpenAI, including co-founder Ilya Sutskever and senior researcher Jan Leike, as well as reports that OpenAI has gone to great lengths to silence would-be whistleblowers.

    The writers of the open letter argue that researchers have a “right to warn” the public about AI risks and laid out a series of principles that would protect that right. In this episode, we sit down with one of those writers: William Saunders, who left his job as a research engineer at OpenAI in February. William is now breaking the silence on what he saw at OpenAI that compelled him to leave the company and to put his name to this letter. 

    RECOMMENDED MEDIA 

    The Right to Warn Open Letter 

    My Perspective On "A Right to Warn about Advanced Artificial Intelligence": A follow-up from William about the letter

    Leaked OpenAI documents reveal aggressive tactics toward former employees: An investigation by Vox into OpenAI’s policy of non-disparagement.

    RECOMMENDED YUA EPISODES

    1. A First Step Toward AI Regulation with Tom Wheeler 
    2. Spotlight on AI: What Would It Take For This to Go Well? 
    3. Big Food, Big Tech and Big AI with Michael Moss 
    4. Can We Govern AI? with Marietje Schaake

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

    War is a Laboratory for AI with Paul Scharre


    Right now, militaries around the globe are investing heavily in the use of AI weapons and drones. From Ukraine to Gaza, weapons systems with increasing levels of autonomy are being used to kill people and destroy infrastructure, and the development of fully autonomous weapons shows little sign of slowing down. What does this mean for the future of warfare? What safeguards can we put up around these systems? And is this runaway trend toward autonomous warfare inevitable, or will nations come together and choose a different path? In this episode, Tristan and Daniel sit down with Paul Scharre to try to answer some of these questions. Paul is a former Army Ranger, the author of two books on autonomous weapons, and he helped the Department of Defense write much of its policy on the use of AI in weaponry.

    RECOMMENDED MEDIA

    Four Battlegrounds: Power in the Age of Artificial Intelligence: Paul’s book on the future of AI in war, which came out in 2023.

    Army of None: Autonomous Weapons and the Future of War: Paul’s 2018 book documenting and predicting the rise of autonomous and semi-autonomous weapons as part of modern warfare.

    The Perilous Coming Age of AI Warfare: How to Limit the Threat of Autonomous Warfare: Paul’s article in Foreign Affairs based on his recent trip to the battlefield in Ukraine.

    The night the world almost ended: A BBC documentary about Stanislav Petrov’s decision not to start nuclear war.

    AlphaDogfight Trials Final Event: The full simulated dogfight between an AI and human pilot. The AI pilot swept, 5-0.

    RECOMMENDED YUA EPISODES

    1. The AI ‘Race’: China vs. the US with Jeffrey Ding and Karen Hao
    2. Can We Govern AI? with Marietje Schaake
    3. Big Food, Big Tech and Big AI with Michael Moss
    4. The Invisible Cyber-War with Nicole Perlroth


    AI and Jobs: How to Make AI Work With Us, Not Against Us With Daron Acemoglu


    Tech companies say that AI will lead to massive economic productivity gains. But as we know from the first digital revolution, that’s not what happened. Can we do better this time around?

    RECOMMENDED MEDIA

    Power and Progress by Daron Acemoglu and Simon Johnson: Professor Acemoglu co-authored a bold reinterpretation of economics and history that will fundamentally change how you see the world

    Can We Have Pro-Worker AI? Professor Acemoglu co-authored this paper about redirecting AI development onto the human-complementary path

    Rethinking Capitalism: In Conversation with Daron Acemoglu: The Wheeler Institute for Business and Development hosted Professor Acemoglu to examine how technology affects the distribution and growth of resources while being shaped by economic and social incentives

    RECOMMENDED YUA EPISODES

    1. The Three Rules of Humane Tech
    2. The Tech We Need for 21st Century Democracy
    3. Can We Govern AI?
    4. An Alternative to Silicon Valley Unicorns


    Jonathan Haidt On How to Solve the Teen Mental Health Crisis


    Suicides. Self harm. Depression and anxiety. The toll of a social media-addicted, phone-based childhood has never been more stark. It can be easy for teens, parents and schools to feel like they’re trapped by it all. But in this conversation with Tristan Harris, author and social psychologist Jonathan Haidt makes the case that the conditions that led to today’s teenage mental health crisis can be turned around – with specific, achievable actions we all can take starting today.

    This episode was recorded live at the San Francisco Commonwealth Club.  

    Correction: Tristan mentions that 40 Attorneys General have filed a lawsuit against Meta for allegedly fostering addiction among children and teens through their products. However, the actual number is 42 Attorneys General who are taking legal action against Meta.

    Clarification: Jonathan refers to the Wait Until 8th pledge. By signing the pledge, a parent  promises not to give their child a smartphone until at least the end of 8th grade. The pledge becomes active once at least ten other families from their child’s grade pledge the same.

    Chips Are the Future of AI. They’re Also Incredibly Vulnerable. With Chris Miller


    Beneath the race to train and release more powerful AI models lies another race: a race by companies and nation-states to secure the hardware to make sure they win AI supremacy. 

    Correction: The latest available Nvidia chip is the Hopper H100 GPU, which has 80 billion transistors. Since the first commercially available chip had four transistors, the Hopper actually has 20 billion times that number. Nvidia recently announced the Blackwell, which boasts 208 billion transistors - but it won’t ship until later this year.

    RECOMMENDED MEDIA 

    Chip War: The Fight For the World’s Most Critical Technology by Chris Miller

    To make sense of the current state of politics, economics, and technology, we must first understand the vital role played by chips

    Gordon Moore Biography & Facts

    Gordon Moore, the Intel co-founder behind Moore's Law, passed away in March of 2023

    AI’s most popular chipmaker Nvidia is trying to use AI to design chips faster

    Nvidia's GPUs are in high demand - and the company is using AI to accelerate chip production

    RECOMMENDED YUA EPISODES

    Future-proofing Democracy In the Age of AI with Audrey Tang

    How Will AI Affect the 2024 Elections? with Renee DiResta and Carl Miller

    The AI ‘Race’: China vs. the US with Jeffrey Ding and Karen Hao

    Protecting Our Freedom of Thought with Nita Farahany


    Future-proofing Democracy In the Age of AI with Audrey Tang


    What does a functioning democracy look like in the age of artificial intelligence? Could AI even be used to help a democracy flourish? Just in time for election season, Taiwan’s Minister of Digital Affairs Audrey Tang returns to the podcast to discuss healthy information ecosystems, resilience to cyberattacks, how to “prebunk” deepfakes, and more. 

    RECOMMENDED MEDIA 

    Testing Theories of American Politics: Elites, Interest Groups, and Average Citizens by Martin Gilens and Benjamin I. Page

    This academic paper addresses tough questions for Americans: Who governs? Who really rules? 

    Recursive Public

    Recursive Public is an experiment in identifying areas of consensus and disagreement among the international AI community, policymakers, and the general public on key questions of governance

    A Strong Democracy is a Digital Democracy

    Audrey Tang’s 2019 op-ed for The New York Times

    The Frontiers of Digital Democracy

    Nathan Gardels interviews Audrey Tang in Noema

    RECOMMENDED YUA EPISODES 

    Digital Democracy is Within Reach with Audrey Tang

    The Tech We Need for 21st Century Democracy with Divya Siddarth

    How Will AI Affect the 2024 Elections? with Renee DiResta and Carl Miller

    The AI Dilemma


    U.S. Senators Grilled Social Media CEOs. Will Anything Change?


    Was it political progress, or just political theater? The recent Senate hearing with social media CEOs led to astonishing moments — including Mark Zuckerberg’s public apology to families who lost children following social media abuse. Our panel of experts, including Facebook whistleblower Frances Haugen, untangles the explosive hearing, and offers a look ahead, as well. How will this hearing impact protocol within these social media companies? How will it impact legislation? In short: will anything change?

    Clarification: Julie says that shortly after the hearing, Meta’s stock price had the biggest increase of any company in the stock market’s history. It was the biggest one-day gain by any company in Wall Street history.

    Correction: Frances says it takes Snap three or four minutes to take down exploitative content. In Snap's most recent transparency report, they list six minutes as the median turnaround time to remove exploitative content.

    RECOMMENDED MEDIA 

    Get Media Savvy

    Founded by Julie Scelfo, Get Media Savvy is a non-profit initiative working to establish a healthy media environment for kids and families

    The Power of One by Frances Haugen

    The inside story of Frances Haugen’s quest to bring transparency and accountability to Big Tech

    RECOMMENDED YUA EPISODES

    Real Social Media Solutions, Now with Frances Haugen

    A Conversation with Facebook Whistleblower Frances Haugen

    Are the Kids Alright?

    Social Media Victims Lawyer Up with Laura Marquez-Garrett


    Taylor Swift is Not Alone: The Deepfake Nightmare Sweeping the Internet


    Over the past year, a tsunami of apps that digitally strip the clothes off real people has hit the market. Now anyone can create fake non-consensual sexual images in just a few clicks. With cases proliferating in high schools, guest presenter Laurie Segall talks to legal scholar Mary Anne Franks about the AI-enabled rise in deepfake porn and what we can do about it.

    Correction: Laurie refers to the app 'Clothes Off.' It’s actually named Clothoff. There are many clothes remover apps in this category.

    RECOMMENDED MEDIA 

    Revenge Porn: The Cyberwar Against Women

    In a five-part digital series, Laurie Segall uncovers a disturbing internet trend: the rise of revenge porn

    The Cult of the Constitution

    In this provocative book, Mary Anne Franks examines the thin line between constitutional fidelity and constitutional fundamentalism

    Fake Explicit Taylor Swift Images Swamp Social Media

    Calls to protect women and crack down on the platforms and technology that spread such images have been reignited

    RECOMMENDED YUA EPISODES 

    No One is Immune to AI Harms

    Esther Perel on Artificial Intimacy

    Social Media Victims Lawyer Up

    The AI Dilemma


    Can Myth Teach Us Anything About the Race to Build Artificial General Intelligence? With Josh Schrei


    We usually talk about tech in terms of economics or policy, but the casual language tech leaders often use to describe AI — summoning an inanimate force with the powers of code — sounds more... magical. So, what can myth and magic teach us about the AI race? Josh Schrei, mythologist and host of The Emerald podcast, says that foundational cultural tales like "The Sorcerer's Apprentice" or Prometheus teach us the importance of initiation, responsibility, human knowledge, and care. He argues these stories and myths can guide ethical tech development by reminding us what it is to be human.

    Correction: Josh says the first telling of "The Sorcerer’s Apprentice" myth dates back to ancient Egypt, but it actually dates back to ancient Greece.

    RECOMMENDED MEDIA 

    The Emerald podcast

    The Emerald explores the human experience through a vibrant lens of myth, story, and imagination

    Embodied Ethics in The Age of AI

    A five-part course with The Emerald podcast’s Josh Schrei and School of Wise Innovation’s Andrew Dunn

    Nature Nurture: Children Can Become Stewards of Our Delicate Planet

    A U.S. Department of the Interior study found that the average American kid can identify hundreds of corporate logos but not plants and animals

    The New Fire

    AI is revolutionizing the world - here's how democracies can come out on top. This upcoming book was authored by an architect of President Biden's AI executive order

    RECOMMENDED YUA EPISODES 

    How Will AI Affect the 2024 Elections?

    The AI Dilemma

    The Three Rules of Humane Tech

    AI Myths and Misconceptions

