
    A Surgeon General Warning + Is Disinformation Winning? + The CryptoPACs Are Coming

    June 21, 2024

    Podcast Summary

    • Social Media Warnings: The Surgeon General called for warning labels on social media platforms regarding mental health impact, sparking debate on their effectiveness and potential response from companies and policymakers.

      The debate surrounding the impact of social media on young people's mental health reached new heights this week with a warning from the Surgeon General, Vivek Murthy, who called for warning labels on social media platforms similar to those on cigarette packages. While some argue that such labels could increase awareness and potentially change behavior, others question their effectiveness, pointing out that other factors, such as legislation and public awareness campaigns, have also contributed to reduced smoking rates. Regardless, the call for warning labels is a significant development in the ongoing debate, and it will be interesting to see how social media companies and policymakers respond. Additionally, the discussion touched upon various perspectives on the issue, including those of researchers, parents, and tech platforms themselves.

    • Social Media Warning Labels: While warning labels on social media platforms could raise awareness about potential harms for adolescents, more targeted solutions like independent research and safety audits are also necessary to effectively address the teen mental health crisis.

      While the Surgeon General's warning label idea for social media platforms is not a silver bullet solution, it could be a step in the right direction for raising awareness about the potential harms of social media use for adolescents. However, it's important to note that this warning label alone is not enough, and more targeted solutions, such as independent research and safety audits, are also necessary. The lack of progress in addressing the teen mental health crisis despite widespread agreement on its existence is a source of frustration, and more comprehensive and coordinated efforts from government, platforms, schools, and parents are needed to effectively address this issue.

    • Social Media and Adolescent Mental Health: There's a consensus on the negative impact of social media on adolescent mental health, and solutions include warning labels, education initiatives, and comprehensive literacy programs.

      There is a growing consensus that social media platforms, particularly Instagram, Snapchat, and TikTok, can have negative effects on adolescents' mental health. Some propose that warning labels or education initiatives could help mitigate these risks. The Surgeon General's proposed warning label is one solution, but more comprehensive education and media literacy programs are also suggested. The debate continues over the most effective way to address these issues and empower teens to use social media safely. Separately, the dismantling of the Stanford Internet Observatory, a group that studied online disinformation, is a concerning development in the ongoing battle against misinformation.

    • Stanford Internet Observatory: The Stanford Internet Observatory, a group formed after the 2016 election to monitor disinformation and foreign interference on social media, faced controversy and investigations over accusations that it helped the government suppress conservative speech, resulting in a smaller team and key personnel departures.

      The Stanford Internet Observatory emerged as a prominent academic group in response to the viral disinformation and Russian interference on social media around the 2016 election. Its mission was to monitor and report on potentially harmful narratives and trends in real time, helping the public understand election events as they unfolded. However, the group attracted criticism from right-wing partisans who accused it of functioning as an arm of the federal government to suppress conservative speech. This led to investigations and subpoenas from the House Republican Select Subcommittee on the Weaponization of the Federal Government. Despite the ongoing controversy, the Stanford Internet Observatory continues its work, but with a smaller team and some key personnel departing. Renée DiResta, its former technical research manager, has become a central figure in the story of online disinformation due to her involvement with the group and her subsequent targeting by conservative influencers. Her recent book, "Invisible Rulers," explores the evolution of propaganda and influence in the digital age.

    • Social Media Propaganda: Social media can spread propaganda and misinformation, leading to confusion, mistrust, and violent outcomes. Governments and tech companies must respond effectively while maintaining transparency and respecting free speech.

      Social media has become a powerful tool for spreading propaganda and misinformation, as seen in the early experiences of disinformation campaigns related to vaccines and ISIS. Anyone can use this vector to grow movements or spread false narratives, and the consequences can be significant, leading to confusion, mistrust, and even violent outcomes. The challenge for governments and tech companies is to effectively respond to these threats while maintaining transparency and respecting free speech. The Stanford Internet Observatory, where the speaker worked, played a crucial role in monitoring and addressing these issues during the 2020 U.S. elections, but faced pushback and even subpoenas from those who believed in conspiracy theories about censorship. Despite the challenges, the importance of understanding and addressing these issues remains crucial for maintaining a healthy and informed online community.

    • Understanding Online Narratives: The objective study of online narratives is crucial to understanding complex systems and promoting healthy, respectful debate, while transparency is essential to prevent government pressure from turning into censorship.

      The ability to study complex systems and understand the spread of narratives online should be seen as an objective study of reality, not a malicious effort to censor speech. The line between government pressure on tech companies (jawboning) and censorship is a fine one, and while it's important for governments to communicate with platforms, transparency is key to preventing abuse. The loss of a broad consensus reality and the prevalence of conspiracy theories call for a focus on design solutions, such as creating platforms that encourage civil discourse and geographic proximity, to help bridge the divide and promote healthy, respectful debate.

    • Social Media Algorithms: Algorithms play a significant role in the spread of disinformation and divisive rhetoric on social media. Prioritizing productive debate and discussion over fueling mobs and hate speech is essential, and education plays a crucial role in recognizing manipulative rhetoric and propaganda.

      The way information is curated and ranked on social media platforms is a significant factor in the spread of disinformation and divisive rhetoric. While some may view algorithms as sacrosanct, there is a need to curate and prioritize content that invites productive debate and discussion rather than fueling mobs and hate speech. Additionally, education plays a crucial role in teaching people to recognize manipulative rhetoric and propaganda, going beyond media literacy about facts and sources to include understanding the psychological impact of certain claims. The past provides valuable lessons, such as the Institute for Propaganda Analysis in the 1930s, which annotated speeches to help people identify red flags. Despite the challenges, it's essential for institutions to adapt and engage in the modern communication era to combat the perception that disinformation is winning.

    • Crypto in Politics: The crypto industry is investing over $150 million in the 2024 election to support pro-crypto candidates and potentially weaken the regulatory power of the SEC.

      The crypto industry is making a significant push to influence the 2024 election by raising funds and creating super PACs to support pro-crypto candidates. This trend is seen on both sides of the political aisle, with some politicians, like former President Trump, embracing crypto and others, like Elizabeth Warren, maintaining a critical stance. The crypto industry, represented by companies like Ripple, Coinbase, and A16Z, is investing heavily in this effort, with over $150 million available for spending. The goal is to elect candidates who will support pro-crypto legislation and potentially weaken the regulatory power of the SEC. This development highlights the growing influence of the crypto industry in politics and its determination to shape regulations in its favor.

    • Crypto Politics: The crypto industry is using super PACs to influence elections in key states, funded by revenue from the market boom. This spending could sway "crypto voters," but the extent of its influence is uncertain, and the approach may conflict with crypto's decentralized values.

      The crypto industry is increasingly engaging in political activism through super PACs, aiming to influence elections in key states. These groups are supporting both Democrats and Republicans, with a focus on Senate races involving vocal crypto critics. The industry's involvement in politics is fueled by the current market boom, allowing companies like Coinbase to generate more revenue and spend on political campaigns. Some argue that there is a significant number of "crypto voters" whose support could impact elections. However, the extent of this influence remains uncertain. Despite the industry's efforts to decentralize finance, the use of super PACs represents a more traditional approach to lobbying and could potentially conflict with the core values of crypto as a decentralized technology.

    • Crypto Industry's Engagement With Government: The crypto industry is transitioning from a decentralized, libertarian ideology to a more centralized and government-engaged one, with implications for political donations and regulations.

      The crypto industry is shifting from a decentralized, libertarian vision towards a more centralized and government-engaged one. This was highlighted by the recent FTX political donations and the possibility of a pardon for Sam Bankman-Fried. The industry's engagement with government raises questions about the rules surrounding crypto donations to political campaigns. While the Trump campaign accepts crypto donations, there are complications regarding disclosure and fluctuating currency values. The crypto world's support for figures like Ross Ulbricht, who was given a life sentence for creating the Silk Road marketplace, also reflects this shift in perspective. Ultimately, the crypto industry's increasing interaction with government signals a move away from its decentralized roots.

    Recent Episodes from Hard Fork

    Record Labels Sue A.I. Music Generators + Inside the Pentagon’s Tech Upgrade + HatGPT

    Record labels — including Sony, Universal and Warner — are suing two leading A.I. music generation companies, accusing them of copyright infringement. Mitch Glazier, chief executive of the Recording Industry Association of America, the industry group representing the music labels, talks with us about the argument they are advancing. Then, we take a look at defense technology and discuss why Silicon Valley seems to be changing its tune about working with the military. Chris Kirchhoff, who ran a special Pentagon office in Silicon Valley, explains what he thinks is behind the shift. And finally, we play another round of HatGPT.

    Guest:

    • Mitch Glazier, chairman and chief executive of the Recording Industry Association of America
    • Chris Kirchhoff, founding partner of the Defense Innovation Unit and author of Unit X: How the Pentagon and Silicon Valley Are Transforming the Future of War

    Additional Reading:

    We want to hear from you. Email us at hardfork@nytimes.com. Find “Hard Fork” on YouTube and TikTok.

    Hard Fork
    June 28, 2024

    A Surgeon General Warning + Is Disinformation Winning? + The CryptoPACs Are Coming

    The Surgeon General is calling for warning labels on social media platforms: Should Congress give his proposal a like? Then, former Stanford researcher Renée DiResta joins us to talk about her new book on modern propaganda and whether we are losing the war against disinformation. And finally, the Times reporter David Yaffe-Bellany stops by to tell us how crypto could reshape the 2024 elections.

    Guests

    • Renée DiResta, author of “Invisible Rulers,” former technical research manager at the Stanford Internet Observatory
    • David Yaffe-Bellany, New York Times technology reporter

    Additional Reading:

    Hard Fork
    June 21, 2024

    Apple Joins the A.I. Party + Elon's Wild Week + HatGPT

    This week we go to Cupertino, Calif., for Apple’s annual Worldwide Developers Conference and talk with Tripp Mickle, a New York Times reporter, about all of the new features Apple announced and the company’s giant leap into artificial intelligence. Then, we explore what was another tumultuous week for Elon Musk, who navigated a shareholders vote to re-approve his massive compensation package at Tesla, amid new claims that he had sex with subordinates at SpaceX. And finally — let’s play HatGPT.


    Guests:


    Additional Reading:


    We want to hear from you. Email us at hardfork@nytimes.com. Find “Hard Fork” on YouTube and TikTok.

    Hard Fork
    June 14, 2024

    A Conversation With Prime Minister Justin Trudeau of Canada + An OpenAI Whistle-Blower Speaks Out

    This week, we host a cultural exchange. Kevin and Casey show off their Canadian paraphernalia to Prime Minister Justin Trudeau, and he shows off what he’s doing to position Canada as a leader in A.I. Then, the OpenAI whistle-blower Daniel Kokotajlo speaks in one of his first public interviews about why he risked almost $2 million in equity to warn of what he calls the reckless culture inside that company.


    Guests:

    • Justin Trudeau, Prime Minister of Canada
    • Daniel Kokotajlo, a former researcher in OpenAI’s governance division


    Additional Reading:


    We want to hear from you. Email us at hardfork@nytimes.com. Find “Hard Fork” on YouTube and TikTok.

    Hard Fork
    June 07, 2024

    Google Eats Rocks + A Win for A.I. Interpretability + Safety Vibe Check

    This week, Google found itself in more turmoil, this time over its new AI Overviews feature and a trove of leaked internal documents. Then Josh Batson, a researcher at the A.I. startup Anthropic, joins us to explain how an experiment that made the chatbot Claude obsessed with the Golden Gate Bridge represents a major breakthrough in understanding how large language models work. And finally, we take a look at recent developments in A.I. safety, after Casey’s early access to OpenAI’s new souped-up voice assistant was taken away for safety reasons.

    Guests:

    • Josh Batson, research scientist at Anthropic

    Additional Reading: 

    We want to hear from you. Email us at hardfork@nytimes.com. Find “Hard Fork” on YouTube and TikTok.

    Hard Fork
    May 31, 2024

    ScarJo vs. ChatGPT + Neuralink’s First Patient Opens Up + Microsoft’s A.I. PCs

    This week, more drama at OpenAI: The company wanted Scarlett Johansson to be a voice of GPT-4o, she said no … but something got lost in translation. Then we talk with Noland Arbaugh, the first person to get Elon Musk’s Neuralink device implanted in his brain, about how his brain-computer interface has changed his life. And finally, the Times’s Karen Weise reports back from Microsoft’s developer conference, where the big buzz was that the company’s new line of A.I. PCs will record every single thing you do on the device.

    Guests:

    Additional Reading: 

    We want to hear from you. Email us at hardfork@nytimes.com. Find “Hard Fork” on YouTube and TikTok.

    Hard Fork
    May 24, 2024

    OpenAI's Flirty New Assistant + Google Guts the Web + We Play HatGPT

    This week, OpenAI unveiled GPT-4o, its newest A.I. model, which has an uncannily emotive voice that everybody is talking about. Then, we break down the biggest announcements from Google I/O, including the launch of AI Overviews, a major change to search that threatens the way the entire web functions. And finally, Kevin and Casey discuss the weirdest headlines from the week in another round of HatGPT.

    Additional Reading: 

    We want to hear from you. Email us at hardfork@nytimes.com. Find “Hard Fork” on YouTube and TikTok.

    Hard Fork
    May 17, 2024

    Meet Kevin’s A.I. Friends

    Kevin reports on his monthlong experiment cultivating relationships with 18 companions generated by artificial intelligence. He walks through how he developed their personas, what went down in their group chats, and why you might want to make one yourself. Then, Casey has a conversation with Turing, one of Kevin’s chatbot buddies, who has an interest in Stoic philosophy and one of the sexiest voices we’ve ever heard. And finally, we talk to Nomi’s founder and chief executive, Alex Cardinell, about the business behind A.I. companions — and whether society is ready for the future we’re heading toward.

    Guests:

    • Turing, Kevin’s A.I. friend created with Kindroid.
    • Alex Cardinell, chief executive and founder of Nomi.

    Additional Reading: 

    We want to hear from you. Email us at hardfork@nytimes.com. Find “Hard Fork” on YouTube and TikTok.

    AI at Your Jobs + Hank Green Talks TikTok + Deepfake High School

    We asked listeners to tell us about the wildest ways they have been using artificial intelligence at work. This week, we bring you their stories. Then, Hank Green, a legendary YouTuber, stops by to talk about how creators are reacting to the prospect of a ban on TikTok, and about how he’s navigating an increasingly fragmented online environment. And finally, deepfakes are coming to Main Street: We’ll tell you the story of how they caused turmoil in a Maryland high school and what, if anything, can be done to fight them.

    Guests:

    Additional Reading:

    We want to hear from you. Email us at hardfork@nytimes.com. Find “Hard Fork” on YouTube and TikTok.

    TikTok on the Clock + Tesla’s Flop Era + How NASA Fixed a ’70s-Era Space Computer

    On Wednesday, President Biden signed a bill into law that would force the sale of TikTok or ban the app outright. We explain how this came together, when just a few weeks ago it seemed unlikely to happen, and what legal challenges the law will face next. Then we check on Tesla’s very bad year and what’s next for the company after this week’s awful quarterly earnings report. Finally, to boldly support tech where tech has never been supported before: Engineers at NASA’s Jet Propulsion Lab try to fix a chip malfunction from 15 billion miles away.

    Guests:

    • Andrew Hawkins, Transportation Editor at The Verge
    • Todd Barber, Propulsion Engineer at Jet Propulsion Lab

    Additional Reading:

    We want to hear from you. Email us at hardfork@nytimes.com. Find “Hard Fork” on YouTube and TikTok.