
    Podcast Summary

    • TikTok: A Social Media Platform with National Security Risks
      Governments and institutions are addressing concerns over TikTok's national security risks, including spying, hidden state media, and Chinese law requiring data sharing or government service.

      TikTok, a China-owned social media platform, poses significant national security risks, and governments and institutions are taking steps to mitigate these risks. Evidence includes spying on US journalists, hidden state media accounts influencing elections, and Chinese law requiring companies like ByteDance, TikTok's owner, to share information or serve as tools of the Chinese government. The concern is not just about hard power but also the soft power of influencing values and moral consensus. TikTok's unique design, with its prominent algorithm and exclusive video consumption, sets it apart from other social platforms. As open societies consider banning TikTok, there's a need to address accusations of authoritarianism and maintain the balance between security and individual freedoms.

    • TikTok's Unique Features and Effective Algorithm
      TikTok's recommendation algorithm uses user data from swiping behaviors to deliver personalized content at unprecedented speed and accuracy, setting it apart from other social media platforms where users follow specific accounts to receive content.

      TikTok is a unique social media platform that sets itself apart from others like Instagram, Twitter, and YouTube through its focus on a recommendation algorithm, the ability for anyone to go viral, and its marketing positioning towards a younger audience. The algorithm's effectiveness is due in part to the vast amount of user data it collects through users' swiping behaviors, which allows it to identify and deliver content that keeps users engaged at an unprecedented speed and accuracy. Unlike other platforms where users follow specific accounts to receive content, TikTok sources videos from a global pool and uses machine learning to determine which ones are most addictive or entertaining. This results in a personalized feed that can quickly detect and cater to users' interests without explicit input. Additionally, TikTok's lack of emphasis on friends' content and its ability to facilitate virality make it a platform where anyone can potentially gain a large following quickly, but where success can also be short-lived. Overall, TikTok's success can be attributed to its ability to gather and use data efficiently to deliver personalized content and its unique features that set it apart from other social media platforms.
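      The ranking logic described above can be illustrated with a minimal, hypothetical sketch (this is not TikTok's actual code; the class, scoring rule, and video names are all invented for illustration): videos from a global pool are scored by observed engagement, and never-seen videos get a neutral prior so new content can still surface and go viral.

```python
# Hypothetical sketch of engagement-based ranking (not TikTok's actual system).
from collections import defaultdict


class ToyForYouFeed:
    """Rank videos from a global pool by average watch-completion rate,
    rather than by who the user follows."""

    def __init__(self):
        self.total_completion = defaultdict(float)  # summed completion fractions
        self.impressions = defaultdict(int)         # times each video was shown

    def record(self, video_id, completion):
        """Log one viewing; completion is in [0, 1] (1.0 = watched to the end)."""
        self.total_completion[video_id] += completion
        self.impressions[video_id] += 1

    def rank(self, candidates, top_k=3):
        """Return the top_k candidates by estimated engagement. Videos never
        shown before get a neutral prior of 0.5, so new uploads can surface."""
        def score(video_id):
            n = self.impressions[video_id]
            return self.total_completion[video_id] / n if n else 0.5
        return sorted(candidates, key=score, reverse=True)[:top_k]


feed = ToyForYouFeed()
feed.record("dance_clip", 0.9)   # user watched 90% of this video
feed.record("dance_clip", 0.8)
feed.record("news_clip", 0.2)    # user swiped away quickly
print(feed.rank(["dance_clip", "news_clip", "new_upload"]))
# the well-watched clip ranks first; the fresh upload beats the skipped one
```

      In a real recommender the score would come from machine-learning models over many signals (watch time, rewatches, shares), but the structure the episode describes is the same: observe swiping behavior, update engagement estimates, and rank a global candidate pool rather than a follow graph.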

    • Social media platforms like TikTok use algorithms and design elements to influence users and predict niche interests
      Social media platforms, such as TikTok, employ persuasive techniques like social reciprocity and music to influence users and can manipulate public perception during crises by controlling access to content and creating echo chambers, limiting users' exposure to opposing viewpoints.

      Social media platforms like TikTok have the ability to use algorithms and design elements to influence users in subtle ways, even predicting niche interests. The platform's use of social reciprocity and music are examples of persuasive techniques. In the event of war or crisis, these platforms can manipulate public perception by controlling access to content and creating echo chambers. For instance, during the Russian invasion of Ukraine in 2022, TikTok created a separate version of the app for Russia, implementing an upload ban and restricting international content, leading to a Russian-only bubble and limiting users' exposure to opposing viewpoints. This lack of transparency and manipulation of information can have significant consequences, particularly in critical situations.

    • TikTok's Shadow Promotion of Pro-Kremlin Content
      Despite a policy change banning international content, some pro-Kremlin content still managed to be promoted on TikTok through a phenomenon called shadow promotion, raising concerns about transparency and accountability in content moderation and promotion.

      TikTok's policy change in response to Russia's fake news law resulted in the ban of international content, but some content, particularly pro-Kremlin content, still managed to be promoted through the For You algorithm. This phenomenon, called shadow promotion, raises concerns about opacity and the need for greater transparency regarding content moderation and promotion on the platform. The implications of this are significant, as it highlights the challenge of ensuring that content that is banned or deemed inappropriate is not still being promoted to users, especially in the context of geopolitical tensions and contentious issues. The lack of transparency around content moderation and promotion on social media platforms is a critical issue, particularly as governments and regulatory bodies seek to address the spread of misinformation and other harmful content online. The need for independent oversight and accountability is essential to ensure that platforms are living up to their commitments to users and maintaining the integrity of their platforms.

    • Ensuring Transparency and Trust in Social Media Platforms
      Regulation, researcher access, and adversarial audits are crucial for ensuring transparency and trust in social media platforms like TikTok, as they address concerns over data manipulation, political influence, and the spread of misinformation.

      The issue of transparency and trust in social media platforms, like TikTok, is a significant concern due to the vast amount of content and the ability for the platforms to quickly change recommendation systems. Regulation requiring platforms to open their data and provide researcher access is a promising development, but the integrity of these interfaces is questionable. Adversarial audits, which involve collecting data independently of the platform, are necessary to ensure accountability. During the French elections, TikTok's political influence was evident, and the platform's claim of being non-political was debunked. The risks associated with social media in elections include the spread of misinformation, manipulation of content, and potential interference in democratic processes. It is crucial to continue monitoring and finding ways to ensure transparency and trust in these platforms.
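      The adversarial-audit idea above can be sketched as a simple cross-check (the function and data here are purely hypothetical; real audits involve large-scale independent data collection from many test accounts): compare what a platform claims to block against what independently collected feeds actually contain.

```python
# Hypothetical sketch of an adversarial audit (function and data are invented).
def audit_shadow_promotion(banned_labels, observed_feed):
    """banned_labels: labels the platform claims to block.
    observed_feed: (video_id, label) pairs collected independently of the
    platform, e.g. by scraping feeds shown to many test accounts.
    Returns the ids of videos promoted despite carrying a banned label."""
    return [vid for vid, label in observed_feed if label in banned_labels]


# The platform says state-media content is banned, yet independent collection
# still finds one such video being recommended:
banned = {"state_media"}
feed = [("v1", "music"), ("v2", "state_media"), ("v3", "news")]
print(audit_shadow_promotion(banned, feed))  # → ['v2']
```

      The key design choice is that the observed data never comes from the platform's own reporting interfaces, which is exactly why such audits remain credible even when the integrity of official researcher-access APIs is questionable.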

    • TikTok's Impact on Political Discourse and Elections
      TikTok's role in shaping political narratives is significant, with one billion views of election content in France, but it remains a free market for political content, leading to the amplification of polarizing and divisive content.

      TikTok's role in political discourse and elections should not be underestimated, despite the platform's efforts to present itself as merely a source of entertainment. With an estimated one billion views of election-related content in France, which has only 65 million inhabitants, it's clear that TikTok is a significant player in shaping political narratives. However, unlike YouTube, which acknowledges its role in disseminating political information and boosts authoritative sources, TikTok remains a free market for political content. This can lead to the amplification of polarizing and divisive content, as was seen during the French election, when far-right candidate Eric Zemmour received 30% of the engagement despite collecting only 7% of the votes. The implications of this trend extend beyond individual elections and raise questions about how to address the influence of apps based in countries of concern on critical infrastructure and global opinion.

    • Exploring alternative models for social media platforms
      Social media platforms, driven by profit and opaque algorithms, pose risks to individual and societal well-being. Transparency is necessary but not enough. Alternative models like public goods or user-controlled systems can create a fair and trustworthy digital landscape. Balance between individual and shared realities is essential to prevent fragmentation.

      Social media platforms, particularly those driven by profit and opaque algorithms, can pose significant risks to individual and societal well-being, similar to critical telecommunications infrastructure. Transparency is a necessary first step, but it's not enough. We need to explore alternative models, such as public goods or user-controlled systems, to create a fair and trustworthy digital landscape. The success of platforms like Wikipedia, which operates in the public interest, illustrates the potential of such models. However, it's important to strike a balance between individual and shared realities, ensuring a diverse range of perspectives and preventing the fragmentation of our collective understanding. While there are ongoing efforts to develop alternative systems, more research and innovation are needed to create effective, user-friendly solutions. Ultimately, it's crucial to recognize the importance of digital infrastructure and its potential impact on our lives, and to approach its regulation with a holistic, long-term perspective.

    • Designing algorithms for a shared reality
      Algorithms should encourage a shared reality, diverse perspectives, and epistemic humility. Content moderation, transparency, and democratically run design codes are essential components of this design.

      As we continue to rely on algorithms to curate and deliver information to us, it's crucial that these systems are designed with a democratic and collaborative approach in mind. This means ensuring that algorithms enable a shared reality, encourage diverse perspectives, and promote epistemic humility. Content moderation and transparency are essential components of this design, but it's also important to remember that people will seek out the information they're interested in. Default settings should encourage perspective expansion rather than separation or division. Regulators are beginning to grapple with these issues, but more needs to be done to ensure that algorithms are designed with the public's best interests in mind. This includes democratically run design codes, transparency requirements, and standards for collaborative sense making. Ultimately, the goal is to create an information ecosystem that increases tolerance, cultivates the virtues of citizenship, and encourages lifelong learning and curiosity.

    • Regulating Computational Propaganda and Information Warfare
      Regulators in the EU and US recognize the threat of computational propaganda and information warfare from Russia. They're developing infrastructure but struggle with censorship dilemmas and resources. The focus has been on disinformation and content moderation, but algorithms and reach also need to be addressed. Systemic regulation of the entire industry is a better approach.

      Both in the EU and the US, regulators are recognizing the serious threat of computational propaganda and information warfare, particularly from Russia. They are starting to develop infrastructure to combat these threats but acknowledge they are behind their adversaries in resources and methods. There is a moral dilemma regarding censorship and freedom of speech, and the arbitrary power wielded by social media platforms. Regulators are struggling to create effective laws for social media, with Germany's NetzDG (Network Enforcement Act) being an example. The focus has been on disinformation and content moderation, but there is a lack of attention on the role of algorithms and content reach. The better approach, according to the speaker, is systemic regulation of the entire industry to ensure respect for democracy. Banning or forcing the sale of specific platforms like TikTok is not a sustainable solution, as another similarly concerning platform may emerge. Instead, the focus should be on creating common regulations and guidelines for social media that strengthen democracy, not just make it less toxic.

    • TikTok: Serious Concerns and Recommendations
      Experts recommend a ban or forced sale of TikTok due to its Chinese origins, but emphasize transparency and stronger guardrails for social media platforms. Be intentional about content consumption and prioritize consent.

      There are serious concerns regarding the operations and potential risks of the social media platform TikTok. Marc Faddoul, an algorithmic transparency researcher, strongly recommends a ban or forced sale of the app to ensure complete separation from its Chinese origins. However, he also emphasizes the importance of transparency and better guardrails for social media platforms to strengthen democratic societies. He suggests being more intentional about the content we consume, much as we consider our food diet, and urges policymakers, TikTok employees, and the public to be aware of the underlying infrastructure of content and to prioritize consent. For more information, check out the reports and open-source tools from the Center for Humane Technology. To learn more about creating more humane technology, take the free course, Foundations of Humane Technology, at humanetech.com/course.

    • Exploring Effective Communication
      Effective communication requires active listening, empathy, clarity, positive tone, and open body language to build strong relationships.

      Effective communication is key to building strong relationships, whether it's in a personal or professional setting. During our discussion, we explored various aspects of communication, including active listening, empathy, and clarity. Active listening involves fully concentrating on what the other person is saying without interruption or judgment. Empathy means putting yourself in someone else's shoes and understanding their perspective. Clarity involves being clear and concise in your message to avoid misunderstandings. Moreover, we also discussed the importance of body language and tone in communication. Nonverbal cues can often convey more meaning than words themselves. A positive tone and open body language can help build trust and foster a positive connection. Lastly, we touched on the role of technology in communication and how it can both enhance and hinder our ability to connect with others. While technology allows us to communicate with people from all over the world, it can also lead to miscommunications and misunderstandings due to the lack of nonverbal cues. In summary, effective communication is a two-way street that requires active listening, empathy, clarity, positive tone, and open body language. By focusing on these elements, we can build stronger relationships and bridge any gaps that may arise in our communication.

    Recent Episodes from Your Undivided Attention

    Why Are Migrants Becoming AI Test Subjects? With Petra Molnar

    Climate change, political instability, hunger. These are just some of the forces behind an unprecedented refugee crisis that’s expected to include over a billion people by 2050. In response to this growing crisis, wealthy governments like the US and the EU are employing novel AI and surveillance technologies to slow the influx of migrants at their borders. But will this rollout stop at the border?

    In this episode, Tristan and Aza sit down with Petra Molnar to discuss how borders have become a proving ground for the sharpest edges of technology, and especially AI. Petra is an immigration lawyer and co-creator of the Migration and Technology Monitor. Her new book is “The Walls Have Eyes: Surviving Migration in the Age of Artificial Intelligence.”

    RECOMMENDED MEDIA

    The Walls Have Eyes: Surviving Migration in the Age of Artificial Intelligence

    Petra’s newly published book on the rollout of high risk tech at the border.

    Bots at the Gate

    A report co-authored by Petra about Canada’s use of AI technology in their immigration process.

    Technological Testing Grounds

    A report authored by Petra about the use of experimental technology in EU border enforcement.

    Startup Pitched Tasing Migrants from Drones, Video Reveals

    An article from The Intercept, containing the demo for Brinc’s taser drone pilot program.

    The UNHCR

    Information about the global refugee crisis from the UN.

    RECOMMENDED YUA EPISODES

    War is a Laboratory for AI with Paul Scharre

    No One is Immune to AI Harms with Dr. Joy Buolamwini

    Can We Govern AI? With Marietje Schaake

    CLARIFICATION:

    The iBorderCtrl project referenced in this episode was a pilot project that was discontinued in 2019

    Former OpenAI Engineer William Saunders on Silence, Safety, and the Right to Warn

    This week, a group of current and former employees from OpenAI and Google DeepMind penned an open letter accusing the industry's leading companies of prioritizing profits over safety. This comes after a spate of high-profile departures from OpenAI, including co-founder Ilya Sutskever and senior researcher Jan Leike, as well as reports that OpenAI has gone to great lengths to silence would-be whistleblowers.

    The writers of the open letter argue that researchers have a “right to warn” the public about AI risks and laid out a series of principles that would protect that right. In this episode, we sit down with one of those writers: William Saunders, who left his job as a research engineer at OpenAI in February. William is now breaking the silence on what he saw at OpenAI that compelled him to leave the company and to put his name to this letter. 

    RECOMMENDED MEDIA 

    The Right to Warn Open Letter 

    My Perspective On "A Right to Warn about Advanced Artificial Intelligence": A follow-up from William about the letter

     Leaked OpenAI documents reveal aggressive tactics toward former employees: An investigation by Vox into OpenAI’s policy of non-disparagement.

    RECOMMENDED YUA EPISODES

    1. A First Step Toward AI Regulation with Tom Wheeler 
    2. Spotlight on AI: What Would It Take For This to Go Well? 
    3. Big Food, Big Tech and Big AI with Michael Moss 
    4. Can We Govern AI? with Marietje Schaake

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

    War is a Laboratory for AI with Paul Scharre

    Right now, militaries around the globe are investing heavily in the use of AI weapons and drones. From Ukraine to Gaza, weapons systems with increasing levels of autonomy are being used to kill people and destroy infrastructure, and the development of fully autonomous weapons shows little sign of slowing down. What does this mean for the future of warfare? What safeguards can we put up around these systems? And is this runaway trend toward autonomous warfare inevitable, or will nations come together and choose a different path? In this episode, Tristan and Daniel sit down with Paul Scharre to try to answer some of these questions. Paul is a former Army Ranger, the author of two books on autonomous weapons, and he helped the Department of Defense write much of its policy on the use of AI in weaponry.

    RECOMMENDED MEDIA

    Four Battlegrounds: Power in the Age of Artificial Intelligence: Paul’s book on the future of AI in war, which came out in 2023.

    Army of None: Autonomous Weapons and the Future of War: Paul’s 2018 book documenting and predicting the rise of autonomous and semi-autonomous weapons as part of modern warfare.

    The Perilous Coming Age of AI Warfare: How to Limit the Threat of Autonomous Warfare: Paul’s article in Foreign Affairs based on his recent trip to the battlefield in Ukraine.

    The night the world almost almost ended: A BBC documentary about Stanislav Petrov’s decision not to start nuclear war.

    AlphaDogfight Trials Final Event: The full simulated dogfight between an AI and human pilot. The AI pilot swept, 5-0.

    RECOMMENDED YUA EPISODES

    1. The AI ‘Race’: China vs. the US with Jeffrey Ding and Karen Hao
    2. Can We Govern AI? with Marietje Schaake
    3. Big Food, Big Tech and Big AI with Michael Moss
    4. The Invisible Cyber-War with Nicole Perlroth

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

    AI and Jobs: How to Make AI Work With Us, Not Against Us With Daron Acemoglu

    Tech companies say that AI will lead to massive economic productivity gains. But as we know from the first digital revolution, that’s not what happened. Can we do better this time around?

    RECOMMENDED MEDIA

    Power and Progress by Daron Acemoglu and Simon Johnson Professor Acemoglu co-authored a bold reinterpretation of economics and history that will fundamentally change how you see the world

    Can we Have Pro-Worker AI? Professor Acemoglu co-authored this paper about redirecting AI development onto the human-complementary path

    Rethinking Capitalism: In Conversation with Daron Acemoglu The Wheeler Institute for Business and Development hosted Professor Acemoglu to examine how technology affects the distribution and growth of resources while being shaped by economic and social incentives

    RECOMMENDED YUA EPISODES

    1. The Three Rules of Humane Tech
    2. The Tech We Need for 21st Century Democracy
    3. Can We Govern AI?
    4. An Alternative to Silicon Valley Unicorns

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

    Jonathan Haidt On How to Solve the Teen Mental Health Crisis

    Suicides. Self harm. Depression and anxiety. The toll of a social media-addicted, phone-based childhood has never been more stark. It can be easy for teens, parents and schools to feel like they’re trapped by it all. But in this conversation with Tristan Harris, author and social psychologist Jonathan Haidt makes the case that the conditions that led to today’s teenage mental health crisis can be turned around – with specific, achievable actions we all can take starting today.

    This episode was recorded live at the San Francisco Commonwealth Club.  

    Correction: Tristan mentions that 40 Attorneys General have filed a lawsuit against Meta for allegedly fostering addiction among children and teens through their products. However, the actual number is 42 Attorneys General who are taking legal action against Meta.

    Clarification: Jonathan refers to the Wait Until 8th pledge. By signing the pledge, a parent  promises not to give their child a smartphone until at least the end of 8th grade. The pledge becomes active once at least ten other families from their child’s grade pledge the same.

    Chips Are the Future of AI. They’re Also Incredibly Vulnerable. With Chris Miller

    Beneath the race to train and release more powerful AI models lies another race: a race by companies and nation-states to secure the hardware to make sure they win AI supremacy. 

    Correction: The latest available Nvidia chip is the Hopper H100 GPU, which has 80 billion transistors. Since the first commercially available chip had four transistors, the Hopper actually has 20 billion times that number. Nvidia recently announced the Blackwell, which boasts 208 billion transistors - but it won’t ship until later this year.

    RECOMMENDED MEDIA 

    Chip War: The Fight For the World’s Most Critical Technology by Chris Miller

    To make sense of the current state of politics, economics, and technology, we must first understand the vital role played by chips

    Gordon Moore Biography & Facts

    Gordon Moore, the Intel co-founder behind Moore's Law, passed away in March of 2023

    AI’s most popular chipmaker Nvidia is trying to use AI to design chips faster

    Nvidia's GPUs are in high demand - and the company is using AI to accelerate chip production

    RECOMMENDED YUA EPISODES

    Future-proofing Democracy In the Age of AI with Audrey Tang

    How Will AI Affect the 2024 Elections? with Renee DiResta and Carl Miller

    The AI ‘Race’: China vs. the US with Jeffrey Ding and Karen Hao

    Protecting Our Freedom of Thought with Nita Farahany

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_


    Future-proofing Democracy In the Age of AI with Audrey Tang

    What does a functioning democracy look like in the age of artificial intelligence? Could AI even be used to help a democracy flourish? Just in time for election season, Taiwan’s Minister of Digital Affairs Audrey Tang returns to the podcast to discuss healthy information ecosystems, resilience to cyberattacks, how to “prebunk” deepfakes, and more. 

    RECOMMENDED MEDIA 

    Testing Theories of American Politics: Elites, Interest Groups, and Average Citizens by Martin Gilens and Benjamin I. Page

    This academic paper addresses tough questions for Americans: Who governs? Who really rules? 

    Recursive Public

    Recursive Public is an experiment in identifying areas of consensus and disagreement among the international AI community, policymakers, and the general public on key questions of governance

    A Strong Democracy is a Digital Democracy

    Audrey Tang’s 2019 op-ed for The New York Times

    The Frontiers of Digital Democracy

    Nathan Gardels interviews Audrey Tang in Noema

    RECOMMENDED YUA EPISODES 

    Digital Democracy is Within Reach with Audrey Tang

    The Tech We Need for 21st Century Democracy with Divya Siddarth

    How Will AI Affect the 2024 Elections? with Renee DiResta and Carl Miller

    The AI Dilemma

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

    U.S. Senators Grilled Social Media CEOs. Will Anything Change?

    Was it political progress, or just political theater? The recent Senate hearing with social media CEOs led to astonishing moments — including Mark Zuckerberg’s public apology to families who lost children following social media abuse. Our panel of experts, including Facebook whistleblower Frances Haugen, untangles the explosive hearing, and offers a look ahead, as well. How will this hearing impact protocol within these social media companies? How will it impact legislation? In short: will anything change?

    Clarification: Julie says that shortly after the hearing, Meta’s stock price had the biggest increase of any company in the stock market’s history. It was the biggest one-day gain by any company in Wall Street history.

    Correction: Frances says it takes Snap three or four minutes to take down exploitative content. In Snap's most recent transparency report, they list six minutes as the median turnaround time to remove exploitative content.

    RECOMMENDED MEDIA 

    Get Media Savvy

    Founded by Julie Scelfo, Get Media Savvy is a non-profit initiative working to establish a healthy media environment for kids and families

    The Power of One by Frances Haugen

    The inside story of Frances Haugen's quest to bring transparency and accountability to Big Tech

    RECOMMENDED YUA EPISODES

    Real Social Media Solutions, Now with Frances Haugen

    A Conversation with Facebook Whistleblower Frances Haugen

    Are the Kids Alright?

    Social Media Victims Lawyer Up with Laura Marquez-Garrett

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_


    Taylor Swift is Not Alone: The Deepfake Nightmare Sweeping the Internet

    Over the past year, a tsunami of apps that digitally strip the clothes off real people has hit the market. Now anyone can create fake non-consensual sexual images in just a few clicks. With cases proliferating in high schools, guest presenter Laurie Segall talks to legal scholar Mary Anne Franks about the AI-enabled rise in deepfake porn and what we can do about it.

    Correction: Laurie refers to the app 'Clothes Off.' It’s actually named Clothoff. There are many clothes remover apps in this category.

    RECOMMENDED MEDIA 

    Revenge Porn: The Cyberwar Against Women

    In a five-part digital series, Laurie Segall uncovers a disturbing internet trend: the rise of revenge porn

    The Cult of the Constitution

    In this provocative book, Mary Anne Franks examines the thin line between constitutional fidelity and constitutional fundamentalism

    Fake Explicit Taylor Swift Images Swamp Social Media

    Calls to protect women and crack down on the platforms and technology that spread such images have been reignited

    RECOMMENDED YUA EPISODES 

    No One is Immune to AI Harms

    Esther Perel on Artificial Intimacy

    Social Media Victims Lawyer Up

    The AI Dilemma

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

    Can Myth Teach Us Anything About the Race to Build Artificial General Intelligence? With Josh Schrei

    We usually talk about tech in terms of economics or policy, but the casual language tech leaders often use to describe AI — summoning an inanimate force with the powers of code — sounds more... magical. So, what can myth and magic teach us about the AI race? Josh Schrei, mythologist and host of The Emerald podcast,  says that foundational cultural tales like "The Sorcerer's Apprentice" or Prometheus teach us the importance of initiation, responsibility, human knowledge, and care.  He argues these stories and myths can guide ethical tech development by reminding us what it is to be human. 

    Correction: Josh says the first telling of "The Sorcerer’s Apprentice" myth dates back to ancient Egypt, but it actually dates back to ancient Greece.

    RECOMMENDED MEDIA 

    The Emerald podcast

    The Emerald explores the human experience through a vibrant lens of myth, story, and imagination

    Embodied Ethics in The Age of AI

    A five-part course with The Emerald podcast’s Josh Schrei and School of Wise Innovation’s Andrew Dunn

    Nature Nurture: Children Can Become Stewards of Our Delicate Planet

    A U.S. Department of the Interior study found that the average American kid can identify hundreds of corporate logos but not plants and animals

    The New Fire

    AI is revolutionizing the world - here's how democracies can come out on top. This upcoming book was authored by an architect of President Biden's AI executive order

    RECOMMENDED YUA EPISODES 

    How Will AI Affect the 2024 Elections?

    The AI Dilemma

    The Three Rules of Humane Tech

    AI Myths and Misconceptions

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

    Related Episodes

    MrBeast Graces the Cover of Forbes, TikTok Gets Banned, Instagram Updates and Cameo Launches Cameo Kids!

    In this special episode:   

    • Jimmy Donaldson, MrBeast, is on the cover of Forbes. Launches Gift Cards. And Feastables are back.
    • TikTok is back in the headlines. The Senate Passed a bill to ban TikTok on government devices. What is the Anti-Social CCP Act? We’ll help you out. 
    • Instagram has a ton of new updates. Is it the End of TikTok? Or an attack on Snapchat? 
    • Cameo collabs with Candle Media and creates Cameo Kids! Pretty cool for Cocomelon and Blippi and just kids’ space in general.
    • Dr. Seuss’s Vlog – it’s a thing!  

    Check out Jellysmack and their awesome blog!

    Also our sponsor – Amaze.co – Check them out!  

    We have a YouTube Page! Please subscribe and follow. (Thank you!)

    Catch a new episode every Friday on your favorite podcasting site. Please subscribe, like, and share! Visit our website www.creatorupload.com. We love hearing from you!   

    Creator Upload Socials:

    YOUTUBE

    INSTAGRAM

    TIKTOK

    Tech News: NASA Delays Artemis I Launch


    We'll have to wait a while longer before the first of NASA's missions to return us to the Moon takes off. Plus, Brazil tells Apple to include phone chargers or else, a human rights group tells Meta to step up against Brazilian President Bolsonaro's misinformation campaign, and the hardware chief who oversaw the development of the PS5 is retiring. And more!

    See omnystudio.com/listener for privacy information.

    Who Won Vidcon, Follower Meaning on TikTok Versus YouTube and The FCC Uses Twitter to Talk with Apple and Google. Why?


    In this week’s episode:  

    Jellysmack’s Get In Touch link:  https://jellysmack.com/getintouch/ 

    Catch a new episode every Friday on your favorite podcasting site. Please leave a comment and visit our website www.creatorupload.com – subscribe and send us a message. We love to hear from you!  

    Visit Spri.ng’s Mint-On-Demand – yes, one of our AWESOME sponsors! 

    Jellysmack is promoting its amazing Creator Program so please be sure to check it out. 

     

    Creator Upload Socials:

    YOUTUBE

    INSTAGRAM

    TIKTOK

    Spotlight — Addressing the TikTok Threat


    Imagine it's the Cold War. Imagine that the Soviet Union puts itself in a position to influence the television programming of the entire Western world — more than a billion viewers. 

    While this might sound like a Cold War thriller, it’s representative of the world we're living in, with TikTok being influenced by the Chinese Communist Party.

    TikTok, the flagship app of the Chinese company ByteDance, surpassed Google and Facebook as the most popular site on the internet in 2021, and is expected to reach more than 1.8 billion users by the end of 2022. The Chinese government doesn't control TikTok, but it has influence over it. What are the implications of this influence, given that China is the main geopolitical rival of the United States?

    This week on Your Undivided Attention, we bring you a bonus episode about TikTok. Co-hosts Tristan Harris and Aza Raskin explore the nature of the TikTok threat, and how we might address it.

    RECOMMENDED MEDIA 

    Pew Research Center's "Teens, Social Media and Technology 2022"

    https://www.pewresearch.org/internet/2022/08/10/teens-social-media-and-technology-2022/

    Pew's recent study on how TikTok has established itself as one of the top online platforms for U.S. teens

    Axios' "Washington turns up the heat on TikTok"

    https://www.axios.com/2022/07/07/congress-tiktok-china-privacy-data

    Article on recent Congressional responses to the threat of TikTok

    Felix Krause on TikTok's keystroke tracking

    https://twitter.com/KrauseFx/status/1560372509639311366

    A report revealing that TikTok's in-app browser contains code that can observe keystrokes and all taps

    RECOMMENDED YUA EPISODES

    A Fresh Take on Tech in China with Rui Ma and Duncan Clark

    https://www.humanetech.com/podcast/44-a-fresh-take-on-tech-in-china

    A Conversation with Facebook Whistleblower Frances Haugen

    https://www.humanetech.com/podcast/42-a-conversation-with-facebook-whistleblower-frances-haugen

    From Russia with Likes (Part 1). Guest: Renée DiResta

    https://www.humanetech.com/podcast/5-from-russia-with-likes-part-1

    From Russia with Likes (Part 2). Guest: Renée DiResta

    https://www.humanetech.com/podcast/6-from-russia-with-likes-part-2
     

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_