Telegram's Content Moderation: Despite Telegram's encryption, it's not as private as claimed and has faced charges for enabling illicit activities due to its light approach to content moderation, making it popular among criminals, terrorists, and extremist groups.
Telegram, a popular messaging app, has faced charges against its CEO, Pavel Durov, for enabling illicit activities on the platform due to its light approach to content moderation and encryption. With over 900 million users, it's particularly popular in regions with authoritarian governments and has earned a reputation as a tool for criminals, terrorists, and extremist groups. Pavel Durov, a Russian native, first gained fame by creating a Facebook clone called Vkontakte before founding Telegram, which led to a standoff with the Russian government over user data. Despite its encryption, Telegram is not as private as it claims, and its public channels have been used for illegal transactions, drug trafficking, and extremist activities. The arrest of Durov highlights the ongoing debate over content moderation and government interference with tech platforms and free speech.
Telegram's resistance to government interference: Telegram was created by Pavel Durov to provide a messaging app resistant to government surveillance and interference, but it has faced charges for enabling illegal activities and refusing to cooperate with law enforcement
Pavel Durov, the creator of Telegram, lost control of his previous project in Russia and as a result, became determined to create a messaging app that would be resistant to government interference and surveillance. He traveled the world and acquired citizenships in various countries while building Telegram out of the UAE. Durov has become an international man of mystery, known for his quirky personality, fitness obsession, and claims of being a multinational sperm donor. He has gained a following among anti-government dissidents and free speech advocates. However, Durov and Telegram have faced charges for enabling illegal transactions and refusing to cooperate with law enforcement. The focus of the prosecution should be on this non-cooperation rather than the illegal activities on the platform. Telegram uses encryption for most messages but they are not end-to-end encrypted and exist on the Telegram servers, making them accessible to law enforcement with a warrant.
Encryption vs Government Access: The debate over encryption and user privacy vs government access to information continues, with Telegram's resistance to end-to-end encryption leading to pressure from governments and speculation about potential side deals. The arrest of Pavel Durov has sparked discussions about the balance between free speech and government accountability on social media platforms.
The debate over encryption and user privacy versus government access to information came to a head with the arrest of Pavel Durov, the founder of Telegram. Telegram, which has resisted end-to-end encryption, has faced increasing pressure from governments to provide access to user data, particularly in relation to investigations into child sexual abuse material. However, implementing end-to-end encryption across a large user base like Telegram's is technically complicated and expensive. Some have speculated that cost may be a factor in Telegram's decision not to encrypt all messages. Additionally, concerns have been raised about potential side deals between Telegram and governments, particularly regarding its position in the ongoing conflict in Ukraine. The arrest of Durov has sparked discussions about the balance between free speech and government accountability on social media platforms, and the potential for governments to hold platform leaders responsible for criminal activity on their services. This could lead to a shift in the way that social media platforms approach encryption and moderation.
Digital Speech Control: Governments and tech companies are clashing over control of speech and expression on digital platforms, with legal issues around Telegram and smartphone bans in schools being recent examples.
We're witnessing a power struggle between tech companies and governments over control of speech and expression on digital platforms. The discussion around Telegram's legal issues highlights this standoff, as governments assert their authority and tech companies assert their autonomy. Meanwhile, in the educational sphere, the debate over smartphone use in schools continues, with New York's Governor Kathy Hochul considering a ban. The potential impact on students' mental health and focus is a significant concern. As the conversation evolves, it's clear that these issues touch on broader themes of power, control, and responsibility in the digital age.
Phone use in classrooms: New York Governor Kathy Hochul advocates for limiting phone use in classrooms to create a distraction-free learning environment and prepare students for adulthood, despite opposition from some students and parents.
New York Governor Kathy Hochul is advocating for a statewide policy to limit the use of phones and other devices in classrooms due to their distracting nature and potential negative impact on students' mental health and academic performance. She believes it's important to create a distraction-free learning environment and wean students off their constant reliance on technology to prepare them for adulthood. Teachers support this initiative, but there's opposition from some students and parents who view it as an infringement on their freedom. The governor plans to implement measures to ensure the policy's success, such as providing alternative technology for learning and monitoring its impact on student engagement, collaboration, and mental health. She's confident that the benefits of this policy will outweigh any challenges.
Teen mental health crisis, smartphone use: Despite scientific debate, anecdotal evidence suggests negative impact of smartphones on vulnerable teens, emphasizing the need for mental health services and algorithm bans in schools.
While there is ongoing debate about the scientific evidence linking smartphone use and adolescent mental health, there are compelling anecdotal accounts from parents and educators about the negative impact of phones on vulnerable students. These stories highlight the need for holistic approaches to address the teen mental health crisis, which includes funding for mental health services in schools and a ban on addictive algorithms targeting young people. The governor's commitment to these initiatives underscores the urgency of the issue and the importance of prioritizing the well-being of students.
AI reputation, mental health support systems: Ensuring responsible and consensual access to mental health support systems for young people in the LGBTQ community is crucial, while managing AI's perception of us becomes increasingly important as AI becomes more integrated into our daily lives, yet resources and knowledge on improving AI reputation are lacking.
Young people, particularly those in the LGBTQ community, are struggling with mental health issues and need accessible, positive support systems. This issue is not about the availability of resources, but rather about ensuring they are found in a responsible and consensual manner. Meanwhile, Kevin's experience with chatbots highlights the growing importance of managing AI's perception of us. As AI becomes increasingly integrated into our daily lives, it's crucial to understand how to shape their interactions with us. Unfortunately, there's a lack of resources and knowledge on how to improve one's AI reputation, much like the SEO industry for traditional search engines. It's essential to address these issues to ensure a more inclusive and supportive digital environment for everyone.
Chatbot manipulation: Chatbots, using retrieval-augmented generation technology, can access and incorporate current information but are susceptible to manipulation through strategic text sequences, raising ethical concerns.
Chatbots, once considered static and limited in their ability to provide up-to-date information, have evolved with the use of retrieval-augmented generation (RAG) technology. This innovation allows chatbots to access and incorporate current information from the internet, making them more accurate but also more susceptible to manipulation. Companies like Profound use AI optimization to analyze and influence how chatbots perceive and respond to specific individuals or topics. However, the sources of information these chatbots rely on can be a concern, as lesser-known websites may hold significant influence. In the case of the individual in the discussion, they discovered that their reputation among chatbots had suffered due to an old story, leading them to explore methods of manipulation and influence. While some researchers suggest using strategic text sequences to manipulate AI models, the ethical implications of such actions are a valid concern.
Chatbot manipulation: Chatbots can be manipulated through strategic text sequences and hidden text, raising concerns for potential abuse, such as falsifying resumes or hiding criminal records.
Chatbots, while advanced, are not infallible or impartial. They can be manipulated through strategic sequences of text and even invisible white text on websites. Researchers have already demonstrated this by influencing chatbot responses with hidden text. This raises concerns about potential abuse, such as manipulating chatbots to think someone has a longer resume or hiding criminal records. It's essential to remember that chatbot responses are the result of complex processes, some of which may be manipulative. The AI industry is already witnessing this cat-and-mouse game between companies and manipulators. Users should be aware that chatbot responses may have been influenced behind the scenes and should approach them with a critical mind.
The Telegram Problem + Gov. Kathy Hochul on School Phone Bans + Kevin's A.I. Reputation Rehab
Recent Episodes from Hard Fork
OpenAI's Reasoning Machine + Instagram Teen Changes + Amazon RTO Drama
Last week, OpenAI released a preview of its hotly anticipated new model, o1. We discuss what it has excelled at and how it could accelerate the timeline for building superintelligence. Then, we explain why Meta is making teenagers’ Instagram accounts private by default. And, finally, we chat with the New York Times reporter Karen Weise about why Amazon is forcing its corporate employees to go back to working in the office five days a week and whether other companies will follow suit.
Guests:
- Karen Weise, a technology correspondent for The Times.
Additional Reading:
- OpenAI Unveils New ChatGPT That Can Reason Through Math and Science
- Instagram, Facing Pressure Over Child Safety Online, Unveils Sweeping Changes
- Amazon Tells Corporate Workers to Be Back in the Office 5 Days a Week
We want to hear from you. Email us at hardfork@nytimes.com. Find “Hard Fork” on YouTube and TikTok.
Do You Need a New iPhone? + Yuval Noah Harari’s A.I. Fears + Hard Fork Crimes Division
Apple unveiled its latest gadgets at its big September event on Monday. We discuss the most interesting new features — including AirPods that can function as hearing aids and Apple Watch software that can help detect sleep apnea — and offer our advice on when to buy a new iPhone. Then, the best-selling author Yuval Noah Harari joins us to discuss his new book and his biggest fears about A.I. And finally, we crack open some criminal cases in a new segment we’re calling the Hard Fork Crimes Division. We’ll explain how one man made $10 million by manipulating music streaming services and how online instructions for building a 3D-printed gun have ended up in the hands of criminals around the world.
Guest:
- Yuval Noah Harari, author of “Sapiens,” “Homo Deus” and “Nexus.”
Additional Reading:
- Apple Unveils New iPhones With Built-In Artificial Intelligence
- Russia Secretly Worms Its Way Into America’s Conservative Media
- He’s Known as ‘Ivan the Troll.’ His 3D-Printed Guns Have Gone Viral.
- The Bands and the Fans Were Fake. The $10 Million Was Real.
We want to hear from you. Email us at hardfork@nytimes.com. Find “Hard Fork” on YouTube and TikTok.
X Gets a Brazilian Ax + Founder Mode + Listeners Respond on School Phone Bans
Over the weekend, X was banned in Brazil. We talk with The New York Times’s Brazil bureau chief, Jack Nicas, about how Brazilians are reacting, whether its owner, Elon Musk, has made a business miscalculation and what this means for free speech around the world. Then, we’re going “founder mode.” We explore why an essay about start-up founders reclaiming their authority went viral and what that tells us about how Silicon Valley thinks about power. And finally, we hear from listeners. Teachers and students left us voice messages describing how phone bans in schools are transforming their lives.
Guest:
- Jack Nicas, Brazil bureau chief for The Times
Additional Reading:
We want to hear from you. Email us at hardfork@nytimes.com. Find “Hard Fork” on YouTube and TikTok.
The Telegram Problem + Gov. Kathy Hochul on School Phone Bans + Kevin's A.I. Reputation Rehab
Telegram’s founder, Pavel Durov, was arrested in France and charged with several crimes connected to his operation of the platform. We’ll tell you what the charges against him mean for the internet. Then Gov. Kathy Hochul, Democrat of New York, joins us to discuss why she wants to ban phones statewide in public schools. And finally, Kevin has been using secret codes to try to change what A.I. chatbots think of him. We get to the bottom of whether it is possible to manipulate A.I. outputs.
This episode contains discussion of suicide connected to youth mental health. If you are having thoughts of suicide, call or text 988 to reach the 988 Suicide and Crisis Lifeline or go to SpeakingOfSuicide.com/resources for a list of additional resources.
Guest:
- Kathy Hochul, governor of New York
Additional Reading:
- How Pavel Durov, Telegram’s Founder, Went From Russia’s Mark Zuckerberg to Wanted Man
- Kathy Hochul’s ‘Big’ Plan to Ban Phones in Schools
We want to hear from you. Email us at hardfork@nytimes.com. Find “Hard Fork” on YouTube and TikTok.
What Happened to the A.I. Election? + ChatGPT for Mayor + The Productivity Tools We’re Using
This week, we discuss why so few campaigns seem to be experimenting with A.I. The Times’s Sheera Frenkel joins us with examples of the many different artificial intelligence products that have been turned down by campaigns in this election cycle, from A.I.-generated endorsements from long-dead historical figures to a synthetic version of Donald Trump. Then, we interview the Wyoming man who ran for mayor on the promise that he would exclusively use a customized ChatGPT bot to run the city. And finally, it’s time for a tech check. We run down the apps we’re using to become more productive.
Guest:
- Sheera Frenkel, a Times reporter covering technology
- Victor Miller, former candidate for mayor in Cheyenne, Wyoming
Additional Reading:
- The Year of the A.I. Election That Wasn’t
- Mayoral Candidate Vows to Let VIC, an AI Bot, Run Wyoming’s Capital City
- Three Apps That Made Me More Productive This Year
We want to hear from you. Email us at hardfork@nytimes.com. Find “Hard Fork” on YouTube and TikTok.
Can Musk Get Trump Elected? + Steve Ballmer’s Quest for the Facts + This Week in A.I.
This week, we debate whether Elon Musk’s recent stumping and fund-raising for former President Trump could help him get re-elected. Then, former Microsoft’s chief executive, Steve Ballmer, stops by to discuss his effort to depolarize our politics using government data. And finally, This Week in A.I. returns: We run down some of the biggest recent stories that caught our attention.
Guest:
- Steve Ballmer, former chief executive of Microsoft, founder of USAFacts
Additional Reading:
- Inside Donald Trump and Elon Musk’s Growing Alliance
- The American Right Is Terminally Online
- The New Home of the L.A. Clippers Is a Hot Ticket for Art
- A California Bill to Regulate A.I. Causes Alarm in Silicon Valley
We want to hear from you. Email us at hardfork@nytimes.com. Find “Hard Fork” on YouTube and TikTok.
Google’s Monopoly Money + Is the A.I. Bubble Popping? + The Hot-Mess Express
This week, a federal judge ruled that Google acted illegally to maintain a monopoly in online search. David McCabe, a New York Times reporter, joins to discuss what happens next. Then, are we in an A.I. bubble? We weigh in on the wild market swings that started the week and consider the argument that A.I. is overhyped. And finally, it’s time for our new segment: We bat around some of the weirdest recent tech drama — including a MrBeast competition that went awry and a founder who dropped a diss track aimed at a rival. All aboard the Hot-Mess Express.
Guest:
- David McCabe, a Times reporter covering technology policy.
Additional Reading:
- ‘Google Is a Monopolist,’ Judge Rules in Landmark Antitrust Case
- Tech Bosses Preach Patience as They Spend and Spend on A.I.
- What’s Behind All the Stock Market Drama?
- Willing to Die for MrBeast (and $5 Million)
We want to hear from you. Email us at hardfork@nytimes.com. Find “Hard Fork” on YouTube and TikTok.
The Zoom Election + Google DeepMind's Math Olympiad + HatGPT! Olympics Edition
This week, with hundreds of thousands of people joining online political rallies for Kamala Harris, we discuss whether 2024 is suddenly becoming the Zoom election, and what that means for both parties’ political organizing. Then, Pushmeet Kohli, a computer scientist at Google DeepMind, joins us for a conversation about how his team’s new A.I. models just hit a silver medal score on the International Mathematical Olympiad exam. And finally, it’s time for a new round of HatGPT! This time, it’s a special Olympics tech edition.
Guest:
- Pushmeet Kohli, vice president of research at Google DeepMind
Additional Reading:
- Liberal “White Dudes” Rally for Harris: “It’s Like a Rainbow of Beige”
- Move Over, Mathematicians, Here Comes AlphaProof
- Now Narrating the Olympics: A.I.-Al Michaels
We want to hear from you. Email us at hardfork@nytimes.com. Find “Hard Fork” on YouTubeand TikTok.
Kamala Harris’s Bratty Coconut Memescape + What Does $1,000 a Month Do? + The Empire CrowdStrikes Back
This week, the memes didn’t just fall out of coconut trees — a rundown of the social media reaction to Kamala Harris’s election campaign, and an exploration of what her tech platform might look like. Then we discuss a major new study on universal basic income with Elizabeth Rhodes, research director at OpenResearch, and ask whether it could be a solution to job losses to A.I. And finally, Kate Conger, a New York Times reporter, joins us to break down how the cybersecurity company CrowdStrike crashed the global IT infrastructure.
Guests:
- Elizabeth Rhodes, Research Director at OpenResearch
- Kate Conger, New York Times reporter
Additional Reading:
- What is the KHive?
- Is It Silicon Valley’s Job to Make Guaranteed Income a Reality?
- OpenResearch Unconditional Cash Study
- When Tech Fails, It Is Usually With a Whimper Instead of a Bang
We want to hear from you. Email us at hardfork@nytimes.com. Find “Hard Fork” on YouTube and TikTok.
Social Media Reacts to an Attempted Assassination + Tech Elites for Trump + TikTok's Jawmaxxing Trend
This week, an assassination attempt for the social media age: what the platforms got right and wrong in the chaotic aftermath. Then we talk with the Times reporter Teddy Schleifer from this week’s Republican National Convention in Milwaukee about the wave of Silicon Valley billionaires stepping up to back Trump. And finally, we talk to The Times’s Styles reporter Callie Holtermann about facial fitness gum, a “jawmaxxing” product targeted at teen boys online.
Guests:
- Theodore Schleifer, New York Times reporter
- Callie Holtermann, New York Times reporter
Additional Reading:
- An Assassination Attempt for the Social Media Age
- How a Network of Tech Billionaires Helped J.D. Vance Leap Into Power
- Why Are Gen Z Boys Chewing on Rock-Hard Gum?
We want to hear from you. Email us at hardfork@nytimes.com. Find “Hard Fork” on YouTube and TikTok.