
    Black Box: episode 2 – The hunt for ClothOff, the deepfake porn app

    August 27, 2024
    What challenges do lawmakers face in regulating deepfake technology?
    How can digital literacy protect users from deepfake threats?
    What incident involved the misuse of the ClothOff app in Spain?
    Who is Sergey Cherumusch and what did he discover?
    What ethical dilemmas arise from the misuse of AI apps?

    Podcast Summary

    • Deepfake Regulation: Regulating deepfakes is challenging due to rapid technological advancement and the potential for humiliation and bullying, underscoring the need for digital literacy and education.

      The use of deepfake technology, like the ClothOff app, is becoming increasingly difficult to regulate and can be used to create falsified images and videos that humiliate and bully individuals, particularly young women. The Guardian's investigation into the creators of this app revealed the challenges lawmakers face in controlling such technologies and the potential consequences of their misuse. The episode also highlights the importance of digital literacy and education in protecting oneself from such threats. The story of Miriam Aladeeb, a gynecologist from a small Spanish town, serves as a cautionary tale about the potential harm of deepfakes and the need for greater awareness and action to combat their spread.

    • AI privacy violations, ethical dilemmas: Misuse of AI apps like ClothOff can lead to serious privacy violations and ethical dilemmas, as seen in Almendralejo, Spain, where boys created and shared fake naked images of classmates, causing distress and fear. Reporting incidents, seeking support, stronger laws, and ongoing dialogue are all crucial to addressing these challenges.

      The misuse of artificial intelligence apps, such as ClothOff, can lead to serious violations of privacy and safety for young girls. In Almendralejo, Spain, a group of boys used this app to create and share fake naked images of classmates, causing widespread distress and fear. The incident highlights the ethical dilemmas and legal complexities that arise when technology outpaces moral and legal frameworks. The case also underscores the importance of reporting such incidents and seeking support, as well as the need for stronger laws and regulations to protect children from the potential harms of AI. The investigation into this case marked new territory for Spanish law enforcement, raising questions about how to punish and rehabilitate the perpetrators while also holding the creators of the app accountable. Ultimately, the incident serves as a reminder of the need for ongoing dialogue and action to address the challenges and risks posed by the intersection of technology and human behavior.

    • ClothOff app creators: Despite presenting themselves as harmless, the ClothOff creators run an app used to exploit teenage girls; they communicate through anonymous channels, hide their identities, and justify their actions by claiming to provide 'positive emotions' and help users 'not be ashamed of nudity', causing significant harm to victims and their families.

      The creators of the app "ClothOff," which allows users to remove clothing from images using AI, present themselves as an innocent and harmless entity, but the reality is much darker. The app, which has gained a large following, is used to target and exploit teenage girls, with the creators using anonymous channels to communicate with customers and evade identification. The app's website is carefully designed to hide the identity of the creators, and they justify their actions by claiming they are giving users "positive emotions" and helping them "not be ashamed of nudity." However, the emotional impact on the victims and their families is significant, and it's crucial to shed light on the human beings behind the app who are causing this chaos and profiting from it. It's essential to continue investigating and raising awareness about such apps and the harm they can cause.

    • Creators of harmful apps hiding their identities: The creators of harmful apps may go to great lengths to hide their identities, making it challenging for investigators to hold them accountable for their actions.

      The creators of the ClothOff app, which they claimed was meant to help people feel comfortable with their bodies, were difficult to identify and went to great lengths to hide their true intentions. When investigators visited the registered address of the company, they found it was a virtual office, making it nearly impossible to uncover the individuals behind the app. Furthermore, they discovered that the app was linked to another website, "AI image craft," which suggested the same people were involved in both projects. The creators' efforts to hide their identities and the nature of their business highlight the challenges in holding individuals accountable for potentially harmful online content.

    • Online Deception: Online businesses can be fraudulent fronts that disguise illegal activities, and deepfakes pose significant threats to individuals' privacy and consent. Stay informed, question the legitimacy of online businesses, and be cautious of deepfakes.

      The internet can be a labyrinth of deceit and illusion, with fake businesses and deepfakes posing significant threats. In this case, Texture Oasis, a seemingly legitimate architectural materials business, was revealed to be a sham used to disguise payments to ClothOff, a website involved in creating and sharing deepfakes of women and girls without their consent. The investigation also uncovered the use of a fake employee named Rick Ellis, whose identity was stolen and used to add legitimacy to Texture Oasis. As governments begin to take action against non-consensual deepfakes, it's essential to remain vigilant and aware of the potential dangers lurking online. The people behind ClothOff managed to evade detection for a while, but their eventual unmasking serves as a reminder of the importance of staying informed and questioning what we see on the internet.

    • Business association with suspicious entity: Unintended association with a suspicious business can harm a company's reputation. It's crucial to investigate and address such situations promptly to mitigate any negative impact.

      During an interview with Sergey Cherumusch, a Russian entrepreneur, it was revealed that his company, B.S. Europe, had unknowingly been associated with ClothOff. Although B.S. Europe had never started any work for ClothOff, its name had been used on the ClothOff website. Initially, Sergey seemed uninterested and relaxed about the situation. However, when he was informed of the serious nature of the investigation and the potential negative implications for his business, he offered to help find out more about ClothOff. He subsequently provided emails and screenshots of conversations between his company and ClothOff's representatives. The most intriguing discovery was the identity of ClothOff's founder, who went by the name "Al" on Telegram and had been posting videos from around the world without revealing his face. This finding led the team to investigate the elusive figure behind ClothOff further.

    • Online Identity Investigation: Thorough research and investigation can unmask the people behind online entities; every post and profile leaves traces that can later be used to identify their owners.

      The investigation into the mysterious ClothOff app led the journalists to uncover the identities of its potential founders, Dasha Babicheva and Alexander Babichow, through a combination of social media and public records. The discovery of matching photos, names, and locations across different platforms provided strong evidence that these individuals were involved in the operation of the app. Despite initial denials, Babichow eventually admitted to being a part of ClothOff when confronted by the journalists. This case illustrates the power of thorough research and investigation in unmasking the people behind online entities. It also shows how every post and profile can leave a trace that can be followed back in time.

    • Deepfake Technology: Deepfake technology is advancing rapidly and causing significant harm through the production and distribution of deepfake pornography and other manipulated content, requiring updated laws and efforts to identify and prosecute those involved.

      The app ClothOff, which produces and distributes deepfake pornography, is a sophisticated operation likely involving a network of people, and it's just one of many similar apps emerging in the market. The individuals behind these apps go to great lengths to stay hidden, but the harm they cause to individuals and society is significant. Deepfake technology is advancing rapidly, and it's not limited to pornography: it can also be used to create fake videos of politicians, cloned voices, and footage of atrocities that look real but never happened. The implications for truth and authenticity are concerning. Miriam, who shared the story of her daughter's experience with ClothOff, expressed relief at seeing names attached to the app but wished the people involved knew the harm they were causing. She urged that laws be updated and that those behind these apps keep being pursued, to make their lives as difficult as possible. The wave of deepfake content is growing, and it's crucial that we're prepared to distinguish fact from fiction.

    • Deepfake investigation: The person behind the ClothOff press account on Telegram denied that the app had been used for the deepfakes in Spain or New Jersey and claimed it cannot create under-18 images, but payments to the site were linked to a company they claimed no relation to, and MasterCard has prohibited payments for deepfake content on its network.

      The person behind the controversial ClothOff press account on Telegram denied that their app had been used to create the deepfake images in Spain or New Jersey, and claimed it's impossible to create under-18 images using their technology. However, payments to their website were directed to accounts linked to Texture Oasis, a company they claimed no relation to. MasterCard has since prohibited payments for deepfake content on its network. The account operator also denied any connection to Alexander Babichow and Dasha Babicheva, who themselves declined to respond to detailed questions. The investigation was produced by Alex Atack, with additional reporting and translation support from other members of the team. The Guardian continues to provide guidance on non-consensual deepfake content.
