
    2020, China, Face Recognition, and DeepFakes

    August 14, 2020

    Podcast Summary

    • AI systems struggle to adapt to changing circumstances: facial recognition tech misidentifies masked faces, AI may misclassify scenes, and more diverse datasets are needed to improve AI's adaptability, though creating such datasets may be challenging.

      AI systems are facing challenges in adapting to the unprecedented changes brought about by the year 2020. As people have had to adjust to wearing masks, working from home, and other new realities, AI systems have struggled to keep up. For instance, facial recognition technology has encountered difficulties distinguishing faces with masks from those without, leading to potential inaccuracies. Additionally, AI systems may misclassify scenes, such as mistaking a home office for a playground due to the presence of children in the background. These issues highlight the need for more robust and diverse datasets to improve AI's ability to adapt to changing circumstances. However, creating such datasets may be challenging due to limited access to diverse communities and subjects. Overall, this discussion underscores the importance of continued research and development in AI to help it better adapt to our ever-evolving world.
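
      The verification failure described above boils down to a threshold on embedding similarity. Here is a toy sketch of that mechanism in Python; the embedder is a random stand-in rather than a real face model, and the 0.8 threshold is an illustrative assumption, not a production value.

```python
# Toy illustration of threshold-based face verification, and of why
# masks cause trouble: occluding half the face shifts the embedding
# enough to drag the similarity score toward the match threshold.
# The embedder is an untrained stand-in, NOT a real face model.
import torch
import torch.nn as nn
import torch.nn.functional as F

embedder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 112 * 112, 128)).eval()

def verify(a: torch.Tensor, b: torch.Tensor, threshold: float = 0.8):
    """Declare a match when embedding cosine similarity clears the bar."""
    with torch.no_grad():
        sim = F.cosine_similarity(embedder(a), embedder(b)).item()
    return sim >= threshold, sim

face = torch.rand(1, 3, 112, 112)     # stand-in for an enrolled photo
masked = face.clone()
masked[:, :, 56:, :] = 0.5            # crudely "mask" the lower half

print(verify(face, face))             # identical image: similarity 1.0
print(verify(face, masked))           # occlusion lowers the score
```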

    • Ensuring AI accessibility and fairness during a pandemic: AI technologies like facial recognition must adapt to masks and other protective measures to remain inclusive. While AI can aid in discovering new compounds, it cannot expedite vaccine development, which requires extensive testing and clinical trials.

      While AI technologies, such as facial recognition, are becoming more prevalent, it's important to ensure they are accessible and unbiased for all individuals. This includes adjusting AI to work effectively with masks and other protective measures. Additionally, while AI can aid in the discovery of new compounds like halicin, it cannot magically deliver a coronavirus vaccine. The development of a vaccine requires extensive testing and clinical trials, which AI cannot expedite. Another topic related to 2020 is the use of facial recognition by FEMA. The article "The Panopticon is already here" highlights the extensive use of facial recognition in China and the vast amount of data being collected and processed. It's crucial to be aware of the power and reach of these technologies, as well as their limitations. AI can assist in many areas, but it cannot replace human expertise and judgment, especially in complex and critical processes like vaccine development.

    • China's Surveillance State: Tech Companies and the Government's Close Collaboration - China's use of advanced tech for surveillance, driven by government-tech collaboration and centralized data, contrasts with the US's privacy laws and siloed data. Trump administration policies may accelerate Chinese development of surveillance tech, but privacy tools like Fawkes offer some protection.

      China's advanced use of technology for surveillance, particularly facial recognition, is driven by the close collaboration between the government and tech companies, as well as the centralization of data around individuals' identities. This contrasts with the US, where privacy laws and the separation of tech companies from the government result in more siloed and messy data. The article also highlights the potential impact of the Trump administration's policies on Chinese talent and research in AI, which could push more development of surveillance technology to China. In response, privacy tools are emerging, such as the cloaking application Fawkes, which aims to protect individuals from facial recognition technology. Overall, it's important to be aware of these trends and their implications for privacy and security.

    • University of Chicago researchers develop software to bypass facial recognition: researchers created 'Fawkes', software that alters images to evade facial recognition. It is free and has been downloaded over 100,000 times, but concerns remain over its limited real-world impact and the need for more user-friendly solutions.

      Researchers at the University of Chicago have developed software called "Fawkes" that can alter images to bypass facial recognition systems. The technique, known as cloaking, makes tiny changes to the pixels in a photo, making it difficult for state-of-the-art facial recognition algorithms to identify the person. The software is free for both Windows and Mac and has been downloaded over 100,000 times. However, the article raises concerns about the limited impact of this technology given the vast number of labeled images already in existence, and it highlights the need for easier-to-use solutions as well as the potential for both defensive and offensive applications. The researchers' claim of a 100% success rate is also questioned, as it could mislead users and create false expectations. Overall, this technology marks an important step in the ongoing battle between privacy and advanced facial recognition systems, but it will require wider adoption and more user-friendly tooling to make a significant impact.
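
      The cloaking idea can be sketched in a few lines: perturb a photo so its feature-space embedding drifts toward a decoy identity while the per-pixel changes stay imperceptibly small. This is a minimal sketch of the general technique, not the Fawkes implementation; the embedder below is an untrained stand-in, and eps, the step count, and the decoy choice are arbitrary assumptions.

```python
# Feature-space "cloaking" sketch: optimize a small perturbation so the
# image's embedding approaches that of a decoy identity. A real tool
# would use a trained face feature extractor and tuned hyperparameters.
import torch
import torch.nn as nn

embedder = nn.Sequential(              # placeholder for a real face embedder
    nn.Conv2d(3, 16, 3, stride=2), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 128),
).eval()

def cloak(image, decoy, steps=50, eps=0.03, lr=0.005):
    """Return image + perturbation whose embedding nears the decoy's."""
    with torch.no_grad():
        target = embedder(decoy)
    delta = torch.zeros_like(image, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        loss = nn.functional.mse_loss(embedder(image + delta), target)
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)    # keep the change imperceptible
    return (image + delta).clamp(0, 1).detach()

original = torch.rand(1, 3, 112, 112)  # stand-ins for real face photos
decoy = torch.rand(1, 3, 112, 112)
cloaked = cloak(original, decoy)
print((cloaked - original).abs().max())  # perturbation stays within eps
```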

    • Demonstrating vulnerabilities in facial recognition technology: facial recognition, while convenient, is not infallible and can be manipulated or biased, requiring human oversight.

      While advancements in technology such as facial recognition offer convenience and efficiency, they also come with vulnerabilities and biases. A team from McAfee, a company best known for antivirus software, recently demonstrated this by releasing a method that fools facial recognition systems into identifying someone as a different person; the technique subtly alters an image to confuse the system, raising awareness of these inherent vulnerabilities. Meanwhile, activists like Joy Buolamwini have highlighted the gender and race disparities in facial recognition technology, leading companies such as Amazon, Microsoft, and IBM to place moratoriums on its sale. It's crucial to remember that these AI systems are not infallible and can be manipulated or biased, emphasizing the importance of human oversight.
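
      The misidentification attack described above can be illustrated with a one-step targeted perturbation. This is a hedged sketch, not McAfee's actual method: the classifier is a toy untrained model, and the impostor label, eps, and the single fast-gradient step are illustrative assumptions.

```python
# Targeted fast-gradient sketch: nudge person A's photo so a classifier
# leans toward person B's label. Toy stand-in model, not a real system.
import torch
import torch.nn as nn
import torch.nn.functional as F

classifier = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 10))

def targeted_fgsm(image, target_label, eps=0.02):
    """One gradient step that makes the impostor label MORE likely."""
    image = image.clone().requires_grad_(True)
    loss = F.cross_entropy(classifier(image), target_label)
    loss.backward()
    # step AGAINST the gradient to decrease the impostor-label loss
    return (image - eps * image.grad.sign()).clamp(0, 1).detach()

photo = torch.rand(1, 3, 64, 64)       # stand-in for person A's photo
impostor = torch.tensor([7])           # hypothetical label for person B
adversarial = targeted_fgsm(photo, impostor)
print(classifier(adversarial).argmax(dim=1))  # nudged toward label 7
```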

    • Addressing Biases and Harms in AI with the Algorithmic Vulnerability Bounty Project: the project incentivizes individuals and organizations to report biases and harms caused by AI, which disproportionately affect historically marginalized communities.

      The use of AI is expanding rapidly, with new initiatives like the Algorithmic Vulnerability Bounty Project aiming to address biases and harms caused by AI. This project, which incentivizes individuals and community-based organizations to report such issues, is particularly important as AI becomes more prevalent and as historically marginalized communities are disproportionately affected. Meanwhile, the ease and affordability of creating deep fakes are increasing, posing new challenges in the realm of misinformation. These developments underscore the importance of staying informed and vigilant as AI continues to shape our world. The Algorithmic Vulnerability Bounty Project represents a step toward ensuring that AI benefits everyone and does not perpetuate harm, while the growing accessibility of deep fake technology is a reminder of the need to prepare for the potential misuse of AI.

    • Deep fakes in text form are a growing concern: deep fake text, generated by advanced AI models, can produce human-like text that is hard to distinguish from real text, posing significant challenges to authenticity and truth in information.

      Deep fakes, which include images, videos, and text, pose significant challenges to authenticity and truth in information. While there have been efforts to detect deep fake images and videos, deep fake text is much harder to identify due to the absence of clear artifacts or comparison points. Deep fake text, generated by advanced AI models like GPT-3, can produce human-like text that is difficult to distinguish from real text. This is particularly concerning because synthetic text can be generated in large volumes, potentially flooding the information landscape with manipulated information. The Wired article "AI-Generated Text Is the Scariest Deepfake of All" highlights this issue, emphasizing the need for increased awareness and detection methods for deep fake text. It's important to note that while GPT-3 might produce text that reads like what a human would write, careful analysis can still surface subtle differences or inconsistencies. Overall, the ability to generate convincing deep fakes, whether in the form of images, videos, or text, underscores the importance of critical thinking, fact-checking, and media literacy in our increasingly digital world.
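
      One such "careful analysis" heuristic scores text by how predictable it is to a language model: machine-generated text tends to have unusually low perplexity, because every token was a high-probability pick. Below is a minimal sketch, assuming the Hugging Face transformers package and GPT-2 weights; the threshold of 30 is an uncalibrated illustration, and real detectors are far more involved.

```python
# Perplexity-based heuristic for flagging possibly machine-written text.
# Low perplexity = very predictable to the model = worth a closer look.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def perplexity(text: str) -> float:
    """Average per-token perplexity of `text` under GPT-2."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss   # mean cross-entropy
    return float(torch.exp(loss))

sample = "The quick brown fox jumps over the lazy dog."
score = perplexity(sample)
flag = "suspiciously predictable" if score < 30.0 else "plausibly human"
print(f"perplexity = {score:.1f} ({flag})")
```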

    • Understanding GPT-3's limitations: while GPT-3 generates impressive text, it lacks the ability to consistently produce complex and substantial thought. Be aware of its limitations and do not overhype its capabilities.

      While GPT-3, a powerful AI model from OpenAI, can generate human-like text and impress many, it still has significant limitations. The model may produce plausible responses at a high level, but lacks the ability to consistently produce complex and substantial thought. It's important to be aware of these limitations and not overhype the capabilities of GPT-3 or similar AI models. OpenAI's CEO, Sam Altman, has also warned against the hype surrounding GPT-3 and emphasized that it is just an early glimpse into the potential of AI. While impressive demonstrations of AI capabilities can be misleading, it's essential to remember that there are still many challenges to be addressed before we can achieve truly human-like AI. It's a reminder to stay informed and not be blinded by the hype.

    Recent Episodes from Last Week in AI

    #172 - Claude and Gemini updates, Gemma 2, GPT-4 Critic

    Our 172nd episode with a summary and discussion of last week's big AI news!

    With hosts Andrey Kurenkov (https://twitter.com/andrey_kurenkov) and Jeremie Harris (https://twitter.com/jeremiecharris)

    Feel free to leave us feedback here.

    Check out our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekinai.com and/or hello@gladstone.ai

    Last Week in AI
    July 01, 2024

    #171 - Apple Intelligence, Dream Machine, SSI Inc

    Our 171st episode with a summary and discussion of last week's big AI news!

    With hosts Andrey Kurenkov (https://twitter.com/andrey_kurenkov) and Jeremie Harris (https://twitter.com/jeremiecharris)

    Feel free to leave us feedback here.

    Check out our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + Links:

    Last Week in AI
    June 24, 2024

    #170 - new Sora rival, OpenAI robotics, understanding GPT4, AGI by 2027?

    Our 170th episode with a summary and discussion of last week's big AI news!

    With hosts Andrey Kurenkov (https://twitter.com/andrey_kurenkov) and Jeremie Harris (https://twitter.com/jeremiecharris)

    Feel free to leave us feedback here.

    Check out our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + Links:

    Last Week in AI
    June 09, 2024

    #169 - Google's Search Errors, OpenAI news & DRAMA, new leaderboards

    Our 169th episode with a summary and discussion of last week's big AI news!

    Feel free to leave us feedback here: https://forms.gle/ngXvXZpNJxaAprDv6

    Check out our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + Links:

    Last Week in AI
    June 03, 2024

    #168 - OpenAI vs Scar Jo + safety researchers, MS AI updates, cool Anthropic research

    Our 168th episode with a summary and discussion of last week's big AI news!

    With guest host Gavin Purcell from AI for Humans podcast!

    Check out our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + Links:

    Last Week in AI
    May 28, 2024

    #167 - GPT-4o, Project Astra, Veo, OpenAI Departures, Interview with Andrey

    Our 167th episode with a summary and discussion of last week's big AI news!

    With guest host Daliana Liu (https://www.linkedin.com/in/dalianaliu/) from The Data Scientist Show!

    And a special one-time interview with Andrey in the latter part of the podcast.

    Check out our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + links:

    Last Week in AI
    May 19, 2024

    #166 - new AI song generator, Microsoft's GPT4 efforts, AlphaFold3, xLSTM, OpenAI Model Spec

    Our 166th episode with a summary and discussion of last week's big AI news!

    Check out our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + links:

    Last Week in AI
    May 12, 2024

    #165 - Sora challenger, Astribot's S1, Med-Gemini, Refusal in LLMs

    Our 165th episode with a summary and discussion of last week's big AI news!

    Check out our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + links:

    Last Week in AI
    May 05, 2024

    #164 - Meta AI, Phi-3, OpenELM, Bollywood Deepfakes

    Our 164th episode with a summary and discussion of last week's big AI news!

    Check out our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + links:

    Last Week in AI
    April 30, 2024

    #163 - Llama 3, Grok-1.5 Vision, new Atlas robot, RHO-1, Medium ban

    Our 163rd episode with a summary and discussion of last week's big AI news!

    Note: apologies for this one coming out a few days late; it got delayed in editing. -Andrey

    Check out our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + links:

    Last Week in AI
    April 24, 2024

    Related Episodes

    DeepNude Bot, Tesla Full Self Driving, Google AI US-Mexico Border

    This week: Automating Image Abuse: deepfake bots on Telegram; Activists Turn Facial Recognition Tools Against the Police; Tesla is putting 'self driving' in the hands of drivers amid criticism the tech is not ready; Google AI tech will be used for virtual border wall, CBP contract shows.

    0:00 - 0:40 Intro
    0:40 - 5:40 News Summary segment
    5:40 - News Discussion segment

    Find this and more in our text version of this news roundup:  https://www.skynettoday.com/digests/the-eighty-eighth

    Music: Deliberate Thought, Inspired by Kevin MacLeod (incompetech.com)

    Listen Again: Warped Reality

    Original broadcast date: October 30, 2020. False information on the internet makes it harder and harder to know what's true, and the consequences have been devastating. This hour, TED speakers explore ideas around technology and deception. Guests include law professor Danielle Citron, journalist Andrew Marantz, and computer scientist Joy Buolamwini.

    Learn more about sponsor message choices: podcastchoices.com/adchoices

    NPR Privacy Policy

    5. Access Denied - Tech at the Border

    Shownotes Episode 5

    Transcript available HERE
    Content Note: This episode mentions children in immigration detention and residential schools.

    Borders have long been sites of colonial enforcement over who can come and go and how Indigenous peoples are treated. Canada is no exception. Increasingly, governments look to technology to make potentially life-or-death decisions about whether a person fleeing danger should be allowed to cross a border. What happens when that technology reinforces bias and makes unreliable choices?

    Author and activist Harsha Walia leads us through how borders came to exist and how Canada has used them to keep "undesirables" out of the country with tools like the Safe Third Country Agreement. We connect with Joy Henderson, an Afro-Indigenous person whose own experience with the oppressive power of borders means they're unable to claim status in Canada. We chat with Jamie Duncan, a PhD student and researcher, about how deploying artificial intelligence at the border can reinforce systemic racism in some disturbing ways. And finally, Petra Molnar, a researcher who authored a report with Citizen Lab at the University of Toronto, explains her work on how the border is often a test site for invasive surveillance technologies on asylum seekers.

    Take Action

    Further Resources

    See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

    What's In A Face

    We think our faces are our own. But technology can use them to identify, influence and mimic us. This week, TED speakers explore the promise and peril of turning the human face into a digital tool. Guests include super recognizer Yenny Seo, Bloomberg columnist Parmy Olson, visual researcher Mike Seymour and investigative journalist Alison Killing.

    Learn more about sponsor message choices: podcastchoices.com/adchoices

    NPR Privacy Policy

    Clearview AI in the Capitol, Medical AI Regulation, DeepFake Text

    This week:

    0:00 - 0:35 Intro
    0:35 - 4:30 News Summary segment
    4:30 - News Discussion segment

    Find this and more in our text version of this news roundup:  https://lastweekin.ai/p/99

    Music: Deliberate Thought, Inspired by Kevin MacLeod (incompetech.com)