
    False Facial Recognition, Biased AI Drama, and Neo-Phrenology

    July 04, 2020

    Podcast Summary

    • False facial recognition match leads to wrongful arrest: Lack of standards and oversight in law enforcement's use of facial recognition technology can result in wrongful arrests due to inaccurate matches and reliance on untested software.

      The use of facial recognition technology by law enforcement without proper testing and regulation can lead to wrongful arrests. In this episode of Let's Talk AI, PhD students Andrey Kurenkov and Sharon Zhou discussed two articles, from The New York Times and another outlet, about a man named Robert Julian-Borchak Williams who was arrested based on a false facial recognition match. The company that sold the technology to the police department, DataWorks Plus, hadn't developed the facial recognition software itself or tested it for accuracy; instead, it relied on third-party software and didn't release specific accuracy figures. After the software identified Williams, officers showed his picture to a security guard, who picked him out of a photo array, leading to an arrest warrant. This raises concerns about the lack of standards and oversight in law enforcement's use of facial recognition technology. Sharon found the situation alarming and dystopian, since Detroit police policy explicitly forbids arrests based solely on facial recognition matches; a loophole, however, allowed officers to present a photo array to a third party, leading to Williams' arrest. This highlights the importance of regulating and testing facial recognition technology before it is used in law enforcement.

    • Case of Mr. Williams highlights the risks of relying on facial recognition in law enforcement: Facial recognition technology in law enforcement can lead to inaccurate results, causing harm and potential bias, and requires responsible use and oversight.

      Relying solely on facial recognition technology to make arrests can lead to troubling and inaccurate results, as demonstrated in the case of Mr. Williams. The human impact of these errors can be significant, including missed work and public embarrassment. Furthermore, studies have shown that these systems can be biased against certain ethnicities, leading to even more false accusations. Until these systems can be used responsibly and with proper oversight, their use in law enforcement may do more harm than good. Williams' arrest, made on the basis of a faulty facial recognition match and without proper explanation, underscores the need for greater scrutiny and transparency in the use of these technologies.

    • AI use in law enforcement raises concerns about misuse and bias: Responsible AI research and development is crucial to prevent potential risks and minimize harm, and ethical use of AI is essential to ensure technology benefits everyone, regardless of race, gender, or other factors.

      The use of facial recognition technology in law enforcement, as seen in the Detroit case, raises concerns about potential misuse and bias. To mitigate this, some argue that the technology should only be used for violent crimes rather than for less serious offenses. However, even in cases of violent crimes, a false accusation can have severe consequences. To prevent such incidents, it's crucial that researchers, developers, and industry professionals take responsibility for the ethical use of AI. The second story highlights the issue of AI bias, as demonstrated by a machine learning tool called PULSE. This tool, designed to upscale low-resolution photos by searching the latent space of a face-generating model, can produce biased results: its outputs often fail to resemble people of color or women, even when the input depicts one. The incident serves as a reminder that researchers need to be mindful of the potential downstream consequences of their work and take steps to mitigate bias. Both stories underscore the importance of responsible AI research and development. As AI becomes more prevalent in our society, it's essential that those involved in its creation and deployment are aware of the potential risks and take steps to minimize harm. By fostering a culture of ethical AI use, we can help ensure that technology benefits everyone, regardless of race, gender, or other factors.
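      For intuition about where the bias enters, here is a minimal sketch of PULSE-style upscaling, assuming a pretrained face GAN is available; the DummyGenerator below is a hypothetical stand-in for such a model (e.g., StyleGAN) so the script runs end to end, and all names and parameters are illustrative rather than PULSE's actual code.

```python
# Minimal sketch of PULSE-style upscaling (illustrative, not PULSE's real code).
# Idea: search the GAN's latent space for a high-res face whose downscaled
# version matches the low-res input.
import torch
import torch.nn.functional as F

class DummyGenerator(torch.nn.Module):
    """Hypothetical stand-in for a pretrained face GAN (e.g., StyleGAN);
    maps a latent vector to a 64x64 RGB image."""
    def __init__(self, latent_dim: int = 128):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(latent_dim, 3 * 64 * 64), torch.nn.Tanh())

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return self.net(z).view(-1, 3, 64, 64)

def pulse_style_upscale(lowres: torch.Tensor, G: torch.nn.Module,
                        latent_dim: int = 128, steps: int = 200,
                        lr: float = 0.1) -> torch.Tensor:
    # Optimize a latent vector so that downscale(G(z)) matches the input.
    # Everything the input does not constrain is filled in by the GAN's
    # learned prior -- which is exactly where dataset bias enters.
    z = torch.randn(1, latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        downscaled = F.interpolate(G(z), size=lowres.shape[-2:],
                                   mode="bilinear", align_corners=False)
        loss = F.mse_loss(downscaled, lowres)
        loss.backward()
        opt.step()
    return G(z).detach()

G = DummyGenerator()
lowres_face = torch.rand(1, 3, 16, 16) * 2 - 1  # placeholder 16x16 input
print(pulse_style_upscale(lowres_face, G).shape)  # torch.Size([1, 3, 64, 64])
```

      Because the low-resolution input constrains only a coarse match, every fine detail in the output comes from the generator's learned prior; if that prior was fit to a dataset dominated by white faces, the "enhanced" result drifts toward them regardless of whom the input depicts.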

    • The importance of addressing bias in machine learning systems: Recognizing the connection between biased data and biased ML systems, dealing with bias in both research and deployed products, and implementing proper norms and guidelines are crucial to mitigating the impact of biased data and systems in AI.

      The recent discussion surrounding bias in machine learning systems, sparked by a controversial image from an academic paper, highlighted the importance of recognizing the connection between biased data and biased ML systems. The conversation, which took place primarily on Twitter, showcased the need for more nuanced discussions of this complex topic and the responsibility of researchers to address bias in their work. Yann LeCun, Facebook's chief AI scientist, weighed in, emphasizing the significance of dealing with bias in deployed products rather than in academic papers. However, many researchers argued that the two are interconnected and that addressing bias in research is crucial. The conversation also underscored the importance of proper norms and guidelines in the research community and the limitations of Twitter as a venue for nuanced discussion. Ultimately, the incident served as a reminder of the need for continuous reflection and improvement in the field of AI to mitigate the impact of biased data and systems.

    • Emphasizing Transparency and Ethics in Machine Learning: Researchers call for transparency and ethical considerations in machine learning, including creating model cards and acknowledging potential biases, while a coalition of experts calls for a halt to research on criminality-predicting algorithms due to racial bias and flawed methods.

      Transparency and ethical considerations are crucial when developing and deploying machine learning models. Researchers Margaret Mitchell and Timnit Gebru emphasized the importance of acknowledging potential biases and limitations in models, as well as providing clear explanations and warnings. This includes creating a one-page summary, or model card, outlining the data set, training process, objective, and potential biases. The community should adopt this practice to ensure that users are aware of any caveats or issues. In a related development, a coalition of AI experts called for a halt to research on algorithms claiming to predict criminality, citing racial bias and flawed methods. These events underscore the need for ethical guidelines and transparency in machine learning research and applications.
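      As a concrete illustration, here is a minimal sketch of a model card captured as structured data, loosely following the "Model Cards for Model Reporting" proposal from Mitchell, Gebru, and colleagues; the field names and example values are illustrative assumptions, not a standard schema.

```python
# Minimal sketch of a model card as structured data; fields and values are
# illustrative, loosely following "Model Cards for Model Reporting"
# (Mitchell et al., 2019), not an official schema.
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    model_name: str
    intended_use: str       # what the model is for, and what it is not for
    training_data: str      # source and composition of the training set
    objective: str          # what the model was optimized to do
    evaluation: str         # how accuracy was measured, on which populations
    known_biases: list[str] = field(default_factory=list)
    caveats: list[str] = field(default_factory=list)

# Hypothetical card for a face-matching system like those discussed above.
card = ModelCard(
    model_name="face-matcher-v1",
    intended_use="Ranking candidate matches for human review; not for "
                 "automated arrest decisions.",
    training_data="Vendor-supplied face dataset; demographic composition "
                  "undocumented.",
    objective="Minimize verification error on held-out image pairs.",
    evaluation="Overall accuracy only; no per-demographic breakdown reported.",
    known_biases=["Similar systems show higher false-match rates for "
                  "darker-skinned faces."],
    caveats=["A match is an investigative lead, not an identification."],
)
print(card.model_name, "-", card.caveats[0])
```

      Even a one-page card like this makes the caveats explicit at the point of use, which is precisely the kind of warning that was missing in the Williams case.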

    • AI Ethics: Reflecting on Power Structures and Potential Oppressions. Experts raised ethical concerns and identified scientific flaws in a paper claiming to predict criminality from faces with no racial bias, and emphasized the importance of reflecting on power structures and potential oppressions in AI research.

      The scientific community and the public are increasingly scrutinizing both the ethical implications and the validity of AI research, particularly in areas like facial recognition and criminality prediction. A recent example involved a paper slated for publication by Springer, which claimed to predict criminality using facial recognition with 80% accuracy and no racial bias. The paper was met with opposition from experts, who signed a letter asking Springer not to publish it due to ethical concerns and scientific flaws. The letter emphasized the importance of researchers reflecting on the power structures and potential oppressions that make their work possible. Meanwhile, the US administration's decision to suspend work visas for foreign workers, including those in AI, could threaten the diversity and innovation of the field. These events underscore the need for ongoing dialogue and ethical consideration in AI research.

    • Impact of US Policy Changes on International AI Students: Recent US policy changes restricting work visas for international students have raised concerns within the tech community about the potential impact on AI research and collaboration. Many fear long-term damage to the field, with approximately two-thirds of PhD students being international and many staying in the US after graduation.

      International students make up a significant portion of PhD students in top US AI programs, with approximately two-thirds being international. These students often stay in the US after graduation, contributing to the AI field. However, recent policy changes, such as restrictions on work visas, have raised concerns within the tech community about their potential impact on AI research and collaboration. Many in the industry, including the speaker, have expressed concern about the long-term implications of these policies. The speaker, an immigrant herself, also shared her personal experience of moving to the US on an H-1B visa. She hopes that the outcry from the tech community and others means the suspension will only be temporary and will not cause lasting damage to the AI field, and she speculated that the policies may be motivated by political considerations. Overall, the discussion underscores the importance of international collaboration and the potential consequences of limiting it.

    Recent Episodes from Last Week in AI

    #172 - Claude and Gemini updates, Gemma 2, GPT-4 Critic

    Our 172nd episode with a summary and discussion of last week's big AI news!

    With hosts Andrey Kurenkov (https://twitter.com/andrey_kurenkov) and Jeremie Harris (https://twitter.com/jeremiecharris)

    Feel free to leave us feedback here.

    Read our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Last Week in AI
    July 01, 2024

    #171 - Apple Intelligence, Dream Machine, SSI Inc

    Our 171st episode with a summary and discussion of last week's big AI news!

    With hosts Andrey Kurenkov (https://twitter.com/andrey_kurenkov) and Jeremie Harris (https://twitter.com/jeremiecharris)

    Feel free to leave us feedback here.

    Read our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + Links:

    Last Week in AI
    June 24, 2024

    #170 - new Sora rival, OpenAI robotics, understanding GPT4, AGI by 2027?

    Our 170th episode with a summary and discussion of last week's big AI news!

    With hosts Andrey Kurenkov (https://twitter.com/andrey_kurenkov) and Jeremie Harris (https://twitter.com/jeremiecharris)

    Feel free to leave us feedback here.

    Read our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + Links:

    Last Week in AI
    June 09, 2024

    #169 - Google's Search Errors, OpenAI news & DRAMA, new leaderboards

    Our 169th episode with a summary and discussion of last week's big AI news!

    Feel free to leave us feedback here: https://forms.gle/ngXvXZpNJxaAprDv6

    Read our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + Links:

    Last Week in AI
    June 03, 2024

    #168 - OpenAI vs Scar Jo + safety researchers, MS AI updates, cool Anthropic research

    Our 168th episode with a summary and discussion of last week's big AI news!

    With guest host Gavin Purcell from AI for Humans podcast!

    Read our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + Links:

    Last Week in AI
    May 28, 2024

    #167 - GPT-4o, Project Astra, Veo, OpenAI Departures, Interview with Andrey

    Our 167th episode with a summary and discussion of last week's big AI news!

    With guest host Daliana Liu (https://www.linkedin.com/in/dalianaliu/) from The Data Scientist Show!

    And a special one-time interview with Andrey in the latter part of the podcast.

    Read our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + links:

    Last Week in AI
    May 19, 2024

    #166 - new AI song generator, Microsoft's GPT4 efforts, AlphaFold3, xLSTM, OpenAI Model Spec

    Our 166th episode with a summary and discussion of last week's big AI news!

    Read our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + links:

    Last Week in AI
    May 12, 2024

    #165 - Sora challenger, Astribot's S1, Med-Gemini, Refusal in LLMs

    Our 165th episode with a summary and discussion of last week's big AI news!

    Read our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + links:

    Last Week in AI
    May 05, 2024

    #164 - Meta AI, Phi-3, OpenELM, Bollywood Deepfakes

    Our 164th episode with a summary and discussion of last week's big AI news!

    Read our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + links:

    Last Week in AI
    April 30, 2024

    #163 - Llama 3, Grok-1.5 Vision, new Atlas robot, RHO-1, Medium ban

    Our 163rd episode with a summary and discussion of last week's big AI news!

    Note: apologies for this one coming out a few days late; it got delayed in editing. -Andrey

    Read our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + links:

    Last Week in AI
    April 24, 2024

    Related Episodes

    Bias in Twitter & Zoom, LAPD Facial Recognition, GPT-3 Exclusivity

    Our latest episode with a summary and discussion of last week's big AI news!

    This week:

    • Twitter and Zoom’s algorithmic bias issues
    • Despite past denials, LAPD has used facial recognition software 30,000 times in the last decade, records show
    • We’re not ready for AI, says the winner of a new $1m AI prize
    • How humane is the UK’s plan to introduce robot companions in care homes?
    • OpenAI is giving Microsoft exclusive access to its GPT-3 language model

    0:00 - 0:40 Intro
    0:40 - 5:00 News Summary segment
    5:00 - News Discussion segment

    Find this and more in our text version of this news roundup: https://www.skynettoday.com/digests/the-eighty-fourth

    Music: "Deliberate Thought" and "Inspired" by Kevin MacLeod (incompetech.com)

    Facial Recognition - Fate or Flaw

    Do you value your face? 
    Is your privacy important to you?
    Would you like it if law enforcement had your face labeled on file?

    In this episode, Darnley talks about Clearview AI's scraping of personal information and how it can affect our lives moving forward. Is our public information really public?

    Support the show

    Subscribe now to Darnley's Cyber Cafe and stay informed on the latest developments in the ever-evolving digital landscape.

    The End of Privacy as We Know It?

    A secretive start-up promising the next generation of facial recognition software has compiled a database of images far bigger than anything ever constructed by the United States government: over three billion, it says. Is this technology a breakthrough for law enforcement — or the end of privacy as we know it?

    Guest: Annie Brown, a producer on “The Daily,” spoke with Kashmir Hill, a technology reporter for The New York Times. For more information on today’s episode, visit nytimes.com/thedaily.

    Background reading: