
    Detecting Surveillance, Autonomous Weapons, National AI Compute Needs

February 04, 2021

    Podcast Summary

• AI and Ethical Concerns: Vaccine Distribution and Facial Recognition. AI can streamline vaccine distribution but raises ethical concerns, particularly with facial recognition technology, which can infringe on civil rights, especially for certain demographic groups.

      Technology, while designed to make our lives easier, can also raise ethical concerns. Last week, we saw examples of this with the use of AI in vaccine distribution and facial recognition. Benjamin Warlock's Georgia Vax app uses AI to streamline vaccine appointment information, but we hope that more effective distribution systems will make such efforts unnecessary. On the other hand, Amnesty International's map of facial recognition usage in New York City highlights the civil rights dangers of this technology, particularly for certain demographic groups. Additionally, a Microsoft patent raised ethical concerns by proposing an AI-assisted chatbot constructed using personal data of the deceased. While Microsoft has not confirmed any plans to build such a chatbot, it serves as a reminder of the potential ethical implications of AI technology. Overall, it's important to stay informed and engaged in discussions surrounding the use of AI and its impact on our society.

• AI reviving deceased celebrities' voices in South Korea raises ethical concerns. South Korea leads in AI voice resurrection, raising ethical and legal issues. Amnesty International's crowdsourced maps reveal extensive surveillance capabilities, highlighting AI's potential implications.

While Microsoft may have patented the idea, South Korea has taken the lead in using AI to resurrect the voices of deceased celebrities, with SBS planning to feature the voice of Kim Kwang-seok in a new program. This raises ethical and legal concerns, and as AI becomes more prevalent in South Korea's economy and society, regulations and discussions around these issues will be necessary. Meanwhile, Amnesty International's crowdsourced maps revealing the locations of surveillance cameras in New York City highlight the extensive facial recognition capabilities available to law enforcement. This story serves as a reminder of the importance of being aware of the prevalence and potential implications of such technology. For researchers in AI, these developments present opportunities to explore ethical and legal frameworks, as well as to counteract potential misuses of AI. As Sharon and I delve deeper into these topics in our upcoming discussion, we aim to provide insights and perspectives on these issues and their implications for the future.

• Discovering Ethical Dilemmas of Technology. Students discussed ethical concerns of security cameras and AI weapons, considering privacy, potential misuse, and human casualties. Amnesty International's role in promoting transparency was emphasized.

Technology, whether it's security cameras or artificial intelligence weapons, raises important ethical questions that need to be addressed. In the first discussion, students discovered the vast number of security cameras at their university and the role of Amnesty International in promoting transparency around facial recognition systems. While some argue that technology can lead to increased accuracy and fewer mistakes, others worry about privacy and potential misuse. The second article highlighted the US government's push to explore AI weapons, citing potential benefits like fewer human errors and reduced casualties. However, this idea is controversial, with some advocating for a ban on "killer robots." These debates underscore the need for thoughtful dialogue and policy-making around the use of technology in society.

• Ethical concerns with AI-enabled weapons. The development and deployment of AI-enabled weapons raises ethical concerns, including potential amplification of human biases, unintended consequences, and the creation of a self-selecting group. A middle ground could be a treaty regulating their use and development.

While the use of AI and robots in military applications, such as transporting heavy loads or disarming bombs, can offer benefits, the development and deployment of AI-enabled weapons raises significant ethical concerns. These concerns include the potential amplification of human biases, the risk of unintended consequences, and the possibility of creating a self-selecting group of people working on AI weapons who are less concerned about ethical implications. Some argue that testing and use of these weapons is necessary for improvement, but this sets a dangerous precedent. A middle ground could be a treaty regulating their use and development, with restrictions and expectations in place to prevent problematic uses. The importance of establishing boundaries and regulations for AI applications, as seen with self-driving cars and surveillance, cannot be overstated.

• OECD Initiative to Quantify National Compute Needs for AI. The OECD is leading an initiative to assess the compute requirements of national governments for AI research and development, providing valuable insights for policy and regulation.

The Organisation for Economic Co-operation and Development (OECD) is working on quantifying the compute needs of national governments for AI research and development. This initiative, led by Jack Clark, former Policy Director at OpenAI and co-chair of the AI Index, aims to help governments set policy and regulation around AI by understanding their compute requirements. This effort aligns with the growing trend of quantifying the state of AI and follows the suggestion of a national AI cloud for the US. The process will begin by assessing compute levels in government-owned data centers and supercomputers, and then move on to national AI clouds owned by sovereign governments. This will provide valuable insights into the compute needs of various countries, including the US and China. On a related note, there continues to be a need to address problematic AI, as seen in a recent instance where an AI model completed a cropped photo of US Congress member Alexandria Ocasio-Cortez with an image of her wearing a bikini, highlighting the importance of ethical considerations in AI development.

• Study reveals gender bias in OpenAI's image completion model. Researchers found that OpenAI's iGPT model completes images with gender biases rooted in societal biases in its training data, emphasizing the importance of transparency and accountability in AI development and deployment.

A recent study by researchers Ryan Steed and Aylin Caliskan revealed concerning biases in OpenAI's iGPT model, which completes images rather than text. When given a cropped image of a man at face level, the model autocompleted him wearing a suit 33% of the time. However, when given an image of a woman, like U.S. Representative Alexandria Ocasio-Cortez, it autocompleted her wearing a low-cut top or bikini 53% of the time. This bias emerged from societal biases in the training data, which are often reflected in images scraped from the internet. This is not a new issue, as similar biases have been observed in language models. The researchers emphasized the importance of being cautious with data and models and being aware of their potential biases. They encouraged companies to be more transparent and to publish their models for checking, and researchers to do more testing before releasing or deploying vision models. These concerning results highlight the need for greater scrutiny and accountability in the development and deployment of advanced AI models.

    Recent Episodes from Last Week in AI

#172 - Claude and Gemini updates, Gemma 2, GPT-4 Critic

    Our 172nd episode with a summary and discussion of last week's big AI news!

    With hosts Andrey Kurenkov (https://twitter.com/andrey_kurenkov) and Jeremie Harris (https://twitter.com/jeremiecharris)

Feel free to leave us feedback here: https://forms.gle/ngXvXZpNJxaAprDv6

Read our text newsletter and comment on the podcast at https://lastweekin.ai/

Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Last Week in AI
July 01, 2024

#171 - Apple Intelligence, Dream Machine, SSI Inc

    Our 171st episode with a summary and discussion of last week's big AI news!

    With hosts Andrey Kurenkov (https://twitter.com/andrey_kurenkov) and Jeremie Harris (https://twitter.com/jeremiecharris)

Feel free to leave us feedback here: https://forms.gle/ngXvXZpNJxaAprDv6

Read our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + Links:

    Last Week in AI
June 24, 2024

#170 - new Sora rival, OpenAI robotics, understanding GPT4, AGI by 2027?

    Our 170th episode with a summary and discussion of last week's big AI news!

    With hosts Andrey Kurenkov (https://twitter.com/andrey_kurenkov) and Jeremie Harris (https://twitter.com/jeremiecharris)

Feel free to leave us feedback here: https://forms.gle/ngXvXZpNJxaAprDv6

Read our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + Links:

    Last Week in AI
June 09, 2024

#169 - Google's Search Errors, OpenAI news & DRAMA, new leaderboards

Our 169th episode with a summary and discussion of last week's big AI news!

    Feel free to leave us feedback here: https://forms.gle/ngXvXZpNJxaAprDv6

Read our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + Links:

    Last Week in AI
June 03, 2024

#168 - OpenAI vs Scar Jo + safety researchers, MS AI updates, cool Anthropic research

    Our 168th episode with a summary and discussion of last week's big AI news!

With guest host Gavin Purcell from the AI for Humans podcast!

Read our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + Links:

    Last Week in AI
May 28, 2024

#167 - GPT-4o, Project Astra, Veo, OpenAI Departures, Interview with Andrey

    Our 167th episode with a summary and discussion of last week's big AI news!

    With guest host Daliana Liu (https://www.linkedin.com/in/dalianaliu/) from The Data Scientist Show!

    And a special one-time interview with Andrey in the latter part of the podcast.

Read our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + links:

    Last Week in AI
May 19, 2024

#166 - new AI song generator, Microsoft's GPT4 efforts, AlphaFold3, xLSTM, OpenAI Model Spec

    Our 166th episode with a summary and discussion of last week's big AI news!

Read our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + links:

    Last Week in AI
May 12, 2024

#165 - Sora challenger, Astribot's S1, Med-Gemini, Refusal in LLMs

    Our 165th episode with a summary and discussion of last week's big AI news!

Read our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + links:

    Last Week in AI
May 05, 2024

#164 - Meta AI, Phi-3, OpenELM, Bollywood Deepfakes

    Our 164th episode with a summary and discussion of last week's big AI news!

Read our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + links:

    Last Week in AI
April 30, 2024

#163 - Llama 3, Grok-1.5 Vision, new Atlas robot, RHO-1, Medium ban

    Our 163rd episode with a summary and discussion of last week's big AI news!

Note: apologies for this one coming out a few days late; it got delayed in editing. -Andrey

Read our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + links:

    Last Week in AI
April 24, 2024

    Related Episodes

Reflecting on AI news in 2021 (so far) with the host of the Towards Data Science Podcast
2021 has been a bit less crazy than 2020 so far, but plenty of notable stuff has already happened. So, we decided to partner with our friends over at the Towards Data Science podcast, hosted by SharpestMinds co-founder Jeremie Harris.

    Specifically, we discuss: This avocado armchair could be the future of AI, For Its Latest Trick

    Facial-Recognition Tools in Spotlight in New Jersey False Arrest Case

'New' Nirvana Song Created 27 Years After Kurt Cobain's Death Via AI Software

    As well as the general trends these stories represent.

    Subscribe: RSS | iTunes | Spotify | YouTube

    Music: Deliberate Thought, Inspired by Kevin MacLeod (incompetech.com)

AI Ethics at Code 2023
Platformer's Casey Newton moderates a conversation at Code 2023 on ethics in artificial intelligence, with Ajeya Cotra, Senior Program Officer at Open Philanthropy, and Helen Toner, Director of Strategy at Georgetown University's Center for Security and Emerging Technology. The panel discusses the risks and rewards of the technology, as well as best practices and safety measures. Recorded on September 27th in Los Angeles.

DeepNude Bot, Tesla Full Self Driving, Google AI US-Mexico Border

This week: Automating Image Abuse: deepfake bots on Telegram; Activists Turn Facial Recognition Tools Against the Police; Tesla is putting 'self driving' in the hands of drivers amid criticism the tech is not ready; Google AI tech will be used for virtual border wall, CBP contract shows

0:00 - 0:40 Intro
0:40 - 5:40 News Summary segment
5:40 News Discussion segment

Find this and more in our text version of this news roundup: https://www.skynettoday.com/digests/the-eighty-eighth

    Music: Deliberate Thought, Inspired by Kevin MacLeod (incompetech.com)

GPT-3 on reddit, Facial Recognition in Argentina, Stats on Big Tech Financing Academics

    Our latest episode with a summary and discussion of last week's big AI news!

This week: A GPT-3 bot posted comments on Reddit for a week and no one noticed; Live facial recognition is tracking kids suspected of being criminals; Many Top AI Researchers Get Financial Backing From Big Tech

0:00 - 0:40 Intro
0:40 - 5:00 News Summary segment
5:00 News Discussion segment

Find this and more in our text version of this news roundup: https://www.skynettoday.com/digests/the-eighty-sixth

    Music: Deliberate Thought, Inspired by Kevin MacLeod (incompetech.com)

DeepFake Dictators, AI Sepsis Watch, Biased Exam Monitoring

    Our latest episode with a summary and discussion of last week's big AI news!

This week: How an AI tool for fighting hospital deaths actually worked in the real world; ExamSoft's remote bar exam sparks privacy and facial recognition concerns; Deepfake Putin is here to warn Americans about their self-inflicted doom

0:00 - 0:40 Intro
0:40 - 5:00 News Summary segment
5:00 News Discussion segment

    Find this and more in our text version of this news roundup: https://www.skynettoday.com/digests/the-eighty-fifth

    Music: Deliberate Thought, Inspired by Kevin MacLeod (incompetech.com)