
    DeepNude Bot, Tesla Full Self Driving, Google AI US-Mexico Border

    October 29, 2020

    Podcast Summary

    • Deepfake ecosystem on Telegram uses AI to target women's privacy: AI technology poses risks to privacy and security, with deepfakes being a disturbing use case. Regulation is needed to prevent misuse, but striking a balance between innovation and regulation is crucial.

      While the advancement of AI technology brings numerous benefits, it also poses significant risks, particularly to privacy and security. Last week, Sensity AI reported on a deepfake ecosystem on Telegram that lets users generate fake nude images of women from ordinary photos. Over 100,000 women have been targeted, the majority of them private individuals. This disturbing use of AI underscores the need for responsible use and regulation of the technology. On the policy front, the Trump administration is finalizing guidance for agencies on regulating AI, while in Europe, organizations are pushing for stronger regulations to prevent human rights abuses. The debate highlights the tension between the desire for unfettered innovation and the need to prevent misuse. While significant regulation in the US seems unlikely, the EU is considering a bolder approach that emphasizes human rights. The news comes at an interesting moment, with the antitrust suit against Google also making headlines. Regulating AI is a complex problem, and striking the right balance between innovation and oversight is essential if the technology is to be used ethically and responsibly.

    • EU's hesitation to discuss fundamental rights and lack of a legislative solution for facial recognition: The EU's current framework for facial recognition technology is criticized for failing to address fundamental rights concerns or to propose a legislative solution, while individuals and researchers continue to innovate and face challenges in the field.

      While there are ongoing debates about ethical AI and facial recognition technology, there is growing concern about the potential infringement of fundamental rights and the need for a legislative framework to address these issues. In the European Union, there is disappointment with the current framework, which has been criticized for its hesitation to discuss fundamental rights and its failure to propose a legislative solution. In contrast, individuals like Christopher Howell in the United States are using facial recognition technology against law enforcement in response to concerns about officer anonymity, though this could breed further animosity and underscores the need for clear regulations. In research, the podcast's hosts, both PhD students, continue working on advancing AI technology while navigating deadlines and challenges, highlighting the importance of continued innovation and progress in the field.

    • Deepfake Bot Strips Clothing from Images of Women on Telegram: A deepfake bot on Telegram can easily generate fake nude images of women, posing new challenges for online safety and privacy and highlighting the need to address harmful uses of AI technology.

      Technology, and deepfake technology in particular, continues to evolve and pose new challenges, particularly for online safety and privacy. A recent development is a deepfake bot on Telegram that generates fake nude images of women, building on the earlier DeepNude app. The bot is concerning for its ease of use and potential for monetization, as well as its reach: the ecosystem around it counts over 100,000 users. The recurrence of such harmful applications of AI is troubling, and it remains to be seen how effectively these issues will be addressed. The internet, unfortunately, has a history of amplifying harmful trends such as revenge porn, and it is crucial that efforts be made to combat these abuses of the technology.

    • The impact of AI facial recognition is a double-edged sword: the technology can be used for malicious activities but also to promote accountability, and the balance between accountability and privacy concerns is crucial.

      AI technology, and facial recognition specifically, can serve both positive and negative ends. On the negative side, it can enable malicious activities such as the non-consensual sharing of images. On the positive side, it can promote accountability, for example by identifying law enforcement officers involved in violent actions against protesters. Facial recognition is a double-edged sword whose impact depends on how it is wielded. The recent trend of activists turning the technology on law enforcement is a promising sign, showing that its power is not one-sided. However, it is also a reminder of law enforcement's own growing use of facial recognition and of the need to balance accountability with privacy concerns.

    • Addressing Ethical Concerns with Technology: Facial Recognition and Self-Driving Cars. Facial recognition raises concerns about privacy and potential misuse, while self-driving cars require clear guidelines and transparency to ensure ethical use. Organizations and companies both have a role in addressing these ethical dilemmas.

      Technology, whether facial recognition software or self-driving cars, raises important ethical questions that need to be addressed. In the case of facial recognition, there are concerns about privacy and potential misuse. Groups such as an anti-surveillance organization in Chicago have taken matters into their own hands by creating public databases to help verify the identities of law enforcement officers. As for self-driving cars, Tesla is making strides in the technology but faces criticism for mislabeling its features: the Autopilot function, while impressive, still requires human supervision and should not be marketed as full self-driving. These advancements offer exciting possibilities, but clear guidelines and transparency are crucial to ensuring their ethical use.

    • Advanced technologies come with risks and ethical concerns: Clear communication and ethical considerations are crucial when integrating advanced technologies, to prevent misunderstandings and potential harm. Regulation and legal frameworks need to evolve to address their complexities and prevent misuse.

      The integration of advanced technologies like autonomous driving and artificial intelligence across industries, while promising, comes with significant risks and ethical concerns. The discussion highlighted the dangers of over-hyping and misbranding these technologies, as in the case of Tesla's Autopilot system, which has been involved in several fatal crashes. The use of Google AI technology for a virtual border wall is another example of potential misuse, raising ethical concerns and fears of a dystopian future. Clear communication about the limitations and risks of these technologies is crucial to preventing misunderstandings and harm, and regulation and legal frameworks need to evolve to address their complexities and applications. Advanced technologies offer exciting possibilities, but they must be approached with caution and transparency.

    • Google's Ethical Dilemma: Working with Defense Startup Anduril. Google, known for its ethical stance, is facing internal opposition and ethical questions over its work with Anduril, a defense startup using drones and AI for border surveillance.

      Google, a tech giant known for its ethical stance, is now providing technology for a border surveillance effort led by the defense startup Anduril, which uses drones and AI to monitor the US-Mexico border. This comes after the company pulled out of a similar defense project, Maven, in 2018 over ethical concerns. The use of drones and AI for border surveillance is controversial and raises ethical questions, especially given Google's past positions. In 2019, over 1,000 Google employees signed a petition asking the company to abstain from providing cloud services to US immigration and border patrol authorities, so it is unclear why Google is pursuing this work despite internal opposition and past ethical commitments. The timing of the revelation, less than a week before the US election, may add to the controversy, and it remains to be seen how the news will be received by the public and by Google employees.

    • Exploring the Ethical Implications of AI: AI is transforming society, but it's crucial to consider ethical implications like privacy, job displacement, and transparency, and to develop and deploy AI responsibly with a focus on inclusivity and fairness.

      Artificial intelligence (AI) is increasingly part of our daily lives, from virtual assistants like Siri and Alexa to recommendation systems on streaming platforms and even healthcare diagnosis. As we integrate AI into more aspects of society, it's essential to consider the ethical implications, such as privacy concerns, potential job displacement, and the need for transparency and accountability. Additionally, it's crucial to ensure that AI is developed and deployed responsibly, with a focus on inclusivity and fairness. While AI offers numerous benefits, its use must be approached with caution and consideration for its impact on individuals and society as a whole. Stay informed and engaged in the conversation around AI by subscribing to Skynet Today's Let's Talk AI podcast and visiting skynettoday.com for weekly news and insights. Don't forget to leave a review or rating if you enjoy the show!

    Recent Episodes from Last Week in AI

    #172 - Claude and Gemini updates, Gemma 2, GPT-4 Critic

    Our 172nd episode with a summary and discussion of last week's big AI news!

    With hosts Andrey Kurenkov (https://twitter.com/andrey_kurenkov) and Jeremie Harris (https://twitter.com/jeremiecharris)

    Feel free to leave us feedback here.

    Read our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekinai.com and/or hello@gladstone.ai

    Last Week in AI
    July 01, 2024

    #171 - Apple Intelligence, Dream Machine, SSI Inc

    Our 171st episode with a summary and discussion of last week's big AI news!

    With hosts Andrey Kurenkov (https://twitter.com/andrey_kurenkov) and Jeremie Harris (https://twitter.com/jeremiecharris)

    Feel free to leave us feedback here.

    Read our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + Links:

    Last Week in AI
    June 24, 2024

    #170 - new Sora rival, OpenAI robotics, understanding GPT4, AGI by 2027?

    Our 170th episode with a summary and discussion of last week's big AI news!

    With hosts Andrey Kurenkov (https://twitter.com/andrey_kurenkov) and Jeremie Harris (https://twitter.com/jeremiecharris)

    Feel free to leave us feedback here.

    Read our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + Links:

    Last Week in AI
    June 09, 2024

    #169 - Google's Search Errors, OpenAI news & DRAMA, new leaderboards

    Our 169th episode with a summary and discussion of last week's big AI news!

    Feel free to leave us feedback here: https://forms.gle/ngXvXZpNJxaAprDv6

    Read our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + Links:

    Last Week in AI
    June 03, 2024

    #168 - OpenAI vs Scar Jo + safety researchers, MS AI updates, cool Anthropic research

    Our 168th episode with a summary and discussion of last week's big AI news!

    With guest host Gavin Purcell from AI for Humans podcast!

    Read our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + Links:

    Last Week in AI
    May 28, 2024

    #167 - GPT-4o, Project Astra, Veo, OpenAI Departures, Interview with Andrey

    Our 167th episode with a summary and discussion of last week's big AI news!

    With guest host Daliana Liu (https://www.linkedin.com/in/dalianaliu/) from The Data Scientist Show!

    And a special one-time interview with Andrey in the latter part of the podcast.

    Read our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + links:

    Last Week in AI
    May 19, 2024

    #166 - new AI song generator, Microsoft's GPT4 efforts, AlphaFold3, xLSTM, OpenAI Model Spec

    Our 166th episode with a summary and discussion of last week's big AI news!

    Read our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + links:

    Last Week in AI
    May 12, 2024

    #165 - Sora challenger, Astribot's S1, Med-Gemini, Refusal in LLMs

    Our 165th episode with a summary and discussion of last week's big AI news!

    Read our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + links:

    Last Week in AI
    May 05, 2024

    #164 - Meta AI, Phi-3, OpenELM, Bollywood Deepfakes

    Our 164th episode with a summary and discussion of last week's big AI news!

    Read our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + links:

    Last Week in AI
    April 30, 2024

    #163 - Llama 3, Grok-1.5 Vision, new Atlas robot, RHO-1, Medium ban

    Our 163rd episode with a summary and discussion of last week's big AI news!

    Note: apologies for this one coming out a few days late; it got delayed in editing. -Andrey

    Read our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + links:

    Last Week in AI
    April 24, 2024

    Related Episodes

    AI Ethics at Code 2023
    Platformer's Casey Newton moderates a conversation at Code 2023 on ethics in artificial intelligence, with Ajeya Cotra, Senior Program Officer at Open Philanthropy, and Helen Toner, Director of Strategy at Georgetown University’s Center for Security and Emerging Technology. The panel discusses the risks and rewards of the technology, as well as best practices and safety measures. Recorded on September 27th in Los Angeles.

    Clearview AI in the Capitol, Medical AI Regulation, DeepFake Text

    This week:

    0:00 - 0:35 Intro
    0:35 - 4:30 News Summary segment
    4:30 - News Discussion segment

    Find this and more in our text version of this news roundup:  https://lastweekin.ai/p/99

    Music: Deliberate Thought, Inspired by Kevin MacLeod (incompetech.com)

    "Is Twitter the same as what you do?"

    "Is Twitter the same as what you do?"

    In previous episodes we’ve looked at this issue of tech dominance from several angles. We’ve seen how they gather data, what they can do with it, how they make money from it, and how social media allows them to muddy the water and shape the narrative. You may have asked yourself along the way, “How can they do this? Why doesn’t the government do anything about it?”

    The seven largest tech companies spent nearly $500 million lobbying Congress in the last decade. That sounds like a huge amount of money, but when you consider that they’ve gained trillions in market value over the same period, it was money well spent.

    This episode, we’ll dive into why government has dropped the ball on tech regulation to such an extreme degree, and what they might be able to still do to rein in the worst of these big tech behaviors. 

    Featured guests this episode: 

    K Krasnow Waterman was the Chief Information Officer of the first post-9/11 data analytics facility established by the White House and, next, led the reorganization of the FBI's intelligence operations. She has held a multitude of roles across the government and business worlds, as well as being a Sloan Fellow at the Massachusetts Institute of Technology.

    Michael Slaby was the Chief Technology Officer of Obama for America in 2008. In 2012, he rejoined the campaign as Chief Integration and Innovation Officer. When the campaign finished, he began work on social impact organizations that leverage technology to create social movements. Today, he's the Chief Strategist at Harmony Labs.

    Jonathon Morgan is the founder of Yonder, a fast-growing Authentic Internet company on a mission to give the online world the same amount of authentic cultural context as the offline world. Using artificial intelligence, Yonder helps organizations identify the groups and narratives that drive conversation, revealing what matters and creating the confidence to act.

    Matt Stoller is a fellow at the Open Markets Institute and the author of Goliath: The 100-Year War Between Monopoly Power and Democracy.

    Katelyn Ringrose is a Christopher Wolf Diversity Fellow at the Future of Privacy Forum. She currently works on health and genetics privacy issues, and is tracking state and federal privacy legislation.

    A lawyer walks into Radio City Music Hall...

    Today on The Professionally Evil Perspective, Kevin and Nathan discuss the removal of an attorney attending a show with her daughter at Radio City Music Hall in December. The attorney was employed by a law firm involved in a personal injury claim against the operator of Radio City Music Hall, and she was recognized through a facial recognition system.


    Got suggestions, complaints, or feedback?

    Tell us at podcast@secureideas.com or reach out on Twitter:
    @sweaney
    @darth_kevin
    @secureideas

    or find us on Mastodon:
    @secureideas

    Join our Professionally Evil Slack Team at www.professionallyevil.com
    Our real jobs pay for our time to do this, so if you have opportunities around penetration testing or risk management, we'd love the chance to work with you!

    Mini Episode: More Facial Recognition, Racism in Academia, and the latest in Commercial AI

    Our fourth audio roundup of last week's noteworthy AI news!

    This week, we look at recent progress in curtailing the development of facial recognition technology, a recent call for attention to racism in academia, and recent news from OpenAI and Boston Dynamics. 

    Check out all the stories discussed here and more at www.skynettoday.com

    Theme: Deliberate Thought by Kevin MacLeod (incompetech.com)

    Licensed under Creative Commons: By Attribution 3.0 License