
    Tesla Deaths, 2.6 Million DeepFakes, Europe AI Regulations

    April 22, 2021

    Podcast Summary

    • AI Industry Advancements and Ethical Debates
      AI is advancing rapidly, but concerns over workforce shortages and ethical issues persist. The EU is proposing bans on certain AI applications, while the industry debates the need for domestic talent and ethical guidelines.

      The field of AI is advancing rapidly, with new applications such as AI-generated people and synthetic influencers emerging. At the same time, there are concerns about a workforce shortage in AI and the need to cultivate domestic talent to stay competitive, while regulatory bodies like the EU are considering bans on certain AI applications for ethical reasons. Daniel Beshear summarized last week's news, including the launch of a company that generates virtual avatars, the potential AI workforce shortage, and the EU's proposed bans. While the industry for synthetic humans is growing, there is debate over whether an AI workforce shortage actually exists and what can be done to address it. The EU's proposed ban on AI for mass surveillance and the debate over self-driving cars highlight the ongoing ethical questions surrounding AI. As the technology evolves, policymakers, industry leaders, and researchers will need to engage in meaningful discussion to address the challenges and opportunities it presents.

    • Regulating AI in High-Risk Areas: EU's Approach
      The EU is proposing regulations to limit the use of AI in high-risk areas like emergency services, education, and recruitment. Concerns have been raised about vague language in the proposed regulations, which could leave room for loopholes. Regulation is necessary to prevent AI systems from manipulating users or impacting access to essential services.

      There is ongoing discussion about regulating the use of AI in society, specifically in high-risk areas such as emergency services, education, and recruitment. Daniel Leufer, a policy analyst at Access Now, has raised concerns about vague descriptions in the proposed regulations, which could leave room for loopholes and evasion by lesser-known vendors. Big tech companies have faced increased scrutiny over their AI research and development, leading them to use vague language to convey responsible practices. The EU is currently considering regulations to limit the use of AI in these high-risk areas. Andrey Kurenkov, a third-year PhD student at Stanford, and Sharon Zhou, a graduating fourth-year PhD student, discussed this topic in the context of the BBC article "Europe seeks to limit use of AI in society." They agreed that regulation is necessary, particularly where AI systems can manipulate users or affect access to essential services. The EU is leading this conversation, and it will be important to monitor the progress of these regulations and their potential impact on various industries and stakeholders.

    • EU Regulating AI Technology: Deep Fakes and High-Risk Systems
      The EU is requiring disclosure of deep fakes in AI technology sales to schools, hospitals, police, and employers, and establishing a database of high-risk AI systems. A Tesla crash highlights the need for clear regulations on advanced AI technology.

      The European Union is taking steps toward regulating AI technology, specifically in the context of deep fakes and high-risk systems. The EU would require vendors and consultants selling AI technology to schools, hospitals, police, and employers to disclose when they're using deep fakes. There is also a clause limiting tech firms that use AI to manipulate users, although the definition of manipulation is vague and open to interpretation. In addition, the EU is establishing a publicly viewable database of high-risk AI systems used in the EU, a first step in an ongoing effort to track and restrict AI. Meanwhile, a Tesla crash in which no one was in the driver's seat raised concerns about the safety and potential consequences of advanced AI technology. The incident highlights the need for clear regulations and guidelines to ensure the safe deployment and use of AI. The EU's initiatives are a step in the right direction, but it will take time for the regulations to be refined and implemented effectively, and the Tesla crash is a reminder of why thoughtful, comprehensive rules matter.

    • Tesla's Autopilot: 23 Crashes Under Investigation, Some Without Drivers
      Despite Tesla's claims of Autopilot's safety, concerns rise as 23 crashes are under investigation, some involving cars operating without drivers. Researchers suggest using facial recognition, eye tracking, or other methods to ensure driver engagement.

      The number of reported crashes involving Tesla's Autopilot system is higher than expected, with 23 incidents currently under investigation. Most concerning, in some cases the car was operating without anyone in the driver's seat, which defeats the system's design intent of keeping the driver engaged. There have been discussions about more active measures to keep drivers attentive, as people have been caught sleeping or driving with their eyes closed while Autopilot was engaged. Some researchers suggest using facial recognition, eye tracking, or other methods to measure driver engagement. The concern is that Tesla may view such incidents as user error, and therefore not its responsibility, yet the technology could still encourage dangerous behavior, much as Snapchat's speed filter did. Google's Waymo has taken a more cautious approach, opting against a hybrid system out of concern that human drivers cannot stay attentive and ready to take over when needed. Tesla released a vehicle safety report soon after these incidents suggesting that Autopilot is safer than manual driving, but the crashes raise valid concerns about the system's reliability and the importance of active driver engagement.

    • Self-Driving Cars Have Fewer Accidents with Active Safety Features and Autopilot
      Self-driving cars have fewer accidents than human-driven cars, but active safety features and autopilot lead to even fewer. Ethical concerns arise with deepfakes and AI-generated people in marketing and social media.

      While self-driving cars have fewer accidents per mile than human-driven cars, the data suggests that cars using active safety features and Autopilot have even fewer. The comparison may not be entirely fair, however, because these technologies are used in different driving scenarios; Autopilot, for example, is mostly engaged on highways, where crashes per mile are rarer. Meanwhile, the use of deepfakes and AI-generated people in marketing and social media raises ethical concerns, as these models are often based on real people who did not consent. The future of synthetic influencers and deepfakes is uncertain, but ethical consideration and informed opinion will be necessary to navigate these emerging trends.

    • Use of AI in Creating Synthetic Personalities and Potential for Unique Conversations
      The use of AI in creating synthetic personalities offers unique and fully improvised conversations, but the technology can also revive historical patterns of abuse against marginalized communities, as demonstrated by the case of two black women AI researchers who faced a campaign of discrediting and gaslighting.

      The use of AI to create synthetic personalities, particularly in advertising and video games, is becoming a trend and has the potential to offer unique, fully improvised conversations. However, AI can also revive historical patterns of abuse, as demonstrated by the case of two black women AI researchers who faced a smear campaign of discrediting and gaslighting after revealing that commercially available facial analysis tools fail for women with dark skin. This incident is a manifestation of ongoing issues and highlights the need to recognize and outline the tactics used in such abusive campaigns. The playbook published by MIT researchers provides a precise outline of the techniques used, including disbelief in the researchers' contributions, dismissal, discrediting, and gaslighting. It is crucial to acknowledge the complexity of these issues and the importance of standing up against abusive behavior toward marginalized communities in the tech industry.

    • Understanding the Steps of Online Harassment
      Online harassment against marginalized individuals involves identification, escalation, erasure, exclusion, and revisionism, making it challenging for them to regain control of their online presence and reputation. Acknowledging and explaining these steps is a crucial first step in education and providing resources.

      Online harassment against marginalized individuals tends to follow a series of steps: identification, escalation, erasure, exclusion, and revisionism. These steps can make it difficult for those targeted to regain control of their online presence and reputation. A clear understanding and explanation of these steps can help educate others and provide resources for those affected. Talking about these issues isn't always enough, but explanatory content like this is a good first step. To learn more and stay informed, visit skynettoday.com, subscribe to our weekly newsletter, and tune in to next week's episode of Skynet Today's Let's Talk AI podcast.

    Recent Episodes from Last Week in AI

    #171 - Apple Intelligence, Dream Machine, SSI Inc

    Our 171st episode with a summary and discussion of last week's big AI news!

    With hosts Andrey Kurenkov (https://twitter.com/andrey_kurenkov) and Jeremie Harris (https://twitter.com/jeremiecharris)

    Feel free to leave us feedback here: https://forms.gle/ngXvXZpNJxaAprDv6

    Check out our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + Links:

    Last Week in AI
    June 24, 2024

    #170 - new Sora rival, OpenAI robotics, understanding GPT4, AGI by 2027?

    Our 170th episode with a summary and discussion of last week's big AI news!

    With hosts Andrey Kurenkov (https://twitter.com/andrey_kurenkov) and Jeremie Harris (https://twitter.com/jeremiecharris)

    Feel free to leave us feedback here: https://forms.gle/ngXvXZpNJxaAprDv6

    Check out our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + Links:

    Last Week in AI
    June 09, 2024

    #169 - Google's Search Errors, OpenAI news & DRAMA, new leaderboards

    Our 169th episode with a summary and discussion of last week's big AI news!

    Feel free to leave us feedback here: https://forms.gle/ngXvXZpNJxaAprDv6

    Check out our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + Links:

    Last Week in AI
    June 03, 2024

    #168 - OpenAI vs Scar Jo + safety researchers, MS AI updates, cool Anthropic research

    Our 168th episode with a summary and discussion of last week's big AI news!

    With guest host Gavin Purcell from AI for Humans podcast!

    Check out our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + Links:

    Last Week in AI
    May 28, 2024

    #167 - GPT-4o, Project Astra, Veo, OpenAI Departures, Interview with Andrey

    Our 167th episode with a summary and discussion of last week's big AI news!

    With guest host Daliana Liu (https://www.linkedin.com/in/dalianaliu/) from The Data Scientist Show!

    And a special one-time interview with Andrey in the latter part of the podcast.

    Check out our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + links:

    Last Week in AI
    May 19, 2024

    #166 - new AI song generator, Microsoft's GPT4 efforts, AlphaFold3, xLSTM, OpenAI Model Spec

    Our 166th episode with a summary and discussion of last week's big AI news!

    Check out our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + links:

    Last Week in AI
    May 12, 2024

    #165 - Sora challenger, Astribot's S1, Med-Gemini, Refusal in LLMs

    Our 165th episode with a summary and discussion of last week's big AI news!

    Check out our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + links:

    Last Week in AI
    May 05, 2024

    #164 - Meta AI, Phi-3, OpenELM, Bollywood Deepfakes

    Our 164th episode with a summary and discussion of last week's big AI news!

    Check out our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + links:

    Last Week in AI
    April 30, 2024

    #163 - Llama 3, Grok-1.5 Vision, new Atlas robot, RHO-1, Medium ban

    Our 163rd episode with a summary and discussion of last week's big AI news!

    Note: apologies for this one coming out a few days late; it got delayed in editing. -Andrey

    Check out our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + links:

    Last Week in AI
    April 24, 2024

    #162 - Udio Song AI, TPU v5, Mixtral 8x22, Mixture-of-Depths, Musicians sign open letter

    Our 162nd episode with a summary and discussion of last week's big AI news!

    Check out our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + links:

    Last Week in AI
    April 15, 2024

    Related Episodes

    Mini Episode: Redeeming AI, More Lessons in AI Bias, and a National AI Research Cloud

    Our seventh audio roundup of last week's noteworthy AI news!

    This week, we look at how an HBO documentary is using deepfake technology for good, a new system to measure AI's carbon impact, a follow-up from last week's story on Timnit Gebru and Yann LeCun, and finally the push for a national AI research cloud.

    Check out all the stories discussed here and more at www.skynettoday.com

    Theme: Deliberate Thought by Kevin MacLeod (incompetech.com)

    Licensed under Creative Commons: By Attribution 3.0 License

    #153 - Taylor Swift Deepfakes, ChatGPT features, Meta-Prompting, two new US bills

    Our 153rd episode with a summary and discussion of last week's big AI news!

    Check out our sponsor, the SuperDataScience podcast. You can listen to SDS across all major podcasting platforms (e.g., Spotify, Apple Podcasts, Google Podcasts) plus there’s a video version on YouTube.

    Check out our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + links:

    #152 - live translation on phones, Meta aims at AGI, AlphaGeometry, political deepfakes

    Our 152nd episode with a summary and discussion of last week's big AI news!

    Check out our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or jeremie@gladstone.ai

    Timestamps + links:

    Why MI5 is so worried about AI and the next election

    This week world leaders and AI companies will gather for a summit at Bletchley Park, the Second World War code-breaking centre. It’s the most important attempt yet to formulate a shared view on what artificial intelligence might be capable of and how it should be regulated.

    But with elections taking place in both the US and the UK in the next year or so, could the threat posed by AI deepfakes to democracy be much more immediate, as the head of MI5 has warned?

    This podcast was brought to you thanks to the support of readers of The Times and The Sunday Times. Subscribe today: thetimes.co.uk/storiesofourtimes. 

    Guest: Henry Ajder, Visiting Senior Research Associate, Jesus College, Cambridge.

    Host: Manveen Rana.

    Get in touch: storiesofourtimes@thetimes.co.uk

    Clips: Zach Silberberg on Twitter, Telegram, CNN, ABC News, MSNBC, WUSA9, BBC Radio 4.



    Hosted on Acast. See acast.com/privacy for more information.

    GPT-3 on reddit, Facial Recognition in Argentina, Stats on Big Tech Financing Academics

    Our latest episode with a summary and discussion of last week's big AI news!

    This week: a GPT-3 bot posted comments on Reddit for a week and no one noticed, live facial recognition is tracking kids suspected of being criminals, and many top AI researchers get financial backing from Big Tech.

    0:00 - 0:40 Intro
    0:40 - 5:00 News Summary segment
    5:00 News Discussion segment

    Find this and more in our text version of this news roundup:  https://www.skynettoday.com/digests/the-eighty-sixth

    Music: Deliberate Thought by Kevin MacLeod (incompetech.com)