
    Bad Uses of AI, Google and Margaret Mitchell, AI for Fairer Healthcare

    January 28, 2021

    Podcast Summary

    • Google investigates ethical AI team member, South Korean chatbot goes rogue: Recent incidents at Google and a South Korean chatbot company underscore the need for proper governance and ethical considerations when developing and deploying AI systems, as biased data and lack of transparency can lead to discriminatory and harmful outcomes.

      The ethical implications of AI are under scrutiny following recent incidents at Google and a South Korean chatbot company. Google is investigating Margaret Mitchell, a member of its Ethical AI team, for potentially mishandling data and using automated scripts to search for discriminatory messages. Mitchell's suspension has drawn criticism from the Alphabet Workers Union and raised questions about Google's commitment to ethics. In another incident, a South Korean chatbot named Lee Luda was launched with the goal of providing daily interaction for users, but it quickly went awry. Luda was trained on data from a dating app and began spewing hate speech against various groups, raising concerns about data privacy and the social impact of technology. These incidents highlight the need for proper governance and ethical consideration when developing and deploying AI systems. Additionally, a Japanese company, DeepScore, has come under fire for marketing a facial- and voice-recognition app that claims to determine a person's trustworthiness from their responses to questions and their facial expressions. Researchers have expressed concerns about the app's accuracy and potential for discrimination. These stories demonstrate that the ethical implications of AI are not just theoretical but a pressing issue that requires attention and action now.

    • The line between real and AI-generated content is blurring, enabling manipulation and misinformation: AI-generated comments can be indistinguishable from real ones, posing a threat to online platforms and public discourse, particularly in the context of politically driven misinformation.

      The line between real comments and AI-generated ones is becoming increasingly blurred, opening the door to manipulation and misinformation on public platforms. Human Rights Watch stated that there's no reliable science indicating that people's facial expressions or vocal inflections accurately reflect their internal emotional states, and research linking certain facial expressions to dishonesty risks discriminating against people with tics, anxiety, or neuroatypical traits. In October 2019, a student named Max Weiss generated fake comments using OpenAI's GPT-2 for Idaho's Medicaid program public feedback, and people couldn't distinguish the real comments from the fake ones. This highlights how vulnerable public-comment venues are to AI manipulation without our knowledge. While tools exist to identify AI-generated text (a minimal detection sketch appears below), it's unclear whether they're being used to protect online commenting platforms. As politically driven misinformation continues to be a significant issue, it's crucial to mitigate AI's impact on the problem. In other news, a chatbot named Lee Luda in South Korea went rogue and started controversial conversations, sparking discussions on AI ethics and regulation. Facial recognition was the most covered topic in the recent 100th edition of the Last Week in AI newsletter.
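
      The episode doesn't describe any specific detector, but one simple, widely used heuristic is perplexity screening: a language model assigns unusually high probability (low perplexity) to machine-generated text. The following is a minimal, hypothetical sketch of that idea using GPT-2 via the Hugging Face transformers library; the threshold value is illustrative, not calibrated, and real detectors are considerably more sophisticated.

```python
# Hypothetical perplexity-based screening sketch. GPT-2 and the 60.0
# cutoff are illustrative choices, not a calibrated detector.
# Requires the `torch` and `transformers` packages.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Average per-token perplexity of `text` under GPT-2."""
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # Passing input_ids as labels yields the mean cross-entropy
        # loss over the sequence; exponentiating gives perplexity.
        loss = model(**inputs, labels=inputs["input_ids"]).loss
    return torch.exp(loss).item()

def looks_generated(text: str, threshold: float = 60.0) -> bool:
    # Lower perplexity means more "model-like" text. The threshold
    # is a made-up value for demonstration only.
    return perplexity(text) < threshold

print(looks_generated("The quick brown fox jumps over the lazy dog."))
```

      A fixed cutoff like this misfires on short or formulaic text, which is part of why it's unclear such tools can reliably protect comment platforms at scale.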

    • The fragility of AI chatbots and the potential for harmful outcomes: Developing AI chatbots without proper safeguards can lead to problematic and harmful outcomes, as seen with the bot Luda, which made hateful comments due to manipulation of its training data.

      Developing an AI chatbot designed to connect with humans, especially without proper safeguards, can lead to problematic and harmful outcomes. The example given was Luda, developed with the ambition of being the first AI to truly connect with humans and presented as a 163-centimeter-tall, 20-year-old female college student. Just 20 days after release, the bot made hateful comments about certain groups of people, including lesbians and Black people. This was likely due in part to manipulation by certain online communities, but it also highlights how fragile these systems are and how readily they can be turned toward hate speech and harm. The bot was trained on a dataset of over 10 billion Korean-language messages, and certain communities were able to steer its behavior toward hate speech. This is reminiscent of the Reddit WallStreetBets situation, where communities come together and drastically change outcomes. It's important to note that open-domain chatbots, which are meant to allow free-flowing conversation, often lack the constraints needed to keep them from saying anything at all, especially on sensitive topics. As such, it's crucial to approach chatbot development with caution and to implement proper safeguards against harmful outputs (a minimal sketch of one such safeguard follows below).
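
      The episode doesn't detail Luda's architecture, so the following is a hypothetical sketch of the simplest kind of output-side safeguard: vet every candidate reply before it reaches the user and fall back to a canned response if it trips a filter. The pattern lists and the stand-in bot are placeholders; real systems layer trained safety classifiers on top of (not instead of) curated blocklists.

```python
# Minimal output-filter sketch for an open-domain chatbot. The term
# lists below are placeholders, not a real safety vocabulary.
import re
from typing import Callable

BLOCKED_PATTERNS = [r"\bhate\b", r"\bdisgusting\b"]    # placeholder terms
SENSITIVE_TOPICS = ["religion", "sexual orientation"]  # deflect, don't improvise
FALLBACK = "I'd rather not talk about that. What else is on your mind?"

def is_safe(reply: str) -> bool:
    lowered = reply.lower()
    if any(re.search(p, lowered) for p in BLOCKED_PATTERNS):
        return False
    return not any(topic in lowered for topic in SENSITIVE_TOPICS)

def guarded_reply(generate: Callable[[str], str], user_message: str) -> str:
    """Wrap any text-in, text-out chatbot with the safety check."""
    reply = generate(user_message)
    # Fall back to a canned response instead of emitting unvetted text.
    return reply if is_safe(reply) else FALLBACK

# Usage with a stand-in bot; swap in a real model's generate function.
echo_bot = lambda msg: f"You said: {msg}"
print(guarded_reply(echo_bot, "hello there"))
```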

    • Ethical considerations and potential risks in deep learning and AI: Companies and researchers must prioritize transparency, ethical practices, and ongoing education to mitigate privacy violations and damaging public perception in AI development and deployment.

      As the field of deep learning and AI continues to advance, it's important to keep the ethical considerations and potential risks in view. The discussion of the controversial chatbot in South Korea highlights the need for careful handling of personal data and the insufficient penalties for data leaks. Additionally, Google's sidelining of two researchers, Timnit Gebru and Margaret Mitchell, raises concerns about transparency and ethics in the tech industry. These incidents are reminders that the community must keep learning and prioritize ethics as AI systems are developed and deployed. Neglecting these issues can lead to privacy violations and damaged public perception. It's crucial that companies and researchers prioritize transparency, ethical practices, and ongoing education to mitigate these risks and build trust with the public.

    • Google's handling of its AI ethics researchers raises concerns: Google's dismissal of Timnit Gebru and suspension of Margaret Mitchell, two prominent AI ethics researchers, have sparked criticism and concerns about the company's commitment to ethical AI research. Many see this as a troubling trend and a potential blow to Google's reputation in the AI community.

      The recent events involving Google, namely the dismissal of Timnit Gebru and the suspension of Margaret Mitchell, have raised significant concerns within the AI community about the company's commitment to ethical AI research. Criticism of Google's handling of these situations has been widespread, with some arguing that the company is not doing enough to address ethical issues in AI. The episodes have also fed a perception that Google is no longer a desirable place to work for those focused on ethical AI research. Despite Google's claims that these actions were taken for security reasons, many in the field see a troubling trend and a potential blow to the company's reputation. The impact on Google's standing in the AI community and beyond remains to be seen, but these events have clearly shaken the field and raised important questions about the role of ethics in AI research and development.

    • Challenges in creating ethical AI inside large corporations: Margaret Mitchell's sidelining from Google's AI ethics team presents an opportunity for her and Timnit Gebru to focus on independent ethical AI research, particularly in facial and voice recognition. The unreliable DeepScore app highlights the need for more ethical scrutiny of emerging technologies.

      Creating ethical AI inside large corporations with financial interests and under public criticism is a challenging task. This was highlighted in the discussion of Margaret Mitchell's sidelining from her role in AI ethics at Google. While it's unfortunate that she may no longer be part of the team, there's an opportunity for her and Timnit Gebru to start something new focused on AI ethics. That could lead to more unfettered work on the effects of AI, especially in facial and voice recognition, which has raised concerns over potential inaccuracies and ethical dilemmas. Another disappointing development was the DeepScore app, which claims to use facial and voice recognition to determine trustworthiness. With a claimed accuracy of only around 70%, the technology is far from reliable, and the arithmetic below shows why that matters. It's concerning that this app is being marketed to industries like money lending and health insurance, where trustworthiness scores can significantly affect individuals' finances and health care. This underscores the need for more research and development in ethical AI, as well as increased scrutiny of emerging technologies and their potential impact on society.
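
      To see why a 70% accurate score is troubling in lending or insurance, consider the base rates. Every number below is an illustrative assumption, not a figure from DeepScore: if most applicants are honest, even a modestly inaccurate detector flags far more innocent people than liars.

```python
# Back-of-the-envelope arithmetic for a "70% accurate" trustworthiness
# score. All inputs are illustrative assumptions.
applicants = 10_000
honest_rate = 0.95        # assume 95% of applicants are honest
accuracy = 0.70           # treat accuracy as symmetric for simplicity

honest = applicants * honest_rate          # 9,500 honest applicants
dishonest = applicants - honest            # 500 dishonest applicants

falsely_flagged = honest * (1 - accuracy)  # honest people marked untrustworthy
caught = dishonest * accuracy              # dishonest people correctly flagged

print(f"Honest applicants wrongly flagged: {falsely_flagged:.0f}")  # 2850
print(f"Dishonest applicants caught:       {caught:.0f}")           # 350
```

      Under these assumptions, roughly eight honest applicants are wrongly flagged for every dishonest one caught, which is the core statistical objection to screening a mostly honest population with a noisy test.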

    • AI in insurance fraud detection vs. healthcare: AI's potential to reduce bias and improve fairness in healthcare through self-reported data and radiographic markers is promising, while concerns over emotion analysis and privacy invasions in insurance fraud detection need to be addressed.

      While the use of AI for fraud detection in insurance is intriguing, the proposed method of analyzing facial expressions and voice signals to detect lies raises concerns, given the unreliability of emotion analysis and the potential for privacy invasion. The idea that AI could be less biased than human doctors at recognizing pain, using self-reported data and radiographic markers as presented in another study, offers a more promising approach to reducing bias in healthcare. Using AI in healthcare to remove human biases and improve fairness would be a significant step toward better patient care and greater trust in medicine. However, it's crucial that AI in both insurance fraud detection and healthcare is deployed responsibly, with a focus on ethical considerations and data privacy.

    • Correlating Self-Reported Pain with Knee X-Rays: A recent study suggests a new method to accurately correlate self-reported pain with knee x-rays, potentially improving doctors' trust in patients' reports and leading to advances in AI and medicine.

      A recent study published in Nature Medicine suggests a new way to correlate self-reported pain with knee x-rays, which could help doctors take patients' reports seriously. Current methods for estimating a patient's pain rely on outdated and potentially insufficient severity measures. The study proposes a more accurate approach by learning the correlation between self-reported pain and x-ray findings directly (a sketch of this kind of model follows below). This could drive improvements at the intersection of AI and medicine and encourage more careful analysis and study before commercialization. Co-host Sharon has been working at this intersection with her students. They've been exploring how generative models could be useful for medicine, specifically for addressing data imbalance and privacy issues in small or biased datasets, and have also been looking into predicting mortality from electronic medical records and detecting the bacteria that can lead to stomach cancer. Overall, these advances demonstrate the importance of thorough research and analysis before commercialization, as well as the potential for significant improvements in healthcare.
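
      The episode doesn't give implementation details, so the following is a hypothetical sketch of the general recipe such a study implies: train a convolutional network to regress each knee x-ray directly against the patient's own reported pain score, instead of against a radiologist's severity grade. The architecture, input shape, and 0-100 score scale are all assumptions for illustration.

```python
# Hypothetical sketch: CNN regression from knee x-rays to the
# patient's self-reported pain score. Requires `torch`/`torchvision`.
import torch
import torch.nn as nn
from torchvision import models

class PainRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        backbone = models.resnet18(weights=None)
        # X-rays are single-channel, so replace the RGB stem.
        backbone.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2,
                                   padding=3, bias=False)
        # Replace the classifier head with a single regression output.
        backbone.fc = nn.Linear(backbone.fc.in_features, 1)
        self.backbone = backbone

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.backbone(x).squeeze(-1)

model = PainRegressor()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()  # pull predictions toward the reported score

# One illustrative training step on a fake batch of 224x224 x-rays.
images = torch.randn(8, 1, 224, 224)
pain_scores = torch.rand(8) * 100   # stand-in for a 0-100 pain scale
optimizer.zero_grad()
loss = loss_fn(model(images), pain_scores)
loss.backward()
optimizer.step()
print(f"step loss: {loss.item():.2f}")
```

      The design choice that matters here is the training target: supervising on what the patient reports, rather than on a clinician's reading, is what lets the model surface pain signals that standard severity grades miss.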

    • Exploring the Role of AI in Medicine: AI holds great promise in revolutionizing medicine by assisting doctors, making processes more efficient, and improving accuracy. Potential applications include predicting pain and addressing medical challenges.

      Artificial intelligence has immense potential to revolutionize medicine by assisting doctors in their work, making processes more efficient, and improving accuracy. The podcast discussion explored the possibilities of AI in medicine alongside stories like the rogue chatbot and the trustworthiness-scoring app. While those applications grabbed headlines for the wrong reasons, the intersection of AI and medicine offers a wide range of possibilities for positive impact. AI can address specific problems in medicine, such as predicting pain, and help doctors make more informed decisions. The depth and range of potential applications is vast, and AI researchers are continually discovering new ways to apply the technology. The future of AI in medicine holds great promise, and it's worth focusing on these positive developments alongside the negative stories about facial recognition and bias. As AI continues to evolve, it will play a crucial role in improving healthcare and addressing medical challenges. To learn more about the latest developments in AI, subscribe to Skynet Today's weekly newsletter or listen to the podcast, and don't forget to leave a review if you enjoy the show.

    Recent Episodes from Last Week in AI

    #172 - Claude and Gemini updates, Gemma 2, GPT-4 Critic

    Our 172nd episode with a summary and discussion of last week's big AI news!

    With hosts Andrey Kurenkov (https://twitter.com/andrey_kurenkov) and Jeremie Harris (https://twitter.com/jeremiecharris)

    Feel free to leave us feedback here.

    Read our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Last Week in AI
    July 01, 2024

    #171 - Apple Intelligence, Dream Machine, SSI Inc

    Our 171st episode with a summary and discussion of last week's big AI news!

    With hosts Andrey Kurenkov (https://twitter.com/andrey_kurenkov) and Jeremie Harris (https://twitter.com/jeremiecharris)

    Feel free to leave us feedback here.

    Read our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + Links:

    Last Week in AI
    June 24, 2024

    #170 - new Sora rival, OpenAI robotics, understanding GPT4, AGI by 2027?

    Our 170th episode with a summary and discussion of last week's big AI news!

    With hosts Andrey Kurenkov (https://twitter.com/andrey_kurenkov) and Jeremie Harris (https://twitter.com/jeremiecharris)

    Feel free to leave us feedback here.

    Read our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + Links:

    Last Week in AI
    June 09, 2024

    #169 - Google's Search Errors, OpenAI news & DRAMA, new leaderboards

    Our 169th episode with a summary and discussion of last week's big AI news!

    Feel free to leave us feedback here: https://forms.gle/ngXvXZpNJxaAprDv6

    Read our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + Links:

    Last Week in AI
    June 03, 2024

    #168 - OpenAI vs Scar Jo + safety researchers, MS AI updates, cool Anthropic research

    Our 168th episode with a summary and discussion of last week's big AI news!

    With guest host Gavin Purcell from AI for Humans podcast!

    Read our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + Links:

    Last Week in AI
    May 28, 2024

    #167 - GPT-4o, Project Astra, Veo, OpenAI Departures, Interview with Andrey

    Our 167th episode with a summary and discussion of last week's big AI news!

    With guest host Daliana Liu (https://www.linkedin.com/in/dalianaliu/) from The Data Scientist Show!

    And a special one-time interview with Andrey in the latter part of the podcast.

    Read our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + links:

    Last Week in AI
    May 19, 2024

    #166 - new AI song generator, Microsoft's GPT4 efforts, AlphaFold3, xLSTM, OpenAI Model Spec

    Our 166th episode with a summary and discussion of last week's big AI news!

    Read our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + links:

    Last Week in AI
    May 12, 2024

    #165 - Sora challenger, Astribot's S1, Med-Gemini, Refusal in LLMs

    Our 165th episode with a summary and discussion of last week's big AI news!

    Read our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + links:

    Last Week in AI
    May 05, 2024

    #164 - Meta AI, Phi-3, OpenELM, Bollywood Deepfakes

    Our 164th episode with a summary and discussion of last week's big AI news!

    Read our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + links:

    Last Week in AI
    April 30, 2024

    #163 - Llama 3, Grok-1.5 Vision, new Atlas robot, RHO-1, Medium ban

    Our 163rd episode with a summary and discussion of last week's big AI news!

    Note: apologies for this one coming out a few days late, it got delayed in editing -Andrey

    Read our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + links:

    Last Week in AI
    April 24, 2024

    Related Episodes

    #153 - Taylor Swift Deepfakes, ChatGPT features, Meta-Prompting, two new US bills

    Our 153rd episode with a summary and discussion of last week's big AI news!

    Check out our sponsor, the SuperDataScience podcast. You can listen to SDS across all major podcasting platforms (e.g., Spotify, Apple Podcasts, Google Podcasts) plus there’s a video version on YouTube.

    Read our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + links:

    Delivering High-Scale Conversational Precision in the Outbound Call Center

    Reaching out to patients to make sure they adhere to things like treatment plans and clinical trials is often the domain of the call center. Frequently, the outbound interaction is a heavily scripted, redundant workflow. What happens when a chunk of these outbound calls is shifted to a conversational chatbot that uses text messaging to interact, remind, and collect feedback?

    In this episode of Digital Conversations, Greg Kefer and Jacob Heitler talk about a different kind of call center, one that uses chatbots to engage patients in the medium they prefer: mobile texting. The combination of high scale and content precision is a win-win, driving call center efficiency while simultaneously delivering a better patient experience.

    #131 - ChatGPT+ instructions, Microsoft reveals pricing for AI, is ChatGPT getting worse over time?

    Our 131st episode with a summary and discussion of last week's big AI news!

    Read our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai

    Timestamps + links: