
    Dr. S. Craig Watkins on Why AI’s Potential to Combat or Scale Systemic Injustice Still Comes Down to Humans

    April 03, 2024

    Podcast Summary

    • Impact of Technology on Human Connection and Community: Professor Craig Watkins discussed the complexities of digital media and AI, touching on topics like living beyond human scale, the role of community, and social and behavioral impacts, particularly in relation to race and systemic inequality. AI can identify suicidal ideation, but seeking help is crucial.

      This episode explores the impact of technology on human connection and community in today's world. Brené Brown, the host, was deeply moved by her conversation with Professor S. Craig Watkins, who provided insightful perspectives on the complexities and implications of digital media and AI in our lives. The conversation touched upon various topics, including the possibilities and costs of living beyond human scale, the role of community and IRL relationships, and the social and behavioral impacts of technology, particularly in relation to race and systemic inequality. Craig's research covers a wide range of areas, from understanding the interactions between demographic, social, and environmental factors and the distribution of chronic disease and mental health disorders, to leading a team that uses a large publicly available health dataset from the National Institutes of Health to develop models that identify patterns and trends. The episode also touched upon the use of AI in identifying suicidal ideation and emphasized the importance of seeking help if needed, with resources such as the National Suicide and Crisis Lifeline available for those in immediate danger. Overall, this conversation offered valuable insights into the complexities of living in a world shaped by technology and the importance of human connection and community.

    • The importance of thoughtful and responsible AI integration: Failure to consider the ethical implications of AI integration in high-stakes environments like education, health care, and criminal justice can lead to serious consequences.

      The integration of AI systems into high-stakes environments like education, health care, and criminal justice without proper understanding, guardrails, policies, or ethical principles can have serious consequences. Craig Watkins, a native Texan and professor at the University of Texas at Austin, emphasized this point during the conversation. He shared his personal story of being raised with a strong emphasis on education and developing a deep interest in human behavior and technology. His academic journey led him to study the intersection of AI and human behavior. Currently, he is involved in projects at UT focusing on innovation and health, as well as collaborative efforts with MIT. His TEDxMIT talk is a must-watch for insights on this topic. Overall, Craig's message underscores the importance of approaching AI integration thoughtfully and responsibly.

    • Addressing ethical concerns in AI integration in healthcare: The Good Systems project focuses on both the computational and human aspects of AI in healthcare, addressing fairness, scale, and human dimensions.

      The integration of artificial intelligence (AI) in various domains, including healthcare, raises significant ethical concerns related to fairness, scale, and human dimensions. The Good Systems project, which the speaker is a part of, aims to address these concerns by focusing on both the computational and human aspects of AI. The computational aspect involves working with large datasets to understand health disparities, while the human dimension emphasizes understanding the human concerns and aspirations for AI. The speaker's experience has shown that while organizations are eager to use AI to eliminate bias, there is a risk of scaling bias instead. For instance, hiring algorithms have been shown to have gender and race bias. The speaker's work at MIT focused on understanding the complex and nuanced ways in which race and systemic discrimination intersect with AI. It's crucial to approach AI as a multidimensional problem that requires a deep understanding of both its technical and ethical implications.

    • Building fairer algorithms in high-stakes environments: Developers are addressing historical biases in algorithms by creating race-unaware and race-aware models, but challenges remain in building truly race-neutral algorithms.

      As we continue to develop and deploy algorithms in high-stakes environments like education, healthcare, and criminal justice, there is a risk of replicating and automating historical biases. Developers are recognizing this issue and are working on creating fairer models. One approach is to build race-unaware models, stripping out any data or indicators related to race. However, recent research suggests that these models may still be able to infer race from subtle features or proxies. This raises questions about the feasibility of building truly race-neutral algorithms. Another approach is to build race-aware models, which explicitly consider how race might affect the domain of interest. Ultimately, the challenge is to build algorithms that perform their tasks fairly and accurately without replicating historical biases.
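
      To make the proxy problem concrete, here is a minimal Python sketch, with all data simulated and feature names such as zip_code and school invented as stand-ins, of how a probe model can recover race from the features left behind in a "race-unaware" dataset:

      ```python
      # Minimal sketch, assuming simulated data: even after the race column
      # is stripped, correlated "neutral" features can act as proxies.
      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.metrics import roc_auc_score
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(0)
      n = 5000
      race = rng.integers(0, 2, n)                     # protected attribute
      zip_code = 0.8 * race + rng.normal(0, 0.5, n)    # strong proxy (invented)
      school = 0.6 * race + rng.normal(0, 0.7, n)      # weaker proxy (invented)
      X_unaware = np.column_stack([zip_code, school])  # race column removed

      # Probe: can race be predicted from what's left?
      X_tr, X_te, r_tr, r_te = train_test_split(X_unaware, race, random_state=0)
      probe = LogisticRegression().fit(X_tr, r_tr)
      auc = roc_auc_score(r_te, probe.predict_proba(X_te)[:, 1])
      print(f"race recoverable from 'neutral' features: AUC = {auc:.2f}")
      # An AUC well above 0.5 means "unaware" did not make the model race-neutral.
      ```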

    • Multidisciplinary approach needed for addressing racial bias in AI: Experts from the social sciences, humanities, design, and ethics must join computational experts to address systemic racial bias in AI and machine learning, recognizing that the complexity of these systems requires a multifaceted response.

      Effective solutions to address racial bias and discrimination in AI and machine learning require a multidisciplinary approach. Good intentions are not enough, as these issues are systemic and structural, not just interpersonal. To truly make a difference, experts from various fields, including social sciences, humanities, design, and ethics, need to be involved in the conversation. The complexity of these systems necessitates a complex and multifaceted response. For instance, a team at UT Austin, funded by the National Institutes of Health, is working on developing AI and machine learning models to understand the driving factors behind the increasing rates of suicide among young African Americans. This project recognizes the need for not just computational expertise, but also domain expertise in areas like structural inequality and mental health. By bringing together diverse perspectives and expertise, the team hopes to create a more comprehensive and impactful solution. This approach is becoming increasingly common in the field of AI and machine learning, as the limitations of a single-discipline approach are recognized.

    • Building AI with diverse populations: Collaborating with experts and stakeholders ensures AI is appropriate and inclusive, incorporating both quantitative and qualitative data for valuable insights.

      Building AI that serves and understands diverse populations requires collaboration between computational teams and experts from the field, including behavioral health specialists and community stakeholders. This approach, known as building AI "with" rather than "for" people, ensures that datasets are appropriate and provide sufficient insight into the unique experiences and challenges of the population. It also highlights the importance of incorporating both quantitative and qualitative data, such as stories and unstructured information, into AI research. This new standard in AI research is being embraced by funding bodies like the National Institutes of Health and National Science Foundation, who recognize the need for diverse perspectives and experiences in designing and deploying AI systems. For Brené, trained as a social worker, this approach aligns with the importance of including people with lived experience in decision-making processes. By leveraging computational techniques to analyze unstructured data, researchers can gain valuable insights into the language, environmental, behavioral, and social support triggers that contribute to complex issues like suicide in diverse populations.
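
      As a rough illustration of what mining unstructured text looks like computationally, here is a minimal sketch; the example texts, labels, and flagging framing are invented, and any real study would use vetted datasets with domain-expert coding and human review:

      ```python
      # Minimal sketch, assuming toy data: surface which words and phrases
      # in short free-text "stories" distinguish flagged from unflagged cases.
      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.linear_model import LogisticRegression

      texts = [
          "feeling isolated lately, no one to talk to",
          "lost my job and cannot see a way forward",
          "great week, spent time with family and friends",
          "started a new class, feeling hopeful about it",
      ]
      labels = [1, 1, 0, 0]  # 1 = flagged for follow-up by a human reviewer

      vectorizer = TfidfVectorizer(ngram_range=(1, 2))
      X = vectorizer.fit_transform(texts)
      model = LogisticRegression().fit(X, labels)

      # The qualitative signal lives in the language features themselves.
      features = vectorizer.get_feature_names_out()
      for weight, phrase in sorted(zip(model.coef_[0], features), reverse=True)[:5]:
          print(f"{phrase!r}: {weight:+.2f}")
      ```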

    • Studying unstructured qualitative data for mental health and social issues: Craig Watkins emphasizes the importance of studying unstructured data, like language and stories, to identify potential crises and intervene early. He also highlights the need to address complex, interconnected issues like systemic discrimination with a nuanced, interdisciplinary approach.

      Understanding the complexity of human communication, particularly in the context of mental health and social issues, can provide valuable insights. Watkins emphasizes the importance of studying unstructured qualitative data, such as language and stories, to identify potential crises and intervene before they escalate. He also highlights the need to address the interconnectedness and complexity of systemic discrimination, including interpersonal, institutional, and structural racism. These issues are not easily understood through simple mathematical models and require a nuanced, interdisciplinary approach. Watkins's work demonstrates the power of explaining complex concepts in accessible ways and the potential of technology to help us better understand and address these critical issues.

    • Understanding interconnected social disparities: Building AI systems that align with societal values and don't perpetuate biases requires a multidisciplinary approach and addressing the root causes of social disparities.

      Understanding the interconnectedness of various social disparities, such as health, employment, education, and housing, requires a complex and comprehensive dataset. While researchers have made progress in predictive models for specific domains, the challenge lies in understanding how these disparities influence each other and building models that can capture this complexity. Predictive policing is an example of this issue, as it can perpetuate historical biases and lead to unintended consequences that are misaligned with societal values. The "alignment problem" refers to the challenge of building AI systems that align with our values and do not lead to unintended consequences. It's important to recognize that these systems are not neutral and can perpetuate existing biases and disparities if not designed and implemented carefully. Ultimately, addressing these complex issues requires a multidisciplinary approach and a commitment to understanding and addressing the root causes of social disparities.
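
      A toy simulation can make the predictive-policing feedback loop concrete, in the spirit of published work on runaway feedback loops; every number below is invented:

      ```python
      # Minimal sketch, assuming invented numbers: two districts with the same
      # true crime rate, patrols sent wherever recorded incidents are highest,
      # and patrols in turn generating new records.
      true_rate = 0.10            # identical underlying rate in both districts
      records = [120, 80]         # historical records already skew district 0

      for step in range(5):
          target = 0 if records[0] >= records[1] else 1  # patrol the "hot spot"
          records[target] += int(1000 * true_rate)       # patrols create records
          share = records[0] / sum(records)
          print(f"step {step}: district 0 share of records = {share:.2f}")
      # District 0's share climbs every step even though the true rates never
      # differed: the system is "working as designed" on self-generated data.
      ```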

    • Defining fairness and alignment in AI systems: Historical biases and discrimination can be embedded in AI systems if those with significant power and influence lack expertise or training. To ensure fairness and alignment, underrepresented communities must be involved in development and definition.

      The definition and understanding of alignment and fairness in AI systems are currently being shaped by those with significant power and influence, often without the necessary expertise or training. This can lead to systems that are aligned with historical biases and discrimination, rather than true fairness and equality. For example, hiring algorithms and risk assessment models have been shown to discriminate against certain groups based on historical patterns of human decision-making. However, simply automating and scaling these biased decisions does not make them any less problematic. To truly address these issues, it's essential to recognize the expertise and experiences of underrepresented communities and involve them in the development and definition of fair and aligned AI systems. It's important to remember that these systems are not broken; they are working as designed, and the design encodes historical biases. We must challenge those in power to redefine alignment and fairness in a more inclusive and equitable way now, rather than discovering in ten years that the algorithms were never broken but working exactly as designed. That means actively redefining and implementing fairness and alignment in AI systems from the ground up.
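
      One way to see "working as designed" in miniature is to train a model on simulated historical hiring decisions that penalized one group and watch it reproduce the penalty; everything below is synthetic:

      ```python
      # Minimal sketch, assuming synthetic data: a model fit to biased
      # historical hiring labels faithfully reproduces the bias at scale.
      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(2)
      n = 4000
      group = rng.integers(0, 2, n)
      skill = rng.normal(0, 1, n)

      # Historical decisions: skill mattered, but group 1 was penalized.
      hired = (skill - 0.8 * group + rng.normal(0, 0.5, n)) > 0

      X = np.column_stack([skill, group])
      model = LogisticRegression().fit(X, hired)

      for g in (0, 1):
          rate = model.predict(X[group == g]).mean()
          print(f"group {g}: predicted hire rate {rate:.1%}")
      # High accuracy against the training labels -- and exactly as biased
      # as the history the model learned from.
      ```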

    • Implicit biases in AI development can lead to disparities: Recognizing and addressing implicit biases in AI development is crucial for equitable outcomes, requiring intentional efforts and explicit data.

      While the intent behind creating advanced AI systems may not be to perpetuate social and economic disparities, the lack of diversity and lived experiences among those building and designing these systems can lead to implicit biases that significantly impact their performance and ability to address disparities. These biases can manifest in various ways, such as racial or gender bias in hiring processes, and can be difficult to identify and address without explicit data or a conscious effort to ask the right questions. The absence of intent does not diminish the harm caused by these biases, and it's essential for those building AI systems to recognize and address these issues to ensure equitable outcomes. The CEO example shared illustrates this challenge, as the company realized they had no way to understand the impact of racial bias on their hiring process, and needed to ask for explicit data to make meaningful changes.
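
      The explicit measurement the CEO example calls for can start very simply: compute selection rates by group and compare them, as in the EEOC's "four-fifths" rule of thumb. The counts below are invented for illustration:

      ```python
      # Minimal sketch, assuming invented counts: selection rates by group
      # and the adverse-impact ratio used in the four-fifths rule.
      applicants = {"group_a": 400, "group_b": 250}
      hires = {"group_a": 60, "group_b": 20}

      rates = {g: hires[g] / applicants[g] for g in applicants}
      impact_ratio = min(rates.values()) / max(rates.values())

      for g, r in rates.items():
          print(f"{g}: selection rate {r:.1%}")
      print(f"adverse-impact ratio: {impact_ratio:.2f}")
      if impact_ratio < 0.8:
          print("below the 4/5 threshold -- flag the pipeline for review")
      ```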

    • Bias in Technology Systems: Failing to acknowledge and address biases in technology systems can limit the pool of viable candidates for certain populations and negatively impact company performance. It's crucial to shift focus towards augmenting human intelligence with technology, rather than replacing it, and to ensure diverse voices are included in discussions.

      The lack of awareness and understanding of potential biases in technology systems, such as those used in hiring processes, can lead to significant problems and negative impacts, even if unintentional. These historical biases can diminish the quality and performance of companies by limiting the pool of viable candidates for certain populations. Without actively collecting and analyzing data related to race, gender, and other factors, companies may not even realize these issues are occurring. This is a pressing concern as AI and machine learning continue to scale in society, and it's crucial that we shift our focus towards augmenting human intelligence and capacity with technology, rather than replacing it. The consequences of getting it wrong can be severe, as seen in instances of incorrect identification and arrests. We must engage in open conversations about the role of AI in our organizations and society as a whole, and ensure that diverse voices are included in the discussion.

    • Preventing automated bias in AI development: To prevent automated bias in AI, it's essential to bring diverse voices and perspectives to the table during development and ensure that models are not reinforcing existing biases. Human oversight and expertise are crucial to correct mistakes and ensure ethical decision-making.

      As we continue to develop and rely on artificial intelligence and automation, it's crucial to bring diverse voices and perspectives to the table to prevent automated bias and ensure we're moving towards augmentation rather than automation. The example of flawed facial recognition systems highlights how biases can be baked into models during development, leading to incorrect results. However, it's not just the systems that get it wrong – humans can also make mistakes in enforcing the output. The increasing reliance on machines for decision-making can undermine human expertise, compassion, and common sense. The fear and unease around AI, as expressed by various stakeholders, underscores the importance of addressing these issues and fostering a more inclusive and thoughtful approach to AI development.
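
      A sketch of the auditing idea behind the facial recognition example: disaggregate error rates by subgroup instead of reporting one aggregate accuracy number. The predictions here are simulated; a real audit would use the deployed system's actual outputs:

      ```python
      # Minimal sketch, assuming simulated outputs: per-subgroup false-match
      # rates reveal a disparity a single accuracy number would hide.
      import numpy as np

      rng = np.random.default_rng(1)
      groups = np.array(["group_a"] * 500 + ["group_b"] * 500)
      truth = rng.integers(0, 2, 1000)                   # 1 = genuine match
      noise = np.where(groups == "group_b", 0.25, 0.05)  # system errs more on b
      flip = rng.random(1000) < noise
      pred = np.where(flip, 1 - truth, truth)

      for g in ("group_a", "group_b"):
          m = groups == g
          false_match = (pred[m & (truth == 0)] == 1).mean()
          print(f"{g}: false-match rate {false_match:.1%}")
      print(f"overall accuracy: {(pred == truth).mean():.1%}")
      ```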

    • Balancing AI in high-stakes environments: Ensure diverse voices, expertise, and experiences are involved in AI implementation in high-stakes environments to promote positive and equitable impact.

      The rapid advancement of technology, particularly AI, in high-stakes environments like education, healthcare, and criminal justice, without proper understanding, guardrails, policies, and ethical principles, can lead to uncertainty, confusion, and potential harm. The distribution of power is currently imbalanced, and the fear of being irrelevant drives some to overlook the importance of domain expertise, lived experiences, and multidisciplinary teams. The next decade may bring significant challenges as policy and protections struggle to keep pace with technology. To mitigate these risks, it's crucial to involve diverse voices, expertise, and experiences in the conversation and decision-making processes. The goal is to ensure that the impact of AI-based solutions is positive and equitable for all.

    • Embracing vulnerability with courage and strength: Courageously face vulnerability, appreciate unique experiences, and tap into inner strengths to unleash hidden talents.

      Vulnerability, on a personal level, requires immense courage and strength, even if it feels scary and uncertain. Craig Watkins, in our discussion, shared how prayer and summoning inner strength can help in these moments. He also emphasized the importance of embracing and appreciating unique experiences and talents, as seen in his admiration for Donald Glover's work. From his favorite TV shows and movies to music and meals, Watkins values the underdog stories and the potential they hold. Ultimately, he encourages us to tap into our inner strengths and unleash our hidden talents, just like the characters in Good Will Hunting.

    • Personal experiences and emotions add depth to a meal and life: Appreciating simple moments and the values instilled by loved ones contributes to success and happiness, while ethical policies and the right people are crucial in the development of AI.

      The combination of tomatoes, coconut milk, fresh fish, red peppers, onions, paprika, and cumin creates a delightful meal, but the unique touch that makes it special comes from personal experiences and emotions, which cannot be replicated by AI. The speaker finds joy in simple moments, such as going on walks, listening to music or podcasts, and appreciating the environment around him. He is deeply grateful for his mother's influence in his life, which instilled in him the confidence and values that led him to think big and strive for success. The potential of AI is vast, but it is crucial to ensure that ethical and socially just policies are in place and that the right people are involved to mitigate potential risks. The speaker's work, as discussed in his TEDxMIT talk, is essential for understanding and navigating the complexities of social media and building a world that aligns with our values.

    • Exploring the Future of Work: This series covers challenges and opportunities in remote work, work-life balance, and more, offering valuable insights for the future of work.

      The "Future of Work" special series on the Prop g podcast, hosted by Scott Galloway, offers valuable insights on various aspects of the future of work. This series, produced by Brene Brown Education and Research Group and sponsored by Canva, covers topics ranging from the challenges of remote work to managing work-life balance during major opportunities. Listeners can expect a mix of provocative and technical insights that they won't want to miss. To stay updated and listen to new episodes, follow the Prop g podcast on your favorite podcast app or visit podcast.voxmedia.com for more award-winning shows.

    Recent Episodes from Unlocking Us with Brené Brown

    Futurist Amy Webb on What's Coming (and What's Here)
    Quantitative futurist Amy Webb talks to us about the three technologies that make up the "super cycle" that we're all living through right now: artificial intelligence, wearable devices, and biotechnology, and why, despite the unnerving change, we still need to do some serious future planning.

    New York Times Journalists Jennifer Valentino-DeVries and Michael H. Keller on "A Marketplace of Girl Influencers Managed by Moms and Stalked by Men"
    Brené interviews New York Times journalists Jennifer Valentino-DeVries and Michael H. Keller, who talk about their investigation into girl influencers and what's driving the larger influencer culture across social media. This is the fourth episode in our series on the possibilities and costs of living beyond human scale. Please note: As part of this conversation, we talk about the pervasive sexualization of young girl influencers (and girls in general) and the predatory nature of the comments they receive online.

    Dr. S. Craig Watkins on Why AI’s Potential to Combat or Scale Systemic Injustice Still Comes Down to Humans
    In this episode, Brené and Craig discuss what is known in the AI community as the “alignment problem” — who needs to be at the table in order to build systems that are aligned with our values as a democratic society? And, when we start unleashing these systems in high stakes environments like education, healthcare, and criminal justice, what guardrails, policies, and ethical principles do we need to make sure that we’re not scaling injustice? This is the third episode in our series on the possibilities and costs of living beyond human scale, and it is a must-listen! Please note: In this podcast, Dr. Watkins and Brené talk about how AI is being used across healthcare. One topic discussed is how AI is being used to identify suicidal ideation. If you or a loved one is in immediate danger, please call or text the National Suicide & Crisis Lifeline at 988 (24/7 in the US). If calling 911 or the police in your area, it is important to notify the operator that it is a psychiatric emergency and ask for police officers trained in crisis intervention or trained to assist people experiencing a psychiatric emergency.

    Dr. William Brady on Social Media, Moral Outrage and Polarization
    This is the second episode in our series on the possibilities and costs of living beyond human scale. In this episode, Brené and William discuss group behavior on social media and how we show up with each other online versus offline. We’ll also learn about the specific types of content that fuel algorithms to amplify moral outrage and how they tie to our search for belonging.

    Esther Perel on New AI – Artificial Intimacy
    In this first episode in a series on the possibilities and costs of living beyond human scale, Brené and Esther Perel discuss how we manage the paradox of exploring the world of social media and emerging technologies while staying tethered to our humanness. How do we create IRL relationships where we see and value others and feel seen and valued in the context of constant scrolling and using digital technology as armor?

    Khaled Elgindy on his book: Blind Spot: America and the Palestinians, from Balfour to Trump
    Khaled Elgindy is a senior fellow at the Middle East Institute where he also directs MEI’s Program on Palestine and Israeli-Palestinian Affairs. He is the author of the book, Blind Spot: America and the Palestinians, from Balfour to Trump. In this episode we talk about the internal political struggles among Palestinian leadership and the US’s involvement in the failed peace agreements between Israel and Palestine.

    Rula Daood and Alon-Lee Green on Standing Together
    Standing Together is a grassroots movement mobilizing Jewish and Palestinian citizens of Israel in pursuit of peace, equality, social, and climate justice. In this podcast, we talk to National co-director Rula Daood and Founding National co-director Alon-Lee Green on what it means to build a movement, to organize people, and what it means to build political will to end the occupation and create equity for all people.

    Ali Abu Awwad and Robi Damelin on Nonviolence as The Path to Freedom for Palestinians and Israelis
    The Parents Circle – Families Forum (PCFF) is a joint Israeli-Palestinian organization of over 600 families, all of whom have lost an immediate family member to the ongoing conflict. In this podcast, we talk to their spokesperson and bereaved mother, Robi Damelin, and Ali Abu Awwad. Ali was imprisoned by Israel for his resistance, bereaved of his brother by a soldier’s gun, and is the founding leader of Taghyeer, a nonviolent movement for social and political change.

    Related Episodes

    Artificial intelligence and insurance, part 2: Rise of the machine-learning models

    In our second Critical Point episode about AI applications in insurance, we drill down into the topic of machine learning and particularly its evolving uses in healthcare. Milliman Principal and Consulting Actuary Robert Eaton leads a conversation with fellow data science leaders about the models they use, the challenges of data accessibility and quality, and working with regulators to ensure fairness. They also pick sides in the great debate of Team Stochastic Parrot versus Team Sparks AGI. 

    You can read the episode transcript on our website.

    To PhD or not to PhD, AI Bias, Facial Recognition Ethics, GPT-3

    This week:

    0:00 - 0:40 Intro
    0:40 - 5:00 News Summary segment
    5:00 - News Discussion segment

    Find this and more in our text version of this news roundup:  https://lastweekin.ai/p/93

    Music: Deliberate Thought, Inspired by Kevin MacLeod (incompetech.com)

    Bias in Twitter & Zoom, LAPD Facial Recognition, GPT-3 Exclusivity

    Our latest episode with a summary and discussion of last week's big AI news!

    This week:
    • Twitter and Zoom’s algorithmic bias issues
    • Despite past denials, LAPD has used facial recognition software 30,000 times in the last decade, records show
    • We’re not ready for AI, says the winner of a new $1m AI prize
    • How humane is the UK’s plan to introduce robot companions in care homes?
    • OpenAI is giving Microsoft exclusive access to its GPT-3 language model

    0:00 - 0:40 Intro
    0:40 - 5:00 News Summary segment
    5:00 - News Discussion segment

    Find this and more in our text version of this news roundup: https://www.skynettoday.com/digests/the-eighty-fourth

    Music: Deliberate Thought, Inspired by Kevin MacLeod (incompetech.com)

    Künstliche Intelligenz – der Weg in die Zukunft? (Artificial Intelligence – the Path to the Future?)
    "Künstliche Intelligenz" (artificial intelligence), or KI for short, is regarded as a key technology for the future. But what exactly do we mean when we talk about machine learning or deep learning? Which challenges can AI solve faster or better? And where do the limits of AI lie? In conversation with Vanessa Cann, co-chair of the KI-Bundesverband (German AI Association), and Prof. Dr. Kristian Kersting, AI researcher at the Technische Universität Darmstadt, we pursue these and other questions. Our guests talk about the concrete potential of artificial intelligence for companies, but also discuss the limits of AI solutions and possible interactions with global issues such as climate protection.