
    Podcast Summary

    • Building resilience against AI attacks in democracies: Democracies can strengthen their defenses against AI manipulation by implementing measures like verified phone numbers, prebunking, backup systems, and paper ballots. Understanding the threat and focusing on actors, behaviors, and content helps maintain election integrity.

      Democracies can become more resilient against AI attacks by implementing upgrades such as verified phone numbers, prebunking, backup systems, and paper ballots for elections. These measures help build antibodies or inoculations in people's minds, making them less susceptible to deepfake videos and other forms of information manipulation. Taiwan serves as a prime example of this, having successfully navigated potential AI interference in its recent presidential election through prebunking and other measures. Audrey Tang, Taiwan's Minister of Digital Affairs, emphasizes the importance of understanding the threat and building systems for verification and content provenance. By focusing on the actors, behaviors, and content of potential threats, democracies can better protect themselves against information manipulation and maintain the integrity of their elections. Taiwan's approach, which includes a single number for all governmental SMS messages, is an effective large-scale solution to this issue.

    • Trust and Security in Digital Communication with Short Codes: Short codes ensure trust and security in digital communication, while fact-checking through collaborative efforts can combat disinformation. However, new forms of deepfakes and precision persuasion attacks may emerge, requiring ongoing adaptation.

      The use of short codes for SMS communications is a crucial aspect of trust and security in digital communication. These short codes, represented by the number 111, are non-forgeable and create a "blue check mark" of trustworthiness. Telecom companies and other organizations are increasingly using short codes for their messages. This creates two classes of senders: those that are guaranteed to be trustworthy, and those that require personal verification. Another key takeaway is the importance of collaborative fact-checking in combating disinformation. With real-time sampling and crowdsourcing, fact-checking can be done more efficiently and effectively, allowing for quick responses to viral disinformation. However, it's important to note that as technology advances, new forms of deepfakes and precision persuasion attacks may emerge. These attacks may not rely on share or repost buttons, but instead on direct messaging and individualized communication. On the positive side, the same technology can also be used to enhance deliberative polling, allowing individuals to set an agenda, speak their minds, and share personal anecdotes, providing valuable insights for policy makers. Overall, the use of technology for communication and fact-checking presents both challenges and opportunities, and it's important to stay informed and adapt to new developments.
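    The two-class sender model described above can be sketched in a few lines. This is a hypothetical illustration, not Taiwan's actual system: the `VERIFIED_SHORT_CODES` set and `classify_sender` function are invented names, and a real deployment would rely on carrier-level enforcement rather than a lookup table.

```python
# Hypothetical sketch of the two-class sender model: messages from a
# registered government short code (Taiwan consolidated its government
# SMS under the number 111) are treated as verified, and everything
# else falls into the class that requires personal verification.

VERIFIED_SHORT_CODES = {"111"}  # illustrative registry of trusted short codes

def classify_sender(sender_id: str) -> str:
    """Return 'verified' for registered short codes, else 'unverified'."""
    return "verified" if sender_id in VERIFIED_SHORT_CODES else "unverified"

print(classify_sender("111"))            # verified
print(classify_sender("+886912345678"))  # unverified
```

    The point of the design is that the trusted class is non-forgeable at the network level: a recipient never has to judge message content to know whether the sender is the government.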

    • Building Meaningful Community Relationships: Instead of preventing AI manipulation through individual relationships, focus on fostering community relationships based on shared values and interests, using deliberative polling technology to promote consensus and engagement.

      As technology advances, the potential for AI to manipulate individuals through intimate, long-term online relationships becomes a significant concern. However, instead of focusing solely on preventing these attacks, it's crucial to foster meaningful community relationships where individuals can connect based on shared values and interests. Deliberative polling technology, which uses language model analysis to bring together diverse perspectives, can help facilitate these communities. By focusing on building relationships and fostering consensus, we can mitigate the influence of fake relationships and promote a more engaged, informed population. This approach not only improves the voting process but also addresses the fundamental issue of loneliness that can lead to totalitarianism, as Hannah Arendt noted. Ultimately, it's essential to shift the focus from individual relationships to community relationships, where individuals can reflect on their values and connect with others in a meaningful way.

    • Impact of cyber attacks and info manipulation on society: Cyber attacks and info manipulation can create fear, uncertainty, and doubt, potentially polarizing society. Taiwan responded to such attacks during Pelosi's visit with quick action and deliberative polling to bridge polarizations and build deeper connections.

      Cyber attacks and information manipulation can deeply impact a society, creating fear, uncertainty, and doubt, and potentially polarizing the population. This was evident during the 2022 incident when China targeted Taiwan with cyber attacks and information manipulation during House Speaker Nancy Pelosi's visit. The attacks overwhelmed various websites and caused panic, but Taiwan quickly responded and managed to prevent significant damage. However, the real battle was in the minds of the people, as the attacks aimed to create division and distrust. To counteract this, Taiwan has focused on bridging polarizations through initiatives like deliberative polling, where diverse groups come together to discuss issues and provide input to policy makers. This approach not only helps to address the disconnect between citizens' preferences and passed policies, but also builds deeper connections and understanding among people.

    • Everyday citizens' preferences have less impact on government agendas than economic elites and special interest groups: Deliberative polling, a method that brings nuanced statements to the forefront, can help bridge the gap between citizen preferences and government agendas, but concerns about selection bias and accessibility remain.

      Everyday citizens' preferences may not significantly influence government agendas, but the preferences of economic elites and special interest groups correlate strongly with the policies that get adopted. This leads to low trust in institutions and a need for solutions like deliberative polling. Deliberative polling, as demonstrated through the use of the platform Polis during the Uber controversy in Taiwan, brings nuanced statements to the forefront and allows for a complete survey of middle-of-the-road solutions. This method has also been applied to tuning AIs, resulting in fairer and less discriminatory versions. However, concerns about selection bias and accessibility remain. In Taiwan, where broadband is affordable and treated as a human right, efforts are made to include a diverse range of voices. Polis, along with other platforms like the new petition system on x.com, shares a fundamental design without a reply button to encourage thoughtful and nuanced conversation.
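    A minimal sketch of how a Polis-style system surfaces consensus statements: participants vote agree (+1), disagree (-1), or pass (0) on short statements, and statements with broad agreement rise to the top. This is an illustration of the general idea only; the vote data, `consensus_score` function, and ranking rule here are invented for the example, and the real Polis pipeline additionally clusters participants into opinion groups.

```python
# Participants vote on statements: +1 agree, -1 disagree, 0 pass.
# Illustrative data, not from any real deliberation.
votes = {
    "p1": {"s1": 1, "s2": 1,  "s3": -1},
    "p2": {"s1": 1, "s2": -1, "s3": 1},
    "p3": {"s1": 1, "s2": -1, "s3": -1},
}

def consensus_score(statement: str) -> float:
    """Fraction of agree votes among participants who voted on it."""
    cast = [v[statement] for v in votes.values() if statement in v]
    return sum(1 for x in cast if x == 1) / len(cast) if cast else 0.0

# Rank statements so broadly-agreed ones surface first.
ranked = sorted(["s1", "s2", "s3"], key=consensus_score, reverse=True)
print(ranked)  # s1 first: everyone agreed on it
```

    Crucially, there is no reply button in this design: participants can only vote or submit new statements, which is what pushes the conversation toward nuance rather than flame wars.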

    • Using tech to bridge ideological gaps: The 'bridging bonuses algorithm' rewards those who build connections between groups, contrasting with incentives for disinformation spreaders. Practical applications include paper ballots and citizen-recorded counting for election transparency and accountability.

      Technology can be used to bridge ideological gaps and promote unity instead of causing division. The "bridging bonuses algorithm" is a system that rewards those who build connections between different groups by making the process of bridging gaps more visible and gamified. This approach contrasts with the incentives for those who spread disinformation and inflame cultural thought lines. A practical application of this concept is the use of paper ballots and citizen-recorded counting in elections to create a shared reality and prevent disinformation attacks. By trusting citizens and utilizing technology defensively, democratic institutions can build resilience against election fraud and manipulation. The use of high-definition video recording and broadband technology allows for transparency and accountability, ensuring that every vote is counted accurately and publicly. This approach not only strengthens democratic institutions but also encourages a more unified and trusting society.
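    The core idea behind a bridging bonus can be sketched as a scoring rule: instead of rewarding raw engagement, reward items endorsed by members of opposing groups. This is a hedged illustration of the concept, not the algorithm discussed in the episode; the group labels, endorsement data, and `bridging_score` function are all invented for the example.

```python
# Each item records which users endorsed it and which opinion group
# each endorser belongs to. Illustrative data only.
endorsements = {
    "post_a": [("u1", "group1"), ("u2", "group1"), ("u3", "group1")],
    "post_b": [("u1", "group1"), ("u4", "group2")],
}

def bridging_score(item: str) -> float:
    """Score an item by cross-group endorsement.

    Geometric-mean-style bonus: zero unless BOTH groups endorse,
    so purely one-sided popularity earns nothing.
    """
    groups = [g for _, g in endorsements[item]]
    n1 = groups.count("group1")
    n2 = groups.count("group2")
    return (n1 * n2) ** 0.5

print(bridging_score("post_a"))  # 0.0 -- popular with one side only
print(bridging_score("post_b"))  # 1.0 -- endorsed across the divide
```

    The contrast with engagement-maximizing feeds is visible in the scores: the post with three same-side endorsements earns nothing, while the post that bridges both groups wins.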

    • Investing in AI safety for societal benefits: Taiwan is prioritizing AI safety through the establishment of an AI Evaluation Center and international cooperation, to ensure a 'race to safety' and prevent potential risks from becoming catastrophic.

      Investing in AI safety is crucial for societies to prevent isolation and capture by persuasive bots. Taiwan, with its significant role in the production of GPUs for AI, recognizes this responsibility and is making strides in AI safety through various measures, including the establishment of an AI Evaluation Center and international cooperation. The goal is to ensure a "race to safety" rather than a race to increase AI capabilities as quickly as possible. The potential risks of AI are compared to walking on a thin ice sheet, and the importance of designing liability frameworks and defensive countermeasures when harm is discovered cannot be overstated. The challenge lies in identifying the critical point where more drastic measures may be necessary, and continuous horizon scanning and international cooperation are essential to address potential emergencies before they escalate. The historical example of the ozone layer depletion serves as a reminder of the importance of addressing potential risks before they become catastrophic.

    • Addressing the risks of AGI through international cooperation and future-proofing democracies: Urgent international cooperation and agreements are necessary to mitigate the risks of AGI, and upgrading democracies to keep pace with AI is crucial for a safer future.

      The development and implementation of Artificial General Intelligence (AGI) pose significant risks to society, and urgent international cooperation and agreements are necessary to mitigate these risks. The Montreal Protocol serves as a model for addressing specific harms that AGI could bring, but the ambiguity of where danger lies and the various types of risks involved make it crucial to foster a broader sense of prudence and caution. Future-proofing democracies requires the capacity of governance to scale with AI and the collective intelligence of humans to keep pace. The work of individuals like Audrey Tang, who are focusing on upgrading democracies and considering the implications of the AI race, is essential in creating a safer future. The Center for Humane Technology, through their podcast and non-profit work, is dedicated to catalyzing a humane future by keeping these important issues at the forefront of the conversation.

    • Sharing Knowledge and Resources: Collaboration and generosity are key to accessing valuable information and growing together in our interconnected world. Share podcasts, leave positive reviews, and support community efforts to help others discover valuable content.

      The closing segment emphasized the importance of sharing knowledge and resources. The speakers highlighted the value of making information accessible to others, whether through podcasts, transcripts, or other means, as well as the significance of community support, such as leaving positive reviews or ratings to help others discover valuable content. Overall, the conversation underscored the importance of collaboration and generosity in our interconnected world. So, let's keep sharing, learning, and growing together! And if you enjoyed this podcast, don't forget to rate it on Apple Podcasts to help spread the word.

    Recent Episodes from Your Undivided Attention

    Why Are Migrants Becoming AI Test Subjects? With Petra Molnar


    Climate change, political instability, hunger. These are just some of the forces behind an unprecedented refugee crisis that’s expected to include over a billion people by 2050. In response to this growing crisis, wealthy governments like the US and the EU are employing novel AI and surveillance technologies to slow the influx of migrants at their borders. But will this rollout stop at the border?

    In this episode, Tristan and Aza sit down with Petra Molnar to discuss how borders have become a proving ground for the sharpest edges of technology, and especially AI. Petra is an immigration lawyer and co-creator of the Migration and Technology Monitor. Her new book is “The Walls Have Eyes: Surviving Migration in the Age of Artificial Intelligence.”

    RECOMMENDED MEDIA

    The Walls Have Eyes: Surviving Migration in the Age of Artificial Intelligence

    Petra’s newly published book on the rollout of high risk tech at the border.

    Bots at the Gate

    A report co-authored by Petra about Canada’s use of AI technology in their immigration process.

    Technological Testing Grounds

    A report authored by Petra about the use of experimental technology in EU border enforcement.

    Startup Pitched Tasing Migrants from Drones, Video Reveals

    An article from The Intercept, containing the demo for Brinc’s taser drone pilot program.

    The UNHCR

    Information about the global refugee crisis from the UN.

    RECOMMENDED YUA EPISODES

    War is a Laboratory for AI with Paul Scharre

    No One is Immune to AI Harms with Dr. Joy Buolamwini

    Can We Govern AI? With Marietje Schaake

    CLARIFICATION:

    The iBorderCtrl project referenced in this episode was a pilot project that was discontinued in 2019

    Former OpenAI Engineer William Saunders on Silence, Safety, and the Right to Warn


    This week, a group of current and former employees from OpenAI and Google DeepMind penned an open letter accusing the industry's leading companies of prioritizing profits over safety. This comes after a spate of high profile departures from OpenAI, including co-founder Ilya Sutskever and senior researcher Jan Leike, as well as reports that OpenAI has gone to great lengths to silence would-be whistleblowers.

    The writers of the open letter argue that researchers have a “right to warn” the public about AI risks and laid out a series of principles that would protect that right. In this episode, we sit down with one of those writers: William Saunders, who left his job as a research engineer at OpenAI in February. William is now breaking the silence on what he saw at OpenAI that compelled him to leave the company and to put his name to this letter. 

    RECOMMENDED MEDIA 

    The Right to Warn Open Letter 

    My Perspective On "A Right to Warn about Advanced Artificial Intelligence": A follow-up from William about the letter

     Leaked OpenAI documents reveal aggressive tactics toward former employees: An investigation by Vox into OpenAI’s policy of non-disparagement.

    RECOMMENDED YUA EPISODES

    1. A First Step Toward AI Regulation with Tom Wheeler 
    2. Spotlight on AI: What Would It Take For This to Go Well? 
    3. Big Food, Big Tech and Big AI with Michael Moss 
    4. Can We Govern AI? with Marietje Schaake

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

    War is a Laboratory for AI with Paul Scharre


    Right now, militaries around the globe are investing heavily in the use of AI weapons and drones. From Ukraine to Gaza, weapons systems with increasing levels of autonomy are being used to kill people and destroy infrastructure, and the development of fully autonomous weapons shows little sign of slowing down. What does this mean for the future of warfare? What safeguards can we put up around these systems? And is this runaway trend toward autonomous warfare inevitable, or will nations come together and choose a different path? In this episode, Tristan and Daniel sit down with Paul Scharre to try to answer some of these questions. Paul is a former Army Ranger, the author of two books on autonomous weapons, and he helped the Department of Defense write much of its policy on the use of AI in weaponry.

    RECOMMENDED MEDIA

    Four Battlegrounds: Power in the Age of Artificial Intelligence: Paul’s book on the future of AI in war, which came out in 2023.

    Army of None: Autonomous Weapons and the Future of War: Paul’s 2018 book documenting and predicting the rise of autonomous and semi-autonomous weapons as part of modern warfare.

    The Perilous Coming Age of AI Warfare: How to Limit the Threat of Autonomous Warfare: Paul’s article in Foreign Affairs based on his recent trip to the battlefield in Ukraine.

    The night the world almost ended: A BBC documentary about Stanislav Petrov’s decision not to start nuclear war.

    AlphaDogfight Trials Final Event: The full simulated dogfight between an AI and human pilot. The AI pilot swept, 5-0.

    RECOMMENDED YUA EPISODES

    1. The AI ‘Race’: China vs. the US with Jeffrey Ding and Karen Hao
    2. Can We Govern AI? with Marietje Schaake
    3. Big Food, Big Tech and Big AI with Michael Moss
    4. The Invisible Cyber-War with Nicole Perlroth

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

    AI and Jobs: How to Make AI Work With Us, Not Against Us With Daron Acemoglu


    Tech companies say that AI will lead to massive economic productivity gains. But as we know from the first digital revolution, that’s not what happened. Can we do better this time around?

    RECOMMENDED MEDIA

    Power and Progress by Daron Acemoglu and Simon Johnson: Professor Acemoglu co-authored a bold reinterpretation of economics and history that will fundamentally change how you see the world

    Can We Have Pro-Worker AI? Professor Acemoglu co-authored this paper about redirecting AI development onto the human-complementary path

    Rethinking Capitalism: In Conversation with Daron Acemoglu: The Wheeler Institute for Business and Development hosted Professor Acemoglu to examine how technology affects the distribution and growth of resources while being shaped by economic and social incentives

    RECOMMENDED YUA EPISODES

    1. The Three Rules of Humane Tech
    2. The Tech We Need for 21st Century Democracy
    3. Can We Govern AI?
    4. An Alternative to Silicon Valley Unicorns

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

    Jonathan Haidt On How to Solve the Teen Mental Health Crisis


    Suicides. Self harm. Depression and anxiety. The toll of a social media-addicted, phone-based childhood has never been more stark. It can be easy for teens, parents and schools to feel like they’re trapped by it all. But in this conversation with Tristan Harris, author and social psychologist Jonathan Haidt makes the case that the conditions that led to today’s teenage mental health crisis can be turned around – with specific, achievable actions we all can take starting today.

    This episode was recorded live at the San Francisco Commonwealth Club.  

    Correction: Tristan mentions that 40 Attorneys General have filed a lawsuit against Meta for allegedly fostering addiction among children and teens through their products. The actual number is 42 Attorneys General taking legal action against Meta.

    Clarification: Jonathan refers to the Wait Until 8th pledge. By signing the pledge, a parent  promises not to give their child a smartphone until at least the end of 8th grade. The pledge becomes active once at least ten other families from their child’s grade pledge the same.

    Chips Are the Future of AI. They’re Also Incredibly Vulnerable. With Chris Miller


    Beneath the race to train and release more powerful AI models lies another race: a race by companies and nation-states to secure the hardware to make sure they win AI supremacy. 

    Correction: The latest available Nvidia chip is the Hopper H100 GPU, which has 80 billion transistors. Since the first commercially available chip had four transistors, the Hopper actually has 20 billion times that number. Nvidia recently announced the Blackwell, which boasts 208 billion transistors - but it won’t ship until later this year.

    RECOMMENDED MEDIA 

    Chip War: The Fight For the World’s Most Critical Technology by Chris Miller

    To make sense of the current state of politics, economics, and technology, we must first understand the vital role played by chips

    Gordon Moore Biography & Facts

    Gordon Moore, the Intel co-founder behind Moore's Law, passed away in March of 2023

    AI’s most popular chipmaker Nvidia is trying to use AI to design chips faster

    Nvidia's GPUs are in high demand - and the company is using AI to accelerate chip production

    RECOMMENDED YUA EPISODES

    Future-proofing Democracy In the Age of AI with Audrey Tang

    How Will AI Affect the 2024 Elections? with Renee DiResta and Carl Miller

    The AI ‘Race’: China vs. the US with Jeffrey Ding and Karen Hao

    Protecting Our Freedom of Thought with Nita Farahany

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

     

     

    Future-proofing Democracy In the Age of AI with Audrey Tang


    What does a functioning democracy look like in the age of artificial intelligence? Could AI even be used to help a democracy flourish? Just in time for election season, Taiwan’s Minister of Digital Affairs Audrey Tang returns to the podcast to discuss healthy information ecosystems, resilience to cyberattacks, how to “prebunk” deepfakes, and more. 

    RECOMMENDED MEDIA 

    Testing Theories of American Politics: Elites, Interest Groups, and Average Citizens by Martin Gilens and Benjamin I. Page

    This academic paper addresses tough questions for Americans: Who governs? Who really rules? 

    Recursive Public

    Recursive Public is an experiment in identifying areas of consensus and disagreement among the international AI community, policymakers, and the general public on key questions of governance

    A Strong Democracy is a Digital Democracy

    Audrey Tang’s 2019 op-ed for The New York Times

    The Frontiers of Digital Democracy

    Nathan Gardels interviews Audrey Tang in Noema

    RECOMMENDED YUA EPISODES 

    Digital Democracy is Within Reach with Audrey Tang

    The Tech We Need for 21st Century Democracy with Divya Siddarth

    How Will AI Affect the 2024 Elections? with Renee DiResta and Carl Miller

    The AI Dilemma

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

    U.S. Senators Grilled Social Media CEOs. Will Anything Change?


    Was it political progress, or just political theater? The recent Senate hearing with social media CEOs led to astonishing moments — including Mark Zuckerberg’s public apology to families who lost children following social media abuse. Our panel of experts, including Facebook whistleblower Frances Haugen, untangles the explosive hearing, and offers a look ahead, as well. How will this hearing impact protocol within these social media companies? How will it impact legislation? In short: will anything change?

    Clarification: Julie says that shortly after the hearing, Meta’s stock price had the biggest increase of any company in the stock market’s history. It was the biggest one-day gain by any company in Wall Street history.

    Correction: Frances says it takes Snap three or four minutes to take down exploitative content. In Snap's most recent transparency report, they list six minutes as the median turnaround time to remove exploitative content.

    RECOMMENDED MEDIA 

    Get Media Savvy

    Founded by Julie Scelfo, Get Media Savvy is a non-profit initiative working to establish a healthy media environment for kids and families

    The Power of One by Frances Haugen

    The inside story of Frances Haugen’s quest to bring transparency and accountability to Big Tech

    RECOMMENDED YUA EPISODES

    Real Social Media Solutions, Now with Frances Haugen

    A Conversation with Facebook Whistleblower Frances Haugen

    Are the Kids Alright?

    Social Media Victims Lawyer Up with Laura Marquez-Garrett

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

     

     

    Taylor Swift is Not Alone: The Deepfake Nightmare Sweeping the Internet


    Over the past year, a tsunami of apps that digitally strip the clothes off real people has hit the market. Now anyone can create fake non-consensual sexual images in just a few clicks. With cases proliferating in high schools, guest presenter Laurie Segall talks to legal scholar Mary Anne Franks about the AI-enabled rise in deepfake porn and what we can do about it.

    Correction: Laurie refers to the app 'Clothes Off.' It’s actually named Clothoff. There are many clothes remover apps in this category.

    RECOMMENDED MEDIA 

    Revenge Porn: The Cyberwar Against Women

    In a five-part digital series, Laurie Segall uncovers a disturbing internet trend: the rise of revenge porn

    The Cult of the Constitution

    In this provocative book, Mary Anne Franks examines the thin line between constitutional fidelity and constitutional fundamentalism

    Fake Explicit Taylor Swift Images Swamp Social Media

    Calls to protect women and crack down on the platforms and technology that spread such images have been reignited

    RECOMMENDED YUA EPISODES 

    No One is Immune to AI Harms

    Esther Perel on Artificial Intimacy

    Social Media Victims Lawyer Up

    The AI Dilemma

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

    Can Myth Teach Us Anything About the Race to Build Artificial General Intelligence? With Josh Schrei


    We usually talk about tech in terms of economics or policy, but the casual language tech leaders often use to describe AI — summoning an inanimate force with the powers of code — sounds more... magical. So, what can myth and magic teach us about the AI race? Josh Schrei, mythologist and host of The Emerald podcast,  says that foundational cultural tales like "The Sorcerer's Apprentice" or Prometheus teach us the importance of initiation, responsibility, human knowledge, and care.  He argues these stories and myths can guide ethical tech development by reminding us what it is to be human. 

    Correction: Josh says the first telling of "The Sorcerer’s Apprentice" myth dates back to ancient Egypt, but it actually dates back to ancient Greece.

    RECOMMENDED MEDIA 

    The Emerald podcast

    The Emerald explores the human experience through a vibrant lens of myth, story, and imagination

    Embodied Ethics in The Age of AI

    A five-part course with The Emerald podcast’s Josh Schrei and School of Wise Innovation’s Andrew Dunn

    Nature Nurture: Children Can Become Stewards of Our Delicate Planet

    A U.S. Department of the Interior study found that the average American kid can identify hundreds of corporate logos but not plants and animals

    The New Fire

    AI is revolutionizing the world - here's how democracies can come out on top. This upcoming book was authored by an architect of President Biden's AI executive order

    RECOMMENDED YUA EPISODES 

    How Will AI Affect the 2024 Elections?

    The AI Dilemma

    The Three Rules of Humane Tech

    AI Myths and Misconceptions

     

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

    Related Episodes

    Chips Are the Future of AI. They’re Also Incredibly Vulnerable. With Chris Miller


    Dr Catherine Breslin on Building Alexa to Be More Human


    In today’s episode, I speak to AI and Machine Learning Scientist and founder of Kingfisher Labs, Dr Catherine Breslin. Catherine spent several years in academic research before she joined the Amazon Alexa team during its infancy back in 2014. Whilst there, Catherine managed the Cambridge-based Alexa AI team, which was working on inventing foundational Machine Learning tech to build intelligent conversational interfaces for a myriad of devices, apps, languages and environments. The team also worked on technology that enabled the automatic speech recognition and natural language understanding behind Amazon’s Alexa.

    Catherine holds a First Class Honours degree in engineering and computer science from Oxford University, a Masters in the field of Speech, Text and Internet Technology from the University of Cambridge and a PhD in Engineering and Automatic Speech Recognition, also from Cambridge.

    In this fascinating conversation, we talk about how Catherine got into engineering and what led her to the field of speech recognition, what the early days of working on Alexa were like and what the wins and issues were when it first launched.

    We also talk about the future of smart devices, what working on Alexa has taught her about human nature, how hard it is, from a science perspective, to turn virtual assistants into true companions, and how far out we are from achieving AGI (artificial general intelligence).

    I hope you enjoy it!

    -----

    Let us know what you think of this episode and please rate, review and share - it means the world to me and helps others to find it too.

    ------

    Danielle on Twitter @daniellenewnham and  Instagram @daniellenewnham

    Catherine on Twitter @catherinebuk / Website / Instagram @catherinebreslin

    -----

    This episode was hosted by me - Danielle Newnham, a recovering founder, author and writer who has been interviewing tech founders and innovators for ten years - and produced by Jolin Cheng. 

    Series 1 of this podcast is sponsored by Sensate – the device which can help to reduce stress and anxiety in less than ten minutes a day. To get an exclusive $25 off your first purchase, simply head to Sensate and insert my discount code POD.

     

    Timnit Gebru: Is AI racist and antidemocratic? | Talk to Al Jazeera

    Artificial intelligence has become an essential part of our life, though some say there is another more sinister side to it.

    Computer scientist Timnit Gebru has been one of the most critical voices against the unethical use of AI.

    Considered one of the 100 most influential people of 2022 by Time magazine, Gebru was asked by Google to co-lead its unit focused on ethical artificial intelligence.

    But the tech giant fired her after she criticised the company’s lucrative AI work.

    Who is behind AI technology? Whose interests does it serve? And how democratic is its use?

    Timnit Gebru talks to Al Jazeera.


    Chatbots for Civic Engagement, with Simon Day

    Chatbots for Civic Engagement, with Simon Day

    Don't forget to sign up for the free Axios newsletter, and tag your best and worst examples of government social media posts with #SMandPwins and #SMandPfails on Twitter!

    Simon Day, co-founder of Apptivism, discusses how chatbots are used to increase civic engagement. By interacting with a chatbot on Facebook Messenger, citizens can give their opinion on policies from their computers or smartphones. Policymakers can then analyze the data from chatbot interactions to better shape policy. Simon breaks down how these chatbots work and describes how Apptivism is helping governments use this new technology.