
    Mini Episode: TikTok, Cheap Deepfakes, AI in 2020, and Deference

    August 09, 2020

    Podcast Summary

    • AI-driven platforms and data collection: Microsoft's acquisition of TikTok highlights the value of AI-driven platforms for data collection and innovation, while the ease and affordability of creating deepfakes pose a threat to authenticity and trust.

      Technology companies are increasingly looking to AI-driven platforms for data collection and innovation. Microsoft's potential acquisition of TikTok is a prime example: the social media app's vast trove of video data could significantly advance Microsoft's AI capabilities. Meanwhile, the ease and affordability of creating deepfakes with AI pose a significant threat to authenticity and trust; in one case, convincing deepfake images of Tom Hanks were produced at very low cost. As the technology continues to evolve, it is crucial to stay informed and vigilant against potential misuses of AI.

    • Deepfakes and AI bias intersect, requiring continued investment: Deepfakes, with improving quality, could spread disinformation. Society should invest in defenses while updating AI training data to reflect new realities and reduce bias.

      Deepfakes, while not an immediate threat today, have the potential to become a powerful tool for spreading disinformation. Researcher Tim Hwang argues that society should invest in defenses against deepfakes, as their quality is improving and their use could become more widespread. At the same time, sudden shifts in social and cultural norms, such as those brought about by the COVID-19 pandemic and civil rights movements, are making it difficult for AI to accurately categorize images and understand new realities. For example, a photo of a father working at home while his son plays would be categorized as leisure, rather than work, by current AI models. The shift in norms also presents an opportunity to update AI training data and reduce bias, though new content must be created carefully to avoid introducing bias of its own. Overall, the intersection of deepfakes and AI bias highlights the need for continued investment in technology and ethical safeguards to mitigate their negative impacts.

    • AI adapts to human decision-making strengths: MIT researchers developed an AI system that optimizes when to defer to a human collaborator based on that person's strengths and weaknesses, using one machine learning model to make the decision and a second to predict who should decide.

      As AI becomes more prevalent, how humans and AI should share decision-making is becoming a significant question. In the medical field, for instance, determining when an AI should defer to human judgment is a pressing issue. Researchers at MIT's Computer Science and AI Lab (CSAIL) have developed an AI system that optimizes when to defer based on the strengths and weaknesses of a human collaborator. The system uses two separate machine learning models: one that makes the actual decision and another that predicts whether the human or the AI is the better decision-maker for a given input. The researchers found that the system adapted to an expert's behavior and deferred when appropriate on tasks like image recognition and hate speech detection. These results should be interpreted with caution, however, as real-world decisions are far more complex than lab scenarios; the hybrid approach could eventually be applied to high-stakes decisions in healthcare and other fields, but extensive testing and iteration are needed first. In short, AI-human collaboration in decision-making is promising, but these developments warrant a critical and careful mindset.
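
      To make the two-model setup concrete, here is a minimal sketch in Python of how such a deferral pipeline might be wired together. It is an illustration under stated assumptions, not the MIT team's actual implementation: the scikit-learn models, the training signal, and the ask_expert callback are all hypothetical stand-ins, and the published system trains its components in a more integrated fashion.

        # Minimal sketch of a "learning to defer" setup, assuming scikit-learn.
        # All names here are illustrative, not taken from the MIT system.
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        classifier = LogisticRegression(max_iter=1000)  # makes the actual decision
        rejector = LogisticRegression(max_iter=1000)    # predicts when to defer

        def fit(X, y, expert_preds):
            """Train both models from labeled data plus the expert's past answers."""
            classifier.fit(X, y)
            # Deferring helps exactly where the expert was right and the AI was
            # wrong, so use that comparison as the rejector's training signal
            # (assumes both defer and non-defer cases appear in the data).
            ai_correct = classifier.predict(X) == y
            expert_correct = expert_preds == y
            rejector.fit(X, (expert_correct & ~ai_correct).astype(int))

        def decide(x, ask_expert):
            """Route one input either to the classifier or to the human expert."""
            x = np.asarray(x).reshape(1, -1)
            if rejector.predict(x)[0] == 1:
                return ask_expert(x)            # human is likelier to be right here
            return classifier.predict(x)[0]     # AI decides on its own

      The key design choice the episode describes is visible here: the rejector does not merely measure the AI's confidence, it also models the human's strengths and weaknesses, so the system can hand off exactly those cases where the expert tends to outperform the model.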

    Recent Episodes from Last Week in AI

    #171 - Apple Intelligence, Dream Machine, SSI Inc

    Our 171st episode with a summary and discussion of last week's big AI news!

    With hosts Andrey Kurenkov (https://twitter.com/andrey_kurenkov) and Jeremie Harris (https://twitter.com/jeremiecharris)

    Feel free to leave us feedback here: https://forms.gle/ngXvXZpNJxaAprDv6

    Check out our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + Links:

    Last Week in AI
    June 24, 2024

    #170 - new Sora rival, OpenAI robotics, understanding GPT4, AGI by 2027?

    Our 170th episode with a summary and discussion of last week's big AI news!

    With hosts Andrey Kurenkov (https://twitter.com/andrey_kurenkov) and Jeremie Harris (https://twitter.com/jeremiecharris)

    Feel free to leave us feedback here: https://forms.gle/ngXvXZpNJxaAprDv6

    Check out our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + Links:

    Last Week in AI
    June 09, 2024

    #169 - Google's Search Errors, OpenAI news & DRAMA, new leaderboards

    Our 169th episode with a summary and discussion of last week's big AI news!

    Feel free to leave us feedback here: https://forms.gle/ngXvXZpNJxaAprDv6

    Check out our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + Links:

    Last Week in AI
    June 03, 2024

    #168 - OpenAI vs Scar Jo + safety researchers, MS AI updates, cool Anthropic research

    Our 168th episode with a summary and discussion of last week's big AI news!

    With guest host Gavin Purcell from AI for Humans podcast!

    Check out our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + Links:

    Last Week in AI
    May 28, 2024

    #167 - GPT-4o, Project Astra, Veo, OpenAI Departures, Interview with Andrey

    Our 167th episode with a summary and discussion of last week's big AI news!

    With guest host Daliana Liu (https://www.linkedin.com/in/dalianaliu/) from The Data Scientist Show!

    And a special one-time interview with Andrey in the latter part of the podcast.

    Check out our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + links:

    Last Week in AI
    May 19, 2024

    #166 - new AI song generator, Microsoft's GPT4 efforts, AlphaFold3, xLSTM, OpenAI Model Spec

    Our 166th episode with a summary and discussion of last week's big AI news!

    Check out our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + links:

    Last Week in AI
    May 12, 2024

    #165 - Sora challenger, Astribot's S1, Med-Gemini, Refusal in LLMs

    Our 165th episode with a summary and discussion of last week's big AI news!

    Check out our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + links:

    Last Week in AI
    May 05, 2024

    #164 - Meta AI, Phi-3, OpenELM, Bollywood Deepfakes

    Our 164th episode with a summary and discussion of last week's big AI news!

    Check out our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + links:

    Last Week in AI
    April 30, 2024

    #163 - Llama 3, Grok-1.5 Vision, new Atlas robot, RHO-1, Medium ban

    Our 163rd episode with a summary and discussion of last week's big AI news!

    Note: apologies for this one coming out a few days late; it got delayed in editing. -Andrey

    Check out our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + links:

    Last Week in AI
    April 24, 2024

    #162 - Udio Song AI, TPU v5, Mixtral 8x22, Mixture-of-Depths, Musicians sign open letter

    Our 162nd episode with a summary and discussion of last week's big AI news!

    Check out our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + links:

    Last Week in AI
    April 15, 2024

    Related Episodes

    The Information Disorder War

    How do you tackle Misinformation, Disinformation, and Malinformation in the age of Artificial Intelligence, Social Media, and Hyper Partisanship? What tools can be used to fight “Information Disorder”? Why is Finland unique in this respect and how can you effectively teach digital & media literacy? Join us as we travel to Spain for an illuminating conversation with Dr. Kari Kivinen, one of the leading global experts on Misinformation & Disinformation. Dr. Kivinen is the former Secretary-General of the European School system and the ex-Head of the French-Finnish School of Helsinki where he championed the Finnish Curriculum, which teaches students how to identify Misinformation, Disinformation and Propaganda from the primary school level.

    Deepfakes and online misinformation in India’s election

    A massive general election is currently underway in India. It’s been described as the “largest democratic exercise in history.” And tech platforms are a big part of it. Many Indian voters get their information online, where misinformation and disinformation can spread quickly. That includes deepfakes of prominent public figures, like Bollywood actor Aamir Khan, spreading false information about who or which political parties they are endorsing. Marketplace’s Lily Jamali spoke with Raman Jit Singh Chima, Asia Pacific policy director and senior international counsel with the international human rights group Access Now, about how deepfakes and online misinformation have become a problem for voters in India. They also discuss a recent report from Access Now and Global Witness, an environmental and human rights nonprofit, about YouTube’s advertisement moderation standards in India.

    A Deep Dive on Deepfakes

    This is a bonus episode of the podcast in a two-part series covering a Nisos project with New York University (NYU) on the technical considerations for creating deepfakes, as well as the processes and procedures used to detect them.

    (00:28) Introductions
    (01:06) Overview of Nisos Deepfake Project
    (02:21) Question 1 - What kind of research did you have to do to start making deepfake videos? 
    (03:51) Question 2 - Are there a lot of open source libraries intended to make building deepfakes easier? 
    (05:20) Question 3 - What was the most difficult part of the process for building a deepfake video?
    (06:15) Question 4 - How difficult is it to make deepfake audio?
    (06:56) Question 5 - How can we detect deepfakes?

    The Dark Side of Deepfakes: How Misinformation Can Be Weaponized

    "Deepfakes: The Opportunities and Risks of Synthetic Media"Deepfakes are a form of synthetic media that use artificial intelligence to manipulate audio and video to create realistic simulations of real people. While deepfake technology has the potential to be used for harmless entertainment or practical purposes, it also presents significant risks to society. The potential for the spread of misinformation or disinformation, harm to individuals, perpetration of fraud or other crimes, and devaluation of visual evidence are all potential concerns. As deepfake technology continues to evolve, it will be important for society to carefully consider its potential uses and implications.

    KURIOUS - FOR ALL THINGS STRANGE

    When Seeing No Longer Means Believing, What’s a Government to Do?

    Deepfakes have a range of compelling applications in modern communication and entertainment, but the techniques underpinning them can also be repurposed for nefarious uses. This form of synthetic media, which often presents people doing and saying things they never actually did or said, has risen in popularity over the last several years, including seemingly real videos of public officials that have gone viral. And deepfakes have caught the attention of America's federal and state governments, which are trying to figure out just what to do about these powerful tools for disinformation.