
    Episode 19: The Murky Climate and Environmental Impact of Large Language Models, November 6 2023

    November 08, 2023

    About this Episode

    Drs. Emma Strubell and Sasha Luccioni join Emily and Alex for an environment-focused hour of AI hype. How much carbon does a single use of ChatGPT emit? What about the water or energy consumption of manufacturing the graphics processing units that train various large language models? Why even catastrophic estimates from well-meaning researchers may not tell the full story.

    This episode was recorded on November 6, 2023.

    References:

    "The Carbon Footprint of Machine Learning Training Will Plateau, Then Shrink" 

    "The Carbon Emissions of Writing and Illustrating Are Lower for AI than for Humans" 

    The growing energy footprint of artificial intelligence
    - New York Times coverage: "AI Could Soon Need as Much Electricity as an Entire Country"

    "Energy and Policy Considerations for Deep Learning in NLP."
    "The 'invisible' materiality of information technology."
    "Counting Carbon: A Survey of Factors Influencing the Emissions of Machine Learning"
    "AI is dangerous, but not for the reasons you think." 

    Fresh AI Hell:

    Not the software to blame for deadly Tesla autopilot crash, but the company selling the software.

    4chan Uses Bing to Flood the Internet With Racist Images
    Followup from Vice: Generative AI Is a Disaster, and Companies Don’t Seem to Really Care

    Is this evidence for LLMs having an internal "world model"?

    “Approaching a universal Turing machine”

    Americans Are Asking AI: ‘Should I Get Back With My Ex?’


    You can check out future livestreams at https://twitch.tv/DAIR_Institute.


    Follow us!

    Emily

    Alex

    Music by Toby Menon.
    Artwork by Naomi Pleasure-Park.
    Production by Christie Taylor.

    Recent Episodes from Mystery AI Hype Theater 3000

    Episode 27: Asimov's Laws vs. 'AI' Death-Making (w/ Annalee Newitz & Charlie Jane Anders), February 19 2024


    Science fiction authors and all-around tech thinkers Annalee Newitz and Charlie Jane Anders join this week to talk about Isaac Asimov's oft-cited and equally often misunderstood laws of robotics, as debuted in his short story collection, 'I, Robot.' Meanwhile, both global and US military institutions are declaring interest in 'ethical' frameworks for autonomous weaponry.

    Plus, in AI Hell, a ballsy scientific diagram heard 'round the world -- and a proposal for the end of books as we know it, from someone who clearly hates reading.

    Charlie Jane Anders is a science fiction author. Her recent and forthcoming books include Promises Stronger Than Darkness in the ‘Unstoppable’ trilogy, the graphic novel New Mutants: Lethal Legion, and the forthcoming adult novel Prodigal Mother.

    Annalee Newitz is a science journalist who also writes science fiction. Their most recent novel is The Terraformers, and in June you can look forward to their nonfiction book, Stories Are Weapons: Psychological Warfare and the American Mind.

    They both co-host the podcast, 'Our Opinions Are Correct', which explores how science fiction is relevant to real life and our present society.

    Also, some fun news: Emily and Alex are writing a book! Look forward (in spring 2025) to The AI Con, a narrative takedown of the AI bubble and its megaphone-wielding boosters that exposes how tech’s greedy prophets aim to reap windfall profits from the promise of replacing workers with machines.

    Watch the video of this episode on PeerTube.

    References:

    International declaration on "Responsible Military Use of Artificial Intelligence and Autonomy" provides "a normative framework addressing the use of these capabilities in the military domain."

    DARPA's 'ASIMOV' program to "objectively and quantitatively measure the ethical difficulty of future autonomy use-cases...within the context of military operational values."
    Short version
    Long version (pdf download)

    Fresh AI Hell:

    "I think we will stop publishing books, but instead publish “thunks”, which are nuggets of thought that can interact with the “reader” in a dynamic and multimedia way."

    AI generated illustrations in a scientific paper -- rat balls edition.



    Episode 26: Universities Anxiously Buy in to the Hype (feat. Chris Gilliard), February 5 2024


    Just Tech Fellow Dr. Chris Gilliard aka "Hypervisible" joins Emily and Alex to talk about the wave of universities adopting AI-driven educational technologies, and the lack of protections they offer students in terms of data privacy or even emotional safety.

    References:


    Inside Higher Ed: Arizona State Joins ChatGPT in First Higher Ed Partnership

    ASU press release version: New Collaboration with OpenAI Charts the Future of AI in Higher Education

    MLive: Your Classmate Could Be an AI Student at this Michigan University

    Chris Gilliard: How Ed Tech Is Exploiting Students


    Fresh AI Hell:


    Various: “AI learns just like a kid”
    Infants' gaze teaches AI the nuances of language acquisition
    Similar from NeuroscienceNews

    Politico: Psychologist apparently happy with fake version of himself

    WSJ: Employers Are Offering a New Worker Benefit: Wellness Chatbots

    NPR: Artificial intelligence can find your location in photos, worrying privacy expert

    Palate cleanser: Goodbye to NYC's useless robocop.




    Episode 25: An LLM Says LLMs Can Do Your Job, January 22 2024


    Is ChatGPT really going to take your job? Emily and Alex unpack two hype-tastic papers that make implausible claims about the number of workforce tasks LLMs might make cheaper, faster or easier. And why bad methodology may still trick companies into trying to replace human workers with mathy-math.

    Visit us on PeerTube for the video of this conversation.

    References:

    OpenAI: GPTs are GPTs
    Goldman Sachs: The Potentially Large Effects of Artificial Intelligence on Economic Growth

    FYI: Over the last 60 years, automation has totally eliminated just one US occupation.


    Fresh AI Hell:

    Microsoft adding a dedicated "AI" key to PC keyboards.

    The AI-led enshittification at Duolingo

    University of Washington Provost highlighting “AI”
    “Using ChatGPT, My AI eBook Creation Pro helps you write an entire e-book with just three clicks -- no writing or technical experience required.”
    "Can you add artificial intelligence to the hydraulics?"



    Episode 24: AI Won't Solve Structural Inequality (feat. Kerry McInerney & Eleanor Drage), January 8 2024


    New year, same Bullshit Mountain. Alex and Emily are joined by feminist technosolutionism critics Eleanor Drage and Kerry McInerney to tear down the ways AI is proposed as a solution to structural inequality, including racism, ableism, and sexism -- and why this hype can occlude the need for more meaningful changes in institutions.

    Dr. Eleanor Drage is a Senior Research Fellow at the Leverhulme Centre for the Future of Intelligence. Dr. Kerry McInerney is a Research Fellow at the Leverhulme Centre for the Future of Intelligence and a Research Fellow at the AI Now Institute. Together they host The Good Robot, a podcast about gender, feminism, and whether technology can be "good" in either outcomes or processes.

    Watch the video version of this episode on PeerTube.

    References:

    HireVue promo: How Innovative Hiring Technology Nurtures Diversity, Equity, and Inclusion

    Algorithm Watch: The [German Federal Asylum Agency]'s controversial dialect recognition software: new languages and an EU pilot project

    Want to see how AI might be processing video of your face during a job interview? Play with React App, a tool that Eleanor helped develop to critique AI-powered video interview tools and the 'personality insights' they offer.

    Philosophy & Technology: Does AI Debias Recruitment? Race, Gender, and AI’s “Eradication of Difference” (Drage & McInerney, 2022)

    Communication and Critical/Cultural Studies: Copies without an original: the performativity of biometric bordering technologies (Drage & Frabetti, 2023)

    Fresh AI Hell

    Internet of Shit 2.0: a "smart" bidet

    Fake AI “students” enrolled at Michigan University

    Synthetic images destroy online crochet groups

    “AI” for teacher performance feedback

    Palate cleanser: “Stochastic parrot” is the American Dialect Society’s AI-related word of the year for 2023!



    Episode 23: AI Hell Freezes Over, December 22 2023


    AI Hell has frozen over for a single hour. Alex and Emily visit all seven circles in a tour of the worst in bite-sized BS.

    References:

    Pentagon moving toward letting AI weapons autonomously kill humans

    NYC Mayor uses AI to make robocalls in languages he doesn’t speak

    University of Michigan investing in OpenAI

    Tesla: claims of “full self-driving” are free speech

    LLMs may not "understand" output

    'Maths-ticated' data

    LLMs can’t analyze an SEC filing

    How GPT-4 can be used to create fake datasets

    Paper thanking GPT-4 concludes LLMs are good for science

    Will AI Improve Healthcare? Consumers Think So

    US struggling to regulate AI in healthcare

    Andrew Ng's low p(doom)

    Presenting the “Off-Grid AGI Safety Facility”

    Chess is in the training data

    DropBox files now shared with OpenAI

    Underline.io and ‘commercial exploitation’

    Axel Springer, OpenAI strike "real-time news" deal

    Adobe Stock selling AI-generated images of Israel-Hamas conflict

    Sports Illustrated Published Articles by AI Writers

    Cruise confirms robotaxis rely on human assistance every 4-5 miles

    Underage workers training AI, exposed to traumatic content

    Prisoners training AI in Finland

    ChatGPT gives better output in response to emotional language
    - An explanation for bad AI journalism

    UK judges now permitted to use ChatGPT in legal rulings.

    Michael Cohen's attorney apparently used generative AI in court petition

    Brazilian city enacts ordinance secretly written by ChatGPT

    The lawyers getting fired for using ChatGPT

    Using sequences of life-events to predict human lives

    Your palate cleanser: Is my toddler a stochastic parrot?



    Episode 22: Congressional 'AI' Hearings Say More about Lawmakers (feat. Justin Hendrix), December 18 2023


    Congress spent 2023 busy with hearings to investigate the capabilities, risks and potential uses of large language models and other 'artificial intelligence' systems. Alex and Emily, plus journalist Justin Hendrix, talk about the limitations of these hearings, the alarmist fixation on so-called 'p(doom)' and overdue laws on data privacy.

    Justin Hendrix is editor of the Tech Policy Press.


    References:

    TPP tracker for the US Senate 'AI Insight Forum' hearings

    Balancing Knowledge and Governance: Foundations for Effective Risk Management of AI (featuring Emily)

    Emily's opening remarks at virtual roundtable on AI
    Senate hearing addressing national security implications of AI
    Video: Rep. Nancy Mace opens hearing with ChatGPT-generated statement.
    Brennan Center report on Department of Homeland Security: Overdue Scrutiny for Watch Listing and Risk Prediction
    TPP: Senate Homeland Security Committee Considers Philosophy of AI
    Alex & Emily's appearance on the Tech Policy Press Podcast

    Fresh AI Hell:

    Asylum seekers vs AI-powered translation apps

    UK officials use AI to decide on issues from benefits to marriage licenses

    Prior guest Dr. Sarah Myers West testifying on AI concentration



    Episode 21: The True Meaning of 'Open Source' (feat. Sarah West and Andreas Liesenfeld), November 20 2023


    Researchers Sarah West and Andreas Liesenfeld join Alex and Emily to examine what software companies really mean when they say their work is 'open source,' and call for greater transparency.

    This episode was recorded on November 20, 2023.

    Dr. Sarah West is the managing director of the AI Now Institute. Her award-winning research and writing blends social science, policy, and historical methods to address the intersection of technology, labor, antitrust, and platform accountability. And she’s the author of the forthcoming book, "Tracing Code."

    Dr. Andreas Liesenfeld is an assistant professor in both the Centre for Language Studies and the Department of Language and Communication at Radboud University in the Netherlands. He’s a co-author on research from this summer critically examining the true “open source” nature of models like LLaMA and ChatGPT.


    References:

    Yann LeCun testifies on 'open source' work at Meta

    Meta launches LLaMA 2

    Stanford Human-Centered AI's new transparency index

    Opening up ChatGPT (Andreas Liesenfeld's work)

    Fresh AI Hell:

    Sam Altman out at OpenAI

    The Verge: Meta disbands their Responsible AI team

    Ars Technica: Lawsuit claims AI with 90 percent error rate forces elderly out of rehab, nursing homes

    Call-out of Stability and others' use of “fair use” in AI-generated art

    A fawning profile of OpenAI's Ilya Sutskever



    Episode 20: Let's Do the Time Warp! (to the "Founding" of "Artificial Intelligence"), November 6 2023


    Emily and Alex time travel back to a conference of men who gathered at Dartmouth College in the summer of 1956 to examine problems relating to computation and "thinking machines," an event commonly mythologized as the founding of the field of artificial intelligence. But our crack team of AI hype detectives is on the case with a close reading of the grant proposal that started it all.

    This episode was recorded on November 6, 2023. Watch the video version on PeerTube.

    References:

    "A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence" (1955)

    Re: methodological individualism, "The Role of General Theory in Comparative-historical Sociology," American Journal of Sociology, 1991


    Fresh AI Hell:

    Silly made-up graph about “intelligence” of AI vs. “intelligence” of AI criticism

    How AI is perpetuating racism and other bias against Palestinians:
    The UN hired an AI company with "realistic virtual simulations" of Israel and Palestine
    WhatsApp's AI sticker generator is feeding users images of Palestinian children holding guns
    The Guardian on the same issue
    Instagram 'Sincerely Apologizes' For Inserting 'Terrorist' Into Palestinian Bio Translations

    Palate cleanser: An AI-powered smoothie shop shut down almost immediately after opening.

    OpenAI chief scientist: Humans could become 'part AI' in the future

    A Brief History of Intelligence: Why the evolution of the brain holds the key to the future of AI.

    AI-centered 'monastic academy': “MAPLE is a community of practitioners exploring the intersection of AI and wisdom.”




    Episode 18: Rumors of Artificial General Intelligence Have Been Greatly Exaggerated, October 23 2023


    Emily and Alex read through Google vice president Blaise Aguera y Arcas' recent proclamation that "artificial general intelligence is already here." Why this claim is a maze of hype and moving goalposts.

    References:

    Noema Magazine: "Artificial General Intelligence Is Already Here." 

    "AI and the Everything in the Whole Wide World Benchmark" 

    "Targeting the Benchmark: On Methodology and Current Natural Language Processing Research"

    "Recoding Gender: Women's Changing Participation in Computing"

    "The Computer Boys Take Over: Computers, Programmers, and the Politics of Technical Expertise"

    "Is chess the drosophila of artificial intelligence? A social history of an algorithm" 

    "The logic of domains"

    "Reckoning and Judgment"


    Fresh AI Hell:

    Using AI to meet "diversity goals" in modeling
    AI ushering in a "post-plagiarism" era in writing

    "Wildly effective and dirt cheap AI therapy."

    Applying AI to "improve diagnosis for patients with rare diseases."

    Using LLMs in scientific research

    Health insurance company Cigna using AI to deny medical claims.

    AI for your wearable-based workout




    © 2024 Podcastworld. All rights reserved


    For any inquiries, please email us at hello@podcastworld.io