
    Podcast Summary

    • Bridging the divide between immediate and long-term AI risks: Urgent collaboration and dialogue among those with different perspectives are needed to address the interconnected risks and harms of AI, from immediate to long-term, and to recognize their impact on individuals.

      The concerns surrounding the risks and harms of AI should not be viewed as a divide or schism between immediate and longer-term risks, but rather as a spectrum of interconnected issues. The urgency of addressing all these risks requires collaboration and dialogue among those with different perspectives. A compelling example of this was when Dr. Joy Buolamwini shared the story of Robert Williams, a man falsely identified by facial recognition technology and arrested in front of his daughters, to humanize the immediate harms of AI and spark a conversation about its impact on individuals in the criminal legal system. This story, among others, helped shape the conversation during a meeting with President Biden, emphasizing the importance of recognizing the real people affected by AI and the need to address its biases and harms.

    • AI systems can perpetuate biases based on factors like race, gender, age, and ability: AI systems learn from biased data sets, leading to skewed results and a false sense of progress, particularly in recognizing people of color and women.

      AI systems, including facial recognition technologies, can embed and perpetuate biases based on factors like race, gender, age, and ability. This happens because these systems learn from large data sets that may themselves be biased, leading to skewed results and a false sense of progress. For instance, facial recognition systems have been found to perform poorly at recognizing people of color and women. The issue is not limited to facial recognition; it is a systemic problem affecting many AI models. No one is immune, as biased AI can be found in settings as varied as schools and hospitals. As the use of AI continues to expand, this concern has gained more attention in recent years, and it is crucial to strive for more inclusive, unbiased data sets and models.
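      To make this concrete, here is a minimal sketch, in the spirit of Dr. Buolamwini's audit work, of the disaggregated evaluation that exposes such skew: instead of reporting one aggregate accuracy, performance is broken out per demographic subgroup. The data, group labels, and function name below are hypothetical, not from the episode.

      ```python
      # A minimal, hypothetical sketch of a disaggregated accuracy audit:
      # an aggregate score can look acceptable while one subgroup is
      # served far worse -- the pattern described in the summary above.
      from collections import defaultdict

      def accuracy_by_group(predictions, labels, groups):
          """Return overall accuracy and per-subgroup accuracy."""
          correct, total = defaultdict(int), defaultdict(int)
          for pred, label, group in zip(predictions, labels, groups):
              total[group] += 1
              if pred == label:
                  correct[group] += 1
          overall = sum(correct.values()) / sum(total.values())
          per_group = {g: correct[g] / total[g] for g in total}
          return overall, per_group

      # Toy audit data (entirely invented for illustration).
      preds  = ["match", "match", "no_match", "match", "no_match", "match"]
      truth  = ["match", "match", "match",    "match", "match",    "match"]
      groups = ["A",     "A",     "B",        "A",     "B",        "B"]

      overall, per_group = accuracy_by_group(preds, truth, groups)
      print(f"overall: {overall:.0%}")             # overall: 67%
      for g in sorted(per_group):
          print(f"group {g}: {per_group[g]:.0%}")  # A: 100%, B: 33%
      ```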

    • Understanding the difference between AGI and superintelligence: AGI refers to AI systems capable of performing economic work like humans, while superintelligence implies sentient systems with independent thought and emotion. We should acknowledge the power of AI and have clear conversations about potential harms and ethical implications.

      While we are making strides in developing AI models for various applications, including cancer detection and heart disease prediction, it's essential to be precise about the type and capabilities of AI we're discussing. AGI, or Artificial General Intelligence, refers to systems that can perform economic work similar to a human being. However, it's crucial not to confuse AGI with superintelligence, which implies sentient systems capable of independent thought and emotion, a concept some experts believe we won't achieve in the near future. It's vital to acknowledge the power of AI systems and the potential harm they can cause if misaligned with human values. Misalignment can manifest in various ways, from discrimination and hate speech to more harmful actions. Therefore, it's crucial to have clear and open conversations about the potential harms and ethical implications of AI, rather than using softer terms like "alignment" that might obscure the issues.

    • Cautionary Tale of AI Overhyping and Corporate Capture: Recognize potential harm of overhyping AI and role of incentives, remain vigilant, and involve all stakeholders in the conversation to prevent corporate capture of regulatory processes.

      While specific language can help guide the implementation of guardrails, it's crucial to recognize the potential harm of overhyping AI and the role incentives play in accelerating its deployment before underlying issues are addressed. Social media serves as a cautionary tale: while some companies express concern about AI risks, they continue to rush implementation due to profit incentives. This is a contradiction, and without proper regulation and guardrails in place, there's a risk of corporate capture of regulatory and legislative processes. As a computer scientist, one may be tempted to approach these problems as purely technical issues, but many of them are design problems requiring a multifaceted approach, including policy and advocacy. We must remain vigilant and ensure that all stakeholders are involved in the conversation, but corporations should not hold the pen of legislation.

    • Institutions' Actions vs. Stated Concerns for Tech Risks: Despite stated concerns, institutions may prioritize control and profit over safety, leading to a misalignment between intentions and actions. Regulation could further concentrate power, requiring careful consideration.

      While there are valid concerns about the potential risks and negative impacts of technology, particularly AI, on individuals and society, the actions of the institutions creating these technologies may not align with their stated concerns. The discussion highlighted the role of incentives and power dynamics, with companies potentially prioritizing control and profit over safety. However, there is disagreement over whether companies genuinely underestimate the risks or are sincerely concerned but feel helpless. Regulation is proposed as a solution, but it could also lead to further concentration of power. It's essential to recognize the difference between the intentions and actions of individuals within these institutions and the institutions themselves. The lack of alignment between stated concerns and actions raises questions about the sincerity and commitment of these institutions to address the risks associated with their technologies.

    • Profit drives AI development despite risks: Companies prioritize short-term gains over long-term societal impacts when developing and releasing advanced AI technologies, creating potentially serious risks.

      The profit motive of for-profit companies plays a significant role in the development and release of advanced AI technologies, even when they pose potential risks. The argument for building these technologies responsibly and safely can be outweighed by short-term gains and market dynamics, but the long-term societal impacts and potential dangers should not be ignored. The choices companies make, such as releasing models ahead of schedule or making them open source, can have serious consequences, particularly during sensitive periods like elections. And despite concerns about other nations developing AI technologies, these companies are not tied to individual nations and may prioritize market dominance above national interests. The complexities of these issues require careful consideration and a balanced approach.

    • The need for regulations in AI development: Governments and individuals must push for regulations and long-term thinking to ensure safe and ethical AI development and implementation, balancing innovation with societal well-being.

      The race for market dominance in artificial intelligence (AI) has intensified with the rapid public release and adoption of powerful AI systems like ChatGPT. This has created a "Wild, Wild West" scenario in which innovation and guardrails are perceived as opposing forces. However, as Professor Virginia Dignum pointed out, AI today is like an unchecked car on the road, driven by an unlicensed driver. It is therefore crucial for governments and individuals to push for regulations and long-term thinking to ensure the safe and ethical development and implementation of AI. The current short-term focus of tech companies and governments on quick gains and market dominance risks compromising societal well-being and overlooking the potential harms of AI. Preventing this will require a globally coordinated effort to balance innovation with regulation, and stakeholders willing to make sacrifices for the greater good.

    • Considering Ethical Implications of Open Source in AI and Data: Ensure consent and permission when dealing with data in open source AI projects, prevent potential harms by vetting models built on stolen or unconsented data.

      While open source has the power to democratize access to tools and resources, it's crucial to consider the ethical implications, particularly in the context of AI and data. Dr. Joy Buolamwini, who shared her experiences of benefiting from open source, emphasized the importance of ensuring consent and permission when dealing with data. She raised concerns about the open sourcing of models built on stolen or unconsented data, which could lead to chains of bias and discrimination. She urged caution and necessary vetting to prevent potential harms. In the age of AI, progress will be defined not only by what we say yes to but also by what we say no to, and it's essential to approach open source with this mindset.
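      As a concrete, entirely hypothetical illustration of the vetting Dr. Buolamwini calls for, the sketch below shows one way a data pipeline might exclude records lacking documented consent or a permissive license before training. The field names and license list are assumptions for illustration, not anything described in the episode.

      ```python
      # Hypothetical consent vetting for a training dataset: records without
      # documented subject consent or a permissive license are excluded --
      # the "what we say no to" side of progress described above.
      ALLOWED_LICENSES = {"cc0", "cc-by", "consented-research-use"}

      def vet_records(records):
          """Split records into usable and excluded sets by consent metadata."""
          usable, excluded = [], []
          for rec in records:
              has_consent = rec.get("subject_consent") is True
              has_license = rec.get("license", "").lower() in ALLOWED_LICENSES
              (usable if has_consent or has_license else excluded).append(rec)
          return usable, excluded

      records = [
          {"id": 1, "subject_consent": True},  # explicit consent: kept
          {"id": 2, "license": "CC-BY"},       # permissive license: kept
          {"id": 3},                           # scraped, no consent: excluded
      ]
      usable, excluded = vet_records(records)
      print([r["id"] for r in usable])    # [1, 2]
      print([r["id"] for r in excluded])  # [3]
      ```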

    • Balancing Open Sourcing with Ethics in AI: The benefits of open sourcing AI must be balanced with ethical considerations and regulations to ensure safe and responsible development.

      The concept of open sourcing in the tech industry, particularly in AI, is a complex issue with ethical implications. While the idea of making tools accessible to many minds to find robust solutions is appealing, the lack of regulations and ethical considerations can lead to potential harm. The speaker shares the ethical dilemmas she faced during her computer vision research as a grad student, when an exemption allowed her to use people's faces without additional ethical steps. As the value of datasets increases, however, the need for regulations and ethical considerations grows. The speaker expresses concern over the release of AI models like Llama 2, believing it is too soon in the development of AI to bypass safety checks and regulations. She also notes the schism between those focusing on AI bias, discrimination, and ethics and those focusing on AI safety, and questions whether this division is necessary. Overall, the speaker emphasizes the importance of balancing the benefits of open sourcing with ethical considerations and regulations to ensure the safe and responsible development of AI.

    • Addressing Immediate and Emerging Harms of AI: Urgent action is needed to address real-world harms of AI, such as bias and discrimination, while also considering long-term risks and power dynamics in the debate.

      While there may be disagreements and varying levels of concern within the AI safety community regarding the potential risks and harms of artificial intelligence, it's crucial to prioritize addressing immediate and emerging harms alongside longer-term considerations. Framing discussions in terms of a schism between different viewpoints can be misleading and distract from the urgent need to address real-world issues, such as bias and discrimination, that are impacting people's lives today. It's essential to consider power dynamics and whose narratives are being served in debates about AI risks, and to focus on helping those who are currently being harmed by AI systems rather than solely focusing on hypothetical future risks. By prioritizing immediate action, we have an opportunity to mitigate harm and improve societal well-being.

    • Balancing progress and safety in AI development: We must prioritize building safety checks, regulations, and ethical considerations to harness the benefits of AI while mitigating potential risks.

      While it's important to consider and address potential risks associated with advanced technologies like AI, it's crucial not to let fear or doomerism overshadow progress and the implementation of safety measures. Humanity as a whole can benefit from these technologies if we prioritize building safety checks, regulations, and ethical considerations. As Dr. Joy Buolamwini emphasized, we must remember that we are more than just neural nets and data, and our humanity should be a guiding principle in the development and deployment of AI. Her poem "Unstable Desire" beautifully encapsulates the themes of this conversation, reminding us to be mindful of the potential consequences of our actions and to strive for a future where technology enhances and complements our humanity rather than replacing it. The Center for Humane Technology, a non-profit organization, is dedicated to promoting a humane approach to technology and innovation, and its podcast, "Your Undivided Attention," is an excellent resource for exploring these important issues.

    Recent Episodes from Your Undivided Attention

    Why Are Migrants Becoming AI Test Subjects? With Petra Molnar

    Climate change, political instability, hunger. These are just some of the forces behind an unprecedented refugee crisis that’s expected to include over a billion people by 2050. In response to this growing crisis, wealthy governments like the US and the EU are employing novel AI and surveillance technologies to slow the influx of migrants at their borders. But will this rollout stop at the border?

    In this episode, Tristan and Aza sit down with Petra Molnar to discuss how borders have become a proving ground for the sharpest edges of technology, and especially AI. Petra is an immigration lawyer and co-creator of the Migration and Technology Monitor. Her new book is “The Walls Have Eyes: Surviving Migration in the Age of Artificial Intelligence.”

    RECOMMENDED MEDIA

    The Walls Have Eyes: Surviving Migration in the Age of Artificial Intelligence

    Petra’s newly published book on the rollout of high-risk tech at the border.

    Bots at the Gate

    A report co-authored by Petra about Canada’s use of AI technology in their immigration process.

    Technological Testing Grounds

    A report authored by Petra about the use of experimental technology in EU border enforcement.

    Startup Pitched Tasing Migrants from Drones, Video Reveals

    An article from The Intercept, containing the demo for Brinc’s taser drone pilot program.

    The UNHCR

    Information about the global refugee crisis from the UN.

    RECOMMENDED YUA EPISODES

    War is a Laboratory for AI with Paul Scharre

    No One is Immune to AI Harms with Dr. Joy Buolamwini

    Can We Govern AI? With Marietje Schaake

    CLARIFICATION:

    The iBorderCtrl project referenced in this episode was a pilot project that was discontinued in 2019

    Former OpenAI Engineer William Saunders on Silence, Safety, and the Right to Warn

    This week, a group of current and former employees from OpenAI and Google DeepMind penned an open letter accusing the industry’s leading companies of prioritizing profits over safety. This comes after a spate of high-profile departures from OpenAI, including co-founder Ilya Sutskever and senior researcher Jan Leike, as well as reports that OpenAI has gone to great lengths to silence would-be whistleblowers.

    The writers of the open letter argue that researchers have a “right to warn” the public about AI risks and laid out a series of principles that would protect that right. In this episode, we sit down with one of those writers: William Saunders, who left his job as a research engineer at OpenAI in February. William is now breaking the silence on what he saw at OpenAI that compelled him to leave the company and to put his name to this letter. 

    RECOMMENDED MEDIA 

    The Right to Warn Open Letter 

    My Perspective On "A Right to Warn about Advanced Artificial Intelligence": A follow-up from William about the letter

     Leaked OpenAI documents reveal aggressive tactics toward former employees: An investigation by Vox into OpenAI’s policy of non-disparagement.

    RECOMMENDED YUA EPISODES

    1. A First Step Toward AI Regulation with Tom Wheeler 
    2. Spotlight on AI: What Would It Take For This to Go Well? 
    3. Big Food, Big Tech and Big AI with Michael Moss 
    4. Can We Govern AI? with Marietje Schaake

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

    War is a Laboratory for AI with Paul Scharre

    Right now, militaries around the globe are investing heavily in the use of AI weapons and drones. From Ukraine to Gaza, weapons systems with increasing levels of autonomy are being used to kill people and destroy infrastructure, and the development of fully autonomous weapons shows little sign of slowing down. What does this mean for the future of warfare? What safeguards can we put up around these systems? And is this runaway trend toward autonomous warfare inevitable, or will nations come together and choose a different path? In this episode, Tristan and Daniel sit down with Paul Scharre to try to answer some of these questions. Paul is a former Army Ranger, the author of two books on autonomous weapons, and he helped the Department of Defense write much of its policy on the use of AI in weaponry.

    RECOMMENDED MEDIA

    Four Battlegrounds: Power in the Age of Artificial Intelligence: Paul’s book on the future of AI in war, which came out in 2023.

    Army of None: Autonomous Weapons and the Future of War: Paul’s 2018 book documenting and predicting the rise of autonomous and semi-autonomous weapons as part of modern warfare.

    The Perilous Coming Age of AI Warfare: How to Limit the Threat of Autonomous Warfare: Paul’s article in Foreign Affairs based on his recent trip to the battlefield in Ukraine.

    The night the world almost almost ended: A BBC documentary about Stanislav Petrov’s decision not to start nuclear war.

    AlphaDogfight Trials Final Event: The full simulated dogfight between an AI and human pilot. The AI pilot swept, 5-0.

    RECOMMENDED YUA EPISODES

    1. The AI ‘Race’: China vs. the US with Jeffrey Ding and Karen Hao
    2. Can We Govern AI? with Marietje Schaake
    3. Big Food, Big Tech and Big AI with Michael Moss
    4. The Invisible Cyber-War with Nicole Perlroth

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

    AI and Jobs: How to Make AI Work With Us, Not Against Us With Daron Acemoglu

    Tech companies say that AI will lead to massive economic productivity gains. But as we know from the first digital revolution, that’s not what happened. Can we do better this time around?

    RECOMMENDED MEDIA

    Power and Progress by Daron Acemoglu and Simon Johnson Professor Acemoglu co-authored a bold reinterpretation of economics and history that will fundamentally change how you see the world

    Can we Have Pro-Worker AI? Professor Acemoglu co-authored this paper about redirecting AI development onto the human-complementary path

    Rethinking Capitalism: In Conversation with Daron Acemoglu The Wheeler Institute for Business and Development hosted Professor Acemoglu to examine how technology affects the distribution and growth of resources while being shaped by economic and social incentives

    RECOMMENDED YUA EPISODES

    1. The Three Rules of Humane Tech
    2. The Tech We Need for 21st Century Democracy
    3. Can We Govern AI?
    4. An Alternative to Silicon Valley Unicorns

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

    Jonathan Haidt On How to Solve the Teen Mental Health Crisis

    Suicides. Self harm. Depression and anxiety. The toll of a social media-addicted, phone-based childhood has never been more stark. It can be easy for teens, parents and schools to feel like they’re trapped by it all. But in this conversation with Tristan Harris, author and social psychologist Jonathan Haidt makes the case that the conditions that led to today’s teenage mental health crisis can be turned around – with specific, achievable actions we all can take starting today.

    This episode was recorded live at the San Francisco Commonwealth Club.  

    Correction: Tristan mentions that 40 Attorneys General have filed a lawsuit against Meta for allegedly fostering addiction among children and teens through their products. The actual number is 42 Attorneys General taking legal action against Meta.

    Clarification: Jonathan refers to the Wait Until 8th pledge. By signing the pledge, a parent promises not to give their child a smartphone until at least the end of 8th grade. The pledge becomes active once at least ten other families from their child’s grade pledge the same.

    Chips Are the Future of AI. They’re Also Incredibly Vulnerable. With Chris Miller

    Beneath the race to train and release more powerful AI models lies another race: a race by companies and nation-states to secure the hardware to make sure they win AI supremacy. 

    Correction: The latest available Nvidia chip is the Hopper H100 GPU, which has 80 billion transistors. Since the first commercially available chip had four transistors, the Hopper actually has 20 billion times that number. Nvidia recently announced the Blackwell, which boasts 208 billion transistors - but it won’t ship until later this year.

    RECOMMENDED MEDIA 

    Chip War: The Fight For the World’s Most Critical Technology by Chris Miller

    To make sense of the current state of politics, economics, and technology, we must first understand the vital role played by chips

    Gordon Moore Biography & Facts

    Gordon Moore, the Intel co-founder behind Moore's Law, passed away in March of 2023

    AI’s most popular chipmaker Nvidia is trying to use AI to design chips faster

    Nvidia's GPUs are in high demand - and the company is using AI to accelerate chip production

    RECOMMENDED YUA EPISODES

    Future-proofing Democracy In the Age of AI with Audrey Tang

    How Will AI Affect the 2024 Elections? with Renee DiResta and Carl Miller

    The AI ‘Race’: China vs. the US with Jeffrey Ding and Karen Hao

    Protecting Our Freedom of Thought with Nita Farahany

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

    Future-proofing Democracy In the Age of AI with Audrey Tang

    What does a functioning democracy look like in the age of artificial intelligence? Could AI even be used to help a democracy flourish? Just in time for election season, Taiwan’s Minister of Digital Affairs Audrey Tang returns to the podcast to discuss healthy information ecosystems, resilience to cyberattacks, how to “prebunk” deepfakes, and more. 

    RECOMMENDED MEDIA 

    Testing Theories of American Politics: Elites, Interest Groups, and Average Citizens by Martin Gilens and Benjamin I. Page

    This academic paper addresses tough questions for Americans: Who governs? Who really rules? 

    Recursive Public

    Recursive Public is an experiment in identifying areas of consensus and disagreement among the international AI community, policymakers, and the general public on key questions of governance

    A Strong Democracy is a Digital Democracy

    Audrey Tang’s 2019 op-ed for The New York Times

    The Frontiers of Digital Democracy

    Nathan Gardels interviews Audrey Tang in Noema

    RECOMMENDED YUA EPISODES 

    Digital Democracy is Within Reach with Audrey Tang

    The Tech We Need for 21st Century Democracy with Divya Siddarth

    How Will AI Affect the 2024 Elections? with Renee DiResta and Carl Miller

    The AI Dilemma

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

    U.S. Senators Grilled Social Media CEOs. Will Anything Change?

    Was it political progress, or just political theater? The recent Senate hearing with social media CEOs led to astonishing moments — including Mark Zuckerberg’s public apology to families who lost children following social media abuse. Our panel of experts, including Facebook whistleblower Frances Haugen, untangles the explosive hearing, and offers a look ahead, as well. How will this hearing impact protocol within these social media companies? How will it impact legislation? In short: will anything change?

    Clarification: Julie says that shortly after the hearing, Meta’s stock price had the biggest increase of any company in the stock market’s history. More precisely, it was the biggest one-day gain in market value by any company in Wall Street history.

    Correction: Frances says it takes Snap three or four minutes to take down exploitative content. In Snap's most recent transparency report, they list six minutes as the median turnaround time to remove exploitative content.

    RECOMMENDED MEDIA 

    Get Media Savvy

    Founded by Julie Scelfo, Get Media Savvy is a non-profit initiative working to establish a healthy media environment for kids and families

    The Power of One by Frances Haugen

    The inside story of Frances Haugen’s quest to bring transparency and accountability to Big Tech

    RECOMMENDED YUA EPISODES

    Real Social Media Solutions, Now with Frances Haugen

    A Conversation with Facebook Whistleblower Frances Haugen

    Are the Kids Alright?

    Social Media Victims Lawyer Up with Laura Marquez-Garrett

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

    Taylor Swift is Not Alone: The Deepfake Nightmare Sweeping the Internet

    Over the past year, a tsunami of apps that digitally strip the clothes off real people has hit the market. Now anyone can create fake non-consensual sexual images in just a few clicks. With cases proliferating in high schools, guest presenter Laurie Segall talks to legal scholar Mary Anne Franks about the AI-enabled rise in deepfake porn and what we can do about it.

    Correction: Laurie refers to the app 'Clothes Off.' It’s actually named Clothoff. There are many clothes remover apps in this category.

    RECOMMENDED MEDIA 

    Revenge Porn: The Cyberwar Against Women

    In a five-part digital series, Laurie Segall uncovers a disturbing internet trend: the rise of revenge porn

    The Cult of the Constitution

    In this provocative book, Mary Anne Franks examines the thin line between constitutional fidelity and constitutional fundamentalism

    Fake Explicit Taylor Swift Images Swamp Social Media

    Calls to protect women and crack down on the platforms and technology that spread such images have been reignited

    RECOMMENDED YUA EPISODES 

    No One is Immune to AI Harms

    Esther Perel on Artificial Intimacy

    Social Media Victims Lawyer Up

    The AI Dilemma

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

    Can Myth Teach Us Anything About the Race to Build Artificial General Intelligence? With Josh Schrei

    We usually talk about tech in terms of economics or policy, but the casual language tech leaders often use to describe AI — summoning an inanimate force with the powers of code — sounds more... magical. So, what can myth and magic teach us about the AI race? Josh Schrei, mythologist and host of The Emerald podcast, says that foundational cultural tales like "The Sorcerer's Apprentice" or Prometheus teach us the importance of initiation, responsibility, human knowledge, and care. He argues these stories and myths can guide ethical tech development by reminding us what it is to be human.

    Correction: Josh says the first telling of "The Sorcerer’s Apprentice" myth dates back to ancient Egypt, but it actually dates back to ancient Greece.

    RECOMMENDED MEDIA 

    The Emerald podcast

    The Emerald explores the human experience through a vibrant lens of myth, story, and imagination

    Embodied Ethics in The Age of AI

    A five-part course with The Emerald podcast’s Josh Schrei and School of Wise Innovation’s Andrew Dunn

    Nature Nurture: Children Can Become Stewards of Our Delicate Planet

    A U.S. Department of the Interior study found that the average American kid can identify hundreds of corporate logos but not plants and animals

    The New Fire

    AI is revolutionizing the world - here's how democracies can come out on top. This upcoming book was authored by an architect of President Biden's AI executive order

    RECOMMENDED YUA EPISODES 

    How Will AI Affect the 2024 Elections?

    The AI Dilemma

    The Three Rules of Humane Tech

    AI Myths and Misconceptions

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

    Related Episodes

    Superalignment - Turtles all the way Down | Cyber Cognition Podcast with Hutch

    Host: Hutch

    On ITSPmagazine  👉 https://www.itspmagazine.com/itspmagazine-podcast-radio-hosts/hutch

    ______________________

    Episode Sponsors

    Are you interested in sponsoring an ITSPmagazine Channel?

    👉 https://www.itspmagazine.com/sponsor-the-itspmagazine-podcast-network

    ______________________

    Episode Introduction

    In this episode, we will discuss the problem of aligning artificial super-intelligence -- and the recently proposed solution by OpenAI.

    We will begin by discussing the fundamental concepts of artificial super-intelligence and the alignment problem. We will then look at OpenAI's recently proposed solution, the problems associated with this solution, and the benefits of this conversation.

    References

    https://openai.com/blog/introducing-superalignment

    https://www.techrxiv.org/articles/preprint/Administration_of_the_text-based_portions_of_a_general_IQ_test_to_five_different_large_language_models/22645561/1

    https://www.vice.com/en/article/epvgem/the-new-gpt-4-ai-gets-top-marks-in-law-medical-exams-openai-claims

    ______________________

    For more podcast stories from Cyber Cognition Podcast with Hutch, visit: https://www.itspmagazine.com/cyber-cognition-podcast

    Watch the video podcast version on-demand on YouTube: https://www.youtube.com/playlist?list=PLnYu0psdcllS12r9wDntQNB-ykHQ1UC9U

    AdaGPT and Grace Hopper

    Hello and welcome to the latest episode of A Chat with ChatGPT, where we will explore the fascinating history of computer science and AI. In this episode, I'm joined by Ada, a ChatGPT-based podcast cohost. Together, we'll take a deep dive into the life and legacy of Grace Hopper, and explore how her innovations and contributions helped to shape the modern computing landscape. From her invention of the first compiler to her work on the COBOL programming language, Grace Hopper was a true pioneer in the field of computing, and her story is sure to inspire and delight. So sit back, relax, and enjoy the show!

    Learn more about creating your own chatbot at www.synapticlabs.ai/chatbot

    Website: synapticlabs.ai
    Youtube: https://www.youtube.com/@synapticlabs
    Substack: https://professorsynapse.substack.com/

    Who Controls AI? With Sendhil Mullainathan

    The firing, and subsequent rehiring, of OpenAI CEO Sam Altman raises fundamental questions about whose interests are relevant to the development of artificial intelligence and how these interests should be weighed if they hinder innovation. How should we govern innovation, or should we just not govern it at all? Did capitalism "win" in the OpenAI saga?

    Bethany and Luigi sit down with Luigi’s colleague Sendhil Mullainathan, a professor of Computation and Behavioral Science at Chicago Booth. Together, they discuss whether AI is really "intelligent" and whether a profit motive is always bad. In the process, they shed light on what it means to regulate in the collective interest and whether we can escape the demands of capitalism when capital is the very thing that's required for progress.